From patchwork Sun Feb 9 11:10:21 2025
X-Patchwork-Submitter: Eric Woudstra
X-Patchwork-Id: 13966773
From: Eric Woudstra
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Simon Horman, Andrew Lunn, Pablo Neira Ayuso, Jozsef Kadlecsik,
    Jiri Pirko, Ivan Vecera, Roopa Prabhu, Nikolay Aleksandrov,
    Matthias Brugger, AngeloGioacchino Del Regno, Kuniyuki Iwashima,
    Sebastian Andrzej Siewior, Lorenzo Bianconi, Joe Damato,
    Alexander Lobakin, Vladimir Oltean, "Frank Wunderlich", Daniel Golle
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    netfilter-devel@vger.kernel.org, coreteam@netfilter.org,
    bridge@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
    linux-mediatek@lists.infradead.org, Eric Woudstra
Subject: [PATCH v6 net-next 01/14] netfilter: nf_flow_table_offload: Add nf_flow_encap_push() for xmit direct
Date: Sun, 9 Feb 2025 12:10:21 +0100
Message-ID: <20250209111034.241571-2-ericwouds@gmail.com>
In-Reply-To: <20250209111034.241571-1-ericwouds@gmail.com>
References: <20250209111034.241571-1-ericwouds@gmail.com>
X-Mailer: git-send-email 2.47.1

Loosely based on wenxu's patches: "nf_flow_table_offload: offload the
vlan/PPPoE encap in the flowtable". Fixed handling of double vlan and
PPPoE packets, rewriting the patch almost entirely.

After this patch, it is possible to transmit packets in the fastpath with
outgoing encaps, without using vlan and/or pppoe devices. This allows more
kinds of network setups. For example, bridge tagging can be used to egress
vlan-tagged packets through the forward fastpath, and 802.1q-tagged packets
can be passed through a bridge using the bridge fastpath. It also makes the
software fastpath behave more like the hardware-offloaded fastpath, where
encaps are pushed as well.
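As an illustration only (not part of this patch): a minimal userspace C
sketch of the encap headers that the direct-xmit path now ends up putting
on the wire for a vlan + PPPoE egress. The vlan id, PPPoE session id and
payload length below are made-up example values.

/*
 * Illustrative only: build the bytes that sit between the Ethernet
 * source MAC and the IP header for a vlan + PPPoE egress:
 *   802.1Q tag | ethertype 0x8864 | PPPoE session header | PPP proto
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

static size_t push_encaps(uint8_t *buf, uint16_t vlan_id, uint16_t sid,
			  uint16_t payload_len)
{
	size_t off = 0;
	uint16_t v;

	/* 802.1Q tag: TPID 0x8100, then TCI carrying the vlan id */
	v = htons(0x8100);           memcpy(buf + off, &v, 2); off += 2;
	v = htons(vlan_id & 0x0fff); memcpy(buf + off, &v, 2); off += 2;

	/* ethertype of a PPPoE session frame */
	v = htons(0x8864);           memcpy(buf + off, &v, 2); off += 2;

	/* PPPoE session header: ver=1, type=1, code=0, session id, length */
	buf[off++] = 0x11;
	buf[off++] = 0x00;
	v = htons(sid);              memcpy(buf + off, &v, 2); off += 2;
	v = htons(payload_len + 2);  memcpy(buf + off, &v, 2); off += 2; /* + 2 for the PPP proto */

	/* PPP protocol id: 0x0021 = IPv4 */
	v = htons(0x0021);           memcpy(buf + off, &v, 2); off += 2;

	return off;
}

int main(void)
{
	uint8_t buf[32];
	size_t n = push_encaps(buf, 100, 0x1234, 84);

	printf("%zu bytes of encap headers precede the IPv4 header\n", n);
	return 0;
}

In the kernel, the outer 802.1Q tag is carried as the hwaccel tag while the
PPPoE session header is pushed into the packet data by nf_flow_encap_push()
below.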
After applying this patch, info->outdev always equals info->hw_outdev, so
the netfilter code can be further cleaned up by removing:

* hw_outdev from struct nft_forward_info
* out.hw_ifindex from struct nf_flow_route
* out.hw_ifidx from struct flow_offload_tuple

Reviewed-by: Nikolay Aleksandrov
Signed-off-by: Eric Woudstra
---
 net/netfilter/nf_flow_table_ip.c | 96 +++++++++++++++++++++++++++++++-
 net/netfilter/nft_flow_offload.c |  6 +-
 2 files changed, 96 insertions(+), 6 deletions(-)

diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 97c6eb8847a0..b9292eb40907 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -306,6 +306,92 @@ static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
 	return false;
 }
 
+static int nf_flow_vlan_inner_push(struct sk_buff *skb, __be16 proto, u16 id)
+{
+	struct vlan_hdr *vhdr;
+
+	if (skb_cow_head(skb, VLAN_HLEN))
+		return -1;
+
+	__skb_push(skb, VLAN_HLEN);
+	skb_reset_network_header(skb);
+
+	vhdr = (struct vlan_hdr *)(skb->data);
+	vhdr->h_vlan_TCI = htons(id);
+	vhdr->h_vlan_encapsulated_proto = skb->protocol;
+	skb->protocol = proto;
+
+	return 0;
+}
+
+static int nf_flow_ppoe_push(struct sk_buff *skb, u16 id)
+{
+	struct ppp_hdr {
+		struct pppoe_hdr hdr;
+		__be16 proto;
+	} *ph;
+	int data_len = skb->len + 2;
+	__be16 proto;
+
+	if (skb_cow_head(skb, PPPOE_SES_HLEN))
+		return -1;
+
+	if (skb->protocol == htons(ETH_P_IP))
+		proto = htons(PPP_IP);
+	else if (skb->protocol == htons(ETH_P_IPV6))
+		proto = htons(PPP_IPV6);
+	else
+		return -1;
+
+	__skb_push(skb, PPPOE_SES_HLEN);
+	skb_reset_network_header(skb);
+
+	ph = (struct ppp_hdr *)(skb->data);
+	ph->hdr.ver = 1;
+	ph->hdr.type = 1;
+	ph->hdr.code = 0;
+	ph->hdr.sid = htons(id);
+	ph->hdr.length = htons(data_len);
+	ph->proto = proto;
+	skb->protocol = htons(ETH_P_PPP_SES);
+
+	return 0;
+}
+
+static int nf_flow_encap_push(struct sk_buff *skb,
+			      struct flow_offload_tuple_rhash *tuplehash,
+			      unsigned short *type)
+{
+	int i = 0, ret = 0;
+
+	if (!tuplehash->tuple.encap_num)
+		return 0;
+
+	if (tuplehash->tuple.encap[i].proto == htons(ETH_P_8021Q) ||
+	    tuplehash->tuple.encap[i].proto == htons(ETH_P_8021AD)) {
+		__vlan_hwaccel_put_tag(skb, tuplehash->tuple.encap[i].proto,
+				       tuplehash->tuple.encap[i].id);
+		i++;
+		if (i >= tuplehash->tuple.encap_num)
+			return 0;
+	}
+
+	switch (tuplehash->tuple.encap[i].proto) {
+	case htons(ETH_P_8021Q):
+		*type = ETH_P_8021Q;
+		ret = nf_flow_vlan_inner_push(skb,
+					      tuplehash->tuple.encap[i].proto,
+					      tuplehash->tuple.encap[i].id);
+		break;
+	case htons(ETH_P_PPP_SES):
+		*type = ETH_P_PPP_SES;
+		ret = nf_flow_ppoe_push(skb,
+					tuplehash->tuple.encap[i].id);
+		break;
+	}
+	return ret;
+}
+
 static void nf_flow_encap_pop(struct sk_buff *skb,
 			      struct flow_offload_tuple_rhash *tuplehash)
 {
@@ -335,6 +421,7 @@ static void nf_flow_encap_pop(struct sk_buff *skb,
 
 static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
 				       const struct flow_offload_tuple_rhash *tuplehash,
+				       struct flow_offload_tuple_rhash *other_tuplehash,
 				       unsigned short type)
 {
 	struct net_device *outdev;
@@ -343,6 +430,9 @@ static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
 	if (!outdev)
 		return NF_DROP;
 
+	if (nf_flow_encap_push(skb, other_tuplehash, &type) < 0)
+		return NF_DROP;
+
 	skb->dev = outdev;
 	dev_hard_header(skb, skb->dev, type, tuplehash->tuple.out.h_dest,
 			tuplehash->tuple.out.h_source, skb->len);
@@ -464,7 +554,8 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
 		ret = NF_STOLEN;
 		break;
 	case FLOW_OFFLOAD_XMIT_DIRECT:
-		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IP);
+		ret = nf_flow_queue_xmit(state->net, skb, tuplehash,
+					 &flow->tuplehash[!dir], ETH_P_IP);
 		if (ret == NF_DROP)
 			flow_offload_teardown(flow);
 		break;
@@ -761,7 +852,8 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
 		ret = NF_STOLEN;
 		break;
 	case FLOW_OFFLOAD_XMIT_DIRECT:
-		ret = nf_flow_queue_xmit(state->net, skb, tuplehash, ETH_P_IPV6);
+		ret = nf_flow_queue_xmit(state->net, skb, tuplehash,
+					 &flow->tuplehash[!dir], ETH_P_IPV6);
 		if (ret == NF_DROP)
 			flow_offload_teardown(flow);
 		break;
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 46a6d280b09c..b4baee519e18 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -124,13 +124,12 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
 				info->indev = NULL;
 				break;
 			}
-			if (!info->outdev)
-				info->outdev = path->dev;
 			info->encap[info->num_encaps].id = path->encap.id;
 			info->encap[info->num_encaps].proto = path->encap.proto;
 			info->num_encaps++;
 			if (path->type == DEV_PATH_PPPOE)
 				memcpy(info->h_dest, path->encap.h_dest, ETH_ALEN);
+			info->xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
 			break;
 		case DEV_PATH_BRIDGE:
 			if (is_zero_ether_addr(info->h_source))
@@ -158,8 +157,7 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
 			break;
 		}
 	}
-	if (!info->outdev)
-		info->outdev = info->indev;
+	info->outdev = info->indev;
 	info->hw_outdev = info->indev;