From patchwork Tue Nov 19 10:19:06 2024
X-Patchwork-Submitter: Eric Woudstra
X-Patchwork-Id: 13879589
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Eric Woudstra <ericwouds@gmail.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Andrew Lunn, Pablo Neira Ayuso, Jozsef Kadlecsik,
	Jiri Pirko, Ivan Vecera, Roopa Prabhu, Nikolay Aleksandrov,
	Matthias Brugger, AngeloGioacchino Del Regno, David Ahern,
	Sebastian Andrzej Siewior, Lorenzo Bianconi, Joe Damato,
	Alexander Lobakin, Vladimir Oltean, "Frank Wunderlich", Daniel Golle
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	netfilter-devel@vger.kernel.org, coreteam@netfilter.org,
	bridge@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, Eric Woudstra
Subject: [PATCH RFC v2 net-next 14/14] netfilter: nft_flow_offload: Add bridgeflow to nft_flow_offload_eval()
Date: Tue, 19 Nov 2024 11:19:06 +0100
Message-ID: <20241119101906.862680-15-ericwouds@gmail.com>
In-Reply-To: <20241119101906.862680-1-ericwouds@gmail.com>
References: <20241119101906.862680-1-ericwouds@gmail.com>
X-Mailer: git-send-email 2.45.2
X-Mailing-List: netdev@vger.kernel.org

Edit nft_flow_offload_eval() to make it possible to handle a flowtable of
the nft bridge family.

Use nft_flow_offload_bridge_init() to fill the flow tuples. It uses
nft_dev_fill_bridge_path() in each direction.
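As an illustration only (not part of this patch; table, flowtable and
device names are placeholders), a bridge-family flowtable exercising this
code path could be configured roughly along these lines, with lan1/lan2
standing in for the bridge ports attached to the flowtable:

  table bridge filter {
	flowtable ft {
		hook ingress priority 0; devices = { lan1, lan2 };
	}

	chain forward {
		type filter hook forward priority 0; policy accept;
		ip protocol tcp flow add @ft
	}
  }

Forwarded packets matching such a rule then take the new bridge
initialization path below instead of nft_flow_route().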
Signed-off-by: Eric Woudstra <ericwouds@gmail.com>
---
 net/netfilter/nft_flow_offload.c | 144 +++++++++++++++++++++++++++++--
 1 file changed, 139 insertions(+), 5 deletions(-)

diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index ed0e9b499971..b17a3ef79852 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -196,6 +196,131 @@ static bool nft_flowtable_find_dev(const struct net_device *dev,
 	return found;
 }
 
+static int nft_dev_fill_bridge_path(struct flow_offload *flow,
+				    struct nft_flowtable *ft,
+				    const struct nft_pktinfo *pkt,
+				    enum ip_conntrack_dir dir,
+				    const struct net_device *src_dev,
+				    const struct net_device *dst_dev,
+				    unsigned char *src_ha,
+				    unsigned char *dst_ha)
+{
+	struct flow_offload_tuple_rhash *th = flow->tuplehash;
+	struct net_device_path_stack stack;
+	struct net_device_path_ctx ctx = {};
+	struct nft_forward_info info = {};
+	int i, j = 0;
+
+	for (i = th[dir].tuple.encap_num - 1; i >= 0 ; i--) {
+		if (info.num_encaps >= NF_FLOW_TABLE_ENCAP_MAX)
+			return -1;
+
+		if (th[dir].tuple.in_vlan_ingress & BIT(i))
+			continue;
+
+		info.encap[info.num_encaps].id = th[dir].tuple.encap[i].id;
+		info.encap[info.num_encaps].proto = th[dir].tuple.encap[i].proto;
+		info.num_encaps++;
+
+		if (th[dir].tuple.encap[i].proto == htons(ETH_P_PPP_SES))
+			continue;
+
+		if (ctx.num_vlans >= NET_DEVICE_PATH_VLAN_MAX)
+			return -1;
+		ctx.vlan[ctx.num_vlans].id = th[dir].tuple.encap[i].id;
+		ctx.vlan[ctx.num_vlans].proto = th[dir].tuple.encap[i].proto;
+		ctx.num_vlans++;
+	}
+	ctx.dev = src_dev;
+	ether_addr_copy(ctx.daddr, dst_ha);
+
+	if (dev_fill_bridge_path(&ctx, &stack) < 0)
+		return -1;
+
+	nft_dev_path_info(&stack, &info, dst_ha, &ft->data);
+
+	if (!info.indev || info.indev != dst_dev)
+		return -1;
+
+	th[!dir].tuple.iifidx = info.indev->ifindex;
+	for (i = info.num_encaps - 1; i >= 0; i--) {
+		th[!dir].tuple.encap[j].id = info.encap[i].id;
+		th[!dir].tuple.encap[j].proto = info.encap[i].proto;
+		if (info.ingress_vlans & BIT(i))
+			th[!dir].tuple.in_vlan_ingress |= BIT(j);
+		j++;
+	}
+	th[!dir].tuple.encap_num = info.num_encaps;
+
+	th[dir].tuple.mtu = dst_dev->mtu;
+	ether_addr_copy(th[dir].tuple.out.h_source, src_ha);
+	ether_addr_copy(th[dir].tuple.out.h_dest, dst_ha);
+	th[dir].tuple.out.ifidx = info.outdev->ifindex;
+	th[dir].tuple.out.hw_ifidx = info.hw_outdev->ifindex;
+	th[dir].tuple.xmit_type = FLOW_OFFLOAD_XMIT_DIRECT;
+
+	return 0;
+}
+
+static int nft_flow_offload_bridge_init(struct flow_offload *flow,
+					const struct nft_pktinfo *pkt,
+					enum ip_conntrack_dir dir,
+					struct nft_flowtable *ft)
+{
+	struct ethhdr *eth = eth_hdr(pkt->skb);
+	struct flow_offload_tuple *tuple;
+	const struct net_device *out_dev;
+	const struct net_device *in_dev;
+	struct pppoe_hdr *phdr;
+	struct vlan_hdr *vhdr;
+	int err, i = 0;
+
+	in_dev = nft_in(pkt);
+	if (!in_dev || !nft_flowtable_find_dev(in_dev, ft))
+		return -1;
+
+	out_dev = nft_out(pkt);
+	if (!out_dev || !nft_flowtable_find_dev(out_dev, ft))
+		return -1;
+
+	tuple = &flow->tuplehash[!dir].tuple;
+
+	if (skb_vlan_tag_present(pkt->skb)) {
+		tuple->encap[i].id = skb_vlan_tag_get(pkt->skb);
+		tuple->encap[i].proto = pkt->skb->vlan_proto;
+		i++;
+	}
+	switch (pkt->skb->protocol) {
+	case htons(ETH_P_8021Q):
+		vhdr = (struct vlan_hdr *)skb_network_header(pkt->skb);
+		tuple->encap[i].id = ntohs(vhdr->h_vlan_TCI);
+		tuple->encap[i].proto = pkt->skb->protocol;
+		i++;
+		break;
+	case htons(ETH_P_PPP_SES):
+		phdr = (struct pppoe_hdr *)skb_network_header(pkt->skb);
+		tuple->encap[i].id = ntohs(phdr->sid);
+		tuple->encap[i].proto = pkt->skb->protocol;
+		i++;
+		break;
+	}
+	tuple->encap_num = i;
+
+	err = nft_dev_fill_bridge_path(flow, ft, pkt, !dir, out_dev, in_dev,
+				       eth->h_dest, eth->h_source);
+	if (err < 0)
+		return err;
+
+	memset(tuple->encap, 0, sizeof(tuple->encap));
+
+	err = nft_dev_fill_bridge_path(flow, ft, pkt, dir, in_dev, out_dev,
+				       eth->h_source, eth->h_dest);
+	if (err < 0)
+		return err;
+
+	return 0;
+}
+
 static void nft_dev_forward_path(struct nf_flow_route *route,
 				 const struct nf_conn *ct,
 				 enum ip_conntrack_dir dir,
@@ -306,6 +431,7 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
 {
 	struct nft_flow_offload *priv = nft_expr_priv(expr);
 	struct nf_flowtable *flowtable = &priv->flowtable->data;
+	bool routing = (flowtable->type->family != NFPROTO_BRIDGE);
 	struct tcphdr _tcph, *tcph = NULL;
 	struct nf_flow_route route = {};
 	enum ip_conntrack_info ctinfo;
@@ -359,14 +485,20 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
 		goto out;
 
 	dir = CTINFO2DIR(ctinfo);
-	if (nft_flow_route(pkt, ct, &route, dir, priv->flowtable) < 0)
-		goto err_flow_route;
+	if (routing) {
+		if (nft_flow_route(pkt, ct, &route, dir, priv->flowtable) < 0)
+			goto err_flow_route;
+	}
 
 	flow = flow_offload_alloc(ct);
 	if (!flow)
 		goto err_flow_alloc;
 
-	flow_offload_route_init(flow, &route);
+	if (routing)
+		flow_offload_route_init(flow, &route);
+	else
+		if (nft_flow_offload_bridge_init(flow, pkt, dir, priv->flowtable) < 0)
+			goto err_flow_route;
 
 	if (tcph) {
 		ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
@@ -419,8 +551,10 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
 err_flow_add:
 	flow_offload_free(flow);
 err_flow_alloc:
-	dst_release(route.tuple[dir].dst);
-	dst_release(route.tuple[!dir].dst);
+	if (routing) {
+		dst_release(route.tuple[dir].dst);
+		dst_release(route.tuple[!dir].dst);
+	}
 err_flow_route:
 	clear_bit(IPS_OFFLOAD_BIT, &ct->status);
 out: