From patchwork Fri Mar 11 17:18:15 2022
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 12778417
X-Patchwork-Delegate: kuba@kernel.org
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Wojciech Drewek, netdev@vger.kernel.org, anthony.l.nguyen@intel.com, marcin.szycik@linux.intel.com, michal.swiatkowski@linux.intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org
Subject: [PATCH net-next v11 1/7] gtp: Allow to create GTP device without FDs
Date: Fri, 11 Mar 2022 09:18:15 -0800
Message-Id: <20220311171821.3785992-2-anthony.l.nguyen@intel.com>
In-Reply-To: <20220311171821.3785992-1-anthony.l.nguyen@intel.com>
References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Wojciech Drewek

Currently, when the user wants to create a GTP device, they have to provide file descriptors for sockets created in userspace (IFLA_GTP_FD0, IFLA_GTP_FD1). This is not ideal if we want to support GTP device creation through ip link, because the ip link utility is not a good place to create such sockets. This patch allows creating a GTP device without the IFLA_GTP_FD0 and IFLA_GTP_FD1 arguments: if the user sets the IFLA_GTP_CREATE_SOCKETS attribute, the GTP module takes care of creating the UDP sockets by itself.
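As an illustration only (this is not part of the patch; it assumes libmnl and uses arbitrary values for everything except IFLA_GTP_CREATE_SOCKETS), a userspace RTM_NEWLINK request asking the kernel to create the sockets could look roughly like this:

/* Hypothetical sketch: create a GTP device and let the kernel open the
 * GTPv0/GTPv1-U UDP sockets itself (IFLA_GTP_CREATE_SOCKETS) instead of
 * passing IFLA_GTP_FD0/IFLA_GTP_FD1. Requires a uapi if_link.h that
 * already contains IFLA_GTP_CREATE_SOCKETS; error handling trimmed.
 */
#include <time.h>
#include <sys/socket.h>
#include <libmnl/libmnl.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

int main(void)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct nlattr *linkinfo, *infodata;
	struct mnl_socket *nl;
	struct ifinfomsg *ifm;
	struct nlmsghdr *nlh;

	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_NEWLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL | NLM_F_ACK;
	nlh->nlmsg_seq = (unsigned int)time(NULL);

	ifm = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifm));
	ifm->ifi_family = AF_UNSPEC;

	mnl_attr_put_strz(nlh, IFLA_IFNAME, "gtp1");
	linkinfo = mnl_attr_nest_start(nlh, IFLA_LINKINFO);
	mnl_attr_put_strz(nlh, IFLA_INFO_KIND, "gtp");
	infodata = mnl_attr_nest_start(nlh, IFLA_INFO_DATA);
	/* New in this patch: no socket FDs, the module creates them itself. */
	mnl_attr_put_u8(nlh, IFLA_GTP_CREATE_SOCKETS, 1);
	mnl_attr_put_u32(nlh, IFLA_GTP_PDP_HASHSIZE, 1024);
	mnl_attr_nest_end(nlh, infodata);
	mnl_attr_nest_end(nlh, linkinfo);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl || mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0)
		return 1;
	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0)
		return 1;
	mnl_socket_recvfrom(nl, buf, sizeof(buf));	/* ACK or error */
	mnl_socket_close(nl);
	return 0;
}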
Sockets are created with the commonly known UDP ports used for GTP protocol (GTP0_PORT and GTP1U_PORT). In this case we don't have to provide encap_destroy because no extra deinitialization is needed, everything is covered by udp_tunnel_sock_release. Note: GTP instance created with only this change applied, does not handle GTP Echo Requests. This is implemented in the following patch. Signed-off-by: Wojciech Drewek Signed-off-by: Tony Nguyen --- drivers/net/gtp.c | 101 +++++++++++++++++++++++++++++------ include/uapi/linux/if_link.h | 1 + 2 files changed, 85 insertions(+), 17 deletions(-) diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c index bf087171bcf0..25d8521897b3 100644 --- a/drivers/net/gtp.c +++ b/drivers/net/gtp.c @@ -66,8 +66,10 @@ struct gtp_dev { struct sock *sk0; struct sock *sk1u; + u8 sk_created; struct net_device *dev; + struct net *net; unsigned int role; unsigned int hash_size; @@ -320,8 +322,16 @@ static void gtp_encap_disable_sock(struct sock *sk) static void gtp_encap_disable(struct gtp_dev *gtp) { - gtp_encap_disable_sock(gtp->sk0); - gtp_encap_disable_sock(gtp->sk1u); + if (gtp->sk_created) { + udp_tunnel_sock_release(gtp->sk0->sk_socket); + udp_tunnel_sock_release(gtp->sk1u->sk_socket); + gtp->sk_created = false; + gtp->sk0 = NULL; + gtp->sk1u = NULL; + } else { + gtp_encap_disable_sock(gtp->sk0); + gtp_encap_disable_sock(gtp->sk1u); + } } /* UDP encapsulation receive handler. See net/ipv4/udp.c. @@ -656,17 +666,69 @@ static void gtp_destructor(struct net_device *dev) kfree(gtp->tid_hash); } +static struct sock *gtp_create_sock(int type, struct gtp_dev *gtp) +{ + struct udp_tunnel_sock_cfg tuncfg = {}; + struct udp_port_cfg udp_conf = { + .local_ip.s_addr = htonl(INADDR_ANY), + .family = AF_INET, + }; + struct net *net = gtp->net; + struct socket *sock; + int err; + + if (type == UDP_ENCAP_GTP0) + udp_conf.local_udp_port = htons(GTP0_PORT); + else if (type == UDP_ENCAP_GTP1U) + udp_conf.local_udp_port = htons(GTP1U_PORT); + else + return ERR_PTR(-EINVAL); + + err = udp_sock_create(net, &udp_conf, &sock); + if (err) + return ERR_PTR(err); + + tuncfg.sk_user_data = gtp; + tuncfg.encap_type = type; + tuncfg.encap_rcv = gtp_encap_recv; + tuncfg.encap_destroy = NULL; + + setup_udp_tunnel_sock(net, sock, &tuncfg); + + return sock->sk; +} + +static int gtp_create_sockets(struct gtp_dev *gtp, struct nlattr *data[]) +{ + struct sock *sk1u = NULL; + struct sock *sk0 = NULL; + + sk0 = gtp_create_sock(UDP_ENCAP_GTP0, gtp); + if (IS_ERR(sk0)) + return PTR_ERR(sk0); + + sk1u = gtp_create_sock(UDP_ENCAP_GTP1U, gtp); + if (IS_ERR(sk1u)) { + udp_tunnel_sock_release(sk0->sk_socket); + return PTR_ERR(sk1u); + } + + gtp->sk_created = true; + gtp->sk0 = sk0; + gtp->sk1u = sk1u; + + return 0; +} + static int gtp_newlink(struct net *src_net, struct net_device *dev, struct nlattr *tb[], struct nlattr *data[], struct netlink_ext_ack *extack) { + unsigned int role = GTP_ROLE_GGSN; struct gtp_dev *gtp; struct gtp_net *gn; int hashsize, err; - if (!data[IFLA_GTP_FD0] && !data[IFLA_GTP_FD1]) - return -EINVAL; - gtp = netdev_priv(dev); if (!data[IFLA_GTP_PDP_HASHSIZE]) { @@ -677,11 +739,23 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev, hashsize = 1024; } + if (data[IFLA_GTP_ROLE]) { + role = nla_get_u32(data[IFLA_GTP_ROLE]); + if (role > GTP_ROLE_SGSN) + return -EINVAL; + } + gtp->role = role; + + gtp->net = src_net; + err = gtp_hashtable_new(gtp, hashsize); if (err < 0) return err; - err = gtp_encap_enable(gtp, data); + if (data[IFLA_GTP_CREATE_SOCKETS]) + err = 
gtp_create_sockets(gtp, data); + else + err = gtp_encap_enable(gtp, data); if (err < 0) goto out_hashtable; @@ -726,6 +800,7 @@ static const struct nla_policy gtp_policy[IFLA_GTP_MAX + 1] = { [IFLA_GTP_FD1] = { .type = NLA_U32 }, [IFLA_GTP_PDP_HASHSIZE] = { .type = NLA_U32 }, [IFLA_GTP_ROLE] = { .type = NLA_U32 }, + [IFLA_GTP_CREATE_SOCKETS] = { .type = NLA_U8 }, }; static int gtp_validate(struct nlattr *tb[], struct nlattr *data[], @@ -848,7 +923,9 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[]) { struct sock *sk1u = NULL; struct sock *sk0 = NULL; - unsigned int role = GTP_ROLE_GGSN; + + if (!data[IFLA_GTP_FD0] && !data[IFLA_GTP_FD1]) + return -EINVAL; if (data[IFLA_GTP_FD0]) { u32 fd0 = nla_get_u32(data[IFLA_GTP_FD0]); @@ -868,18 +945,8 @@ static int gtp_encap_enable(struct gtp_dev *gtp, struct nlattr *data[]) } } - if (data[IFLA_GTP_ROLE]) { - role = nla_get_u32(data[IFLA_GTP_ROLE]); - if (role > GTP_ROLE_SGSN) { - gtp_encap_disable_sock(sk0); - gtp_encap_disable_sock(sk1u); - return -EINVAL; - } - } - gtp->sk0 = sk0; gtp->sk1u = sk1u; - gtp->role = role; return 0; } diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h index ddca20357e7e..ebd2aa3ef809 100644 --- a/include/uapi/linux/if_link.h +++ b/include/uapi/linux/if_link.h @@ -887,6 +887,7 @@ enum { IFLA_GTP_FD1, IFLA_GTP_PDP_HASHSIZE, IFLA_GTP_ROLE, + IFLA_GTP_CREATE_SOCKETS, __IFLA_GTP_MAX, }; #define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1) From patchwork Fri Mar 11 17:18:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 12778418 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0B9DDC433F5 for ; Fri, 11 Mar 2022 17:18:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350709AbiCKRTk (ORCPT ); Fri, 11 Mar 2022 12:19:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350705AbiCKRTi (ORCPT ); Fri, 11 Mar 2022 12:19:38 -0500 Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6EE85E4488 for ; Fri, 11 Mar 2022 09:18:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647019114; x=1678555114; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=BPPzYZG+VfUizkyjkMhoV9UeRIfCMuI949dl5154g7Q=; b=J9gIFT0bE3sdXNX4jXUSMt+rHPpCTGHcDgQ4famzqH0FwPIF63AE+rSI 1QDCQgnxgVwIdF9tKo0E6599zDaDgvTAPPFt9Ih2ebV24RnthlhNLuz/L h9LrC4zKvS12Qrb4Cl7ZNfJ81M9P1wsuOxY0nu6+Rqx7acBabxJgnxZ1E dvlNxroXTdWBudbSVyiyYsUxTdb0LYt+4SLKK4lqEBJOgu7r3/kBqoGvD lApUSUfDtYZpPdff1Sq4SajZqdjsCEIumyXckUIfWL512Rp/njdMz5LAb Q02WWboFJKSjO/j0Yq/87Hdgu+m3Lk2ulmGnyPCoewzM5DpWr4uHFYdkT g==; X-IronPort-AV: E=McAfee;i="6200,9189,10283"; a="316335114" X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="316335114" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Mar 2022 09:18:32 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="612215401" Received: from anguy11-desk2.jf.intel.com 
([10.166.244.147]) by fmsmga004.fm.intel.com with ESMTP; 11 Mar 2022 09:18:32 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Wojciech Drewek , netdev@vger.kernel.org, anthony.l.nguyen@intel.com, marcin.szycik@linux.intel.com, michal.swiatkowski@linux.intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org Subject: [PATCH net-next v11 2/7] gtp: Implement GTP echo response Date: Fri, 11 Mar 2022 09:18:16 -0800 Message-Id: <20220311171821.3785992-3-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Wojciech Drewek Adding GTP device through ip link creates the situation where there is no userspace daemon which would handle GTP messages (Echo Request for example). GTP-U instance which would not respond to echo requests would violate GTP specification. When GTP packet arrives with GTP_ECHO_REQ message type, GTP_ECHO_RSP is send to the sender. GTP_ECHO_RSP message should contain information element with GTPIE_RECOVERY tag and restart counter value. For GTPv1 restart counter is not used and should be equal to 0, for GTPv0 restart counter contains information provided from userspace(IFLA_GTP_RESTART_COUNT). Signed-off-by: Wojciech Drewek Suggested-by: Harald Welte Reviewed-by: Harald Welte Tested-by: Harald Welte Signed-off-by: Tony Nguyen --- drivers/net/gtp.c | 213 ++++++++++++++++++++++++++++++++--- include/net/gtp.h | 31 +++++ include/uapi/linux/if_link.h | 1 + 3 files changed, 229 insertions(+), 16 deletions(-) diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c index 25d8521897b3..bf434d79f868 100644 --- a/drivers/net/gtp.c +++ b/drivers/net/gtp.c @@ -75,6 +75,8 @@ struct gtp_dev { unsigned int hash_size; struct hlist_head *tid_hash; struct hlist_head *addr_hash; + + u8 restart_count; }; static unsigned int gtp_net_id __read_mostly; @@ -217,6 +219,106 @@ static int gtp_rx(struct pdp_ctx *pctx, struct sk_buff *skb, return -1; } +static struct rtable *ip4_route_output_gtp(struct flowi4 *fl4, + const struct sock *sk, + __be32 daddr, __be32 saddr) +{ + memset(fl4, 0, sizeof(*fl4)); + fl4->flowi4_oif = sk->sk_bound_dev_if; + fl4->daddr = daddr; + fl4->saddr = saddr; + fl4->flowi4_tos = RT_CONN_FLAGS(sk); + fl4->flowi4_proto = sk->sk_protocol; + + return ip_route_output_key(sock_net(sk), fl4); +} + +/* GSM TS 09.60. 7.3 + * In all Path Management messages: + * - TID: is not used and shall be set to 0. + * - Flow Label is not used and shall be set to 0 + * In signalling messages: + * - number: this field is not yet used in signalling messages. + * It shall be set to 255 by the sender and shall be ignored + * by the receiver + * Returns true if the echo req was correct, false otherwise. 
+ */ +static bool gtp0_validate_echo_req(struct gtp0_header *gtp0) +{ + return !(gtp0->tid || (gtp0->flags ^ 0x1e) || + gtp0->number != 0xff || gtp0->flow); +} + +static int gtp0_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) +{ + struct gtp0_packet *gtp_pkt; + struct gtp0_header *gtp0; + struct rtable *rt; + struct flowi4 fl4; + struct iphdr *iph; + __be16 seq; + + gtp0 = (struct gtp0_header *)(skb->data + sizeof(struct udphdr)); + + if (!gtp0_validate_echo_req(gtp0)) + return -1; + + seq = gtp0->seq; + + /* pull GTP and UDP headers */ + skb_pull_data(skb, sizeof(struct gtp0_header) + sizeof(struct udphdr)); + + gtp_pkt = skb_push(skb, sizeof(struct gtp0_packet)); + memset(gtp_pkt, 0, sizeof(struct gtp0_packet)); + + gtp_pkt->gtp0_h.flags = 0x1e; /* v0, GTP-non-prime. */ + gtp_pkt->gtp0_h.type = GTP_ECHO_RSP; + gtp_pkt->gtp0_h.length = + htons(sizeof(struct gtp0_packet) - sizeof(struct gtp0_header)); + + /* GSM TS 09.60. 7.3 The Sequence Number in a signalling response + * message shall be copied from the signalling request message + * that the GSN is replying to. + */ + gtp_pkt->gtp0_h.seq = seq; + + /* GSM TS 09.60. 7.3 In all Path Management Flow Label and TID + * are not used and shall be set to 0. + */ + gtp_pkt->gtp0_h.flow = 0; + gtp_pkt->gtp0_h.tid = 0; + gtp_pkt->gtp0_h.number = 0xff; + gtp_pkt->gtp0_h.spare[0] = 0xff; + gtp_pkt->gtp0_h.spare[1] = 0xff; + gtp_pkt->gtp0_h.spare[2] = 0xff; + + gtp_pkt->ie.tag = GTPIE_RECOVERY; + gtp_pkt->ie.val = gtp->restart_count; + + iph = ip_hdr(skb); + + /* find route to the sender, + * src address becomes dst address and vice versa. + */ + rt = ip4_route_output_gtp(&fl4, gtp->sk0, iph->saddr, iph->daddr); + if (IS_ERR(rt)) { + netdev_dbg(gtp->dev, "no route for echo response from %pI4\n", + &iph->saddr); + return -1; + } + + udp_tunnel_xmit_skb(rt, gtp->sk0, skb, + fl4.saddr, fl4.daddr, + iph->tos, + ip4_dst_hoplimit(&rt->dst), + 0, + htons(GTP0_PORT), htons(GTP0_PORT), + !net_eq(sock_net(gtp->sk1u), + dev_net(gtp->dev)), + false); + return 0; +} + /* 1 means pass up to the stack, -1 means drop and 0 means decapsulated. */ static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) { @@ -233,6 +335,13 @@ static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) if ((gtp0->flags >> 5) != GTP_V0) return 1; + /* If the sockets were created in kernel, it means that + * there is no daemon running in userspace which would + * handle echo request. + */ + if (gtp0->type == GTP_ECHO_REQ && gtp->sk_created) + return gtp0_send_echo_resp(gtp, skb); + if (gtp0->type != GTP_TPDU) return 1; @@ -245,6 +354,75 @@ static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) return gtp_rx(pctx, skb, hdrlen, gtp->role); } +static int gtp1u_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) +{ + struct gtp1_header_long *gtp1u; + struct gtp1u_packet *gtp_pkt; + struct rtable *rt; + struct flowi4 fl4; + struct iphdr *iph; + + gtp1u = (struct gtp1_header_long *)(skb->data + sizeof(struct udphdr)); + + /* 3GPP TS 29.281 5.1 - For the Echo Request, Echo Response, + * Error Indication and Supported Extension Headers Notification + * messages, the S flag shall be set to 1 and TEID shall be set to 0. 
+ */ + if (!(gtp1u->flags & GTP1_F_SEQ) || gtp1u->tid) + return -1; + + /* pull GTP and UDP headers */ + skb_pull_data(skb, + sizeof(struct gtp1_header_long) + sizeof(struct udphdr)); + + gtp_pkt = skb_push(skb, sizeof(struct gtp1u_packet)); + memset(gtp_pkt, 0, sizeof(struct gtp1u_packet)); + + /* S flag must be set to 1 */ + gtp_pkt->gtp1u_h.flags = 0x32; + gtp_pkt->gtp1u_h.type = GTP_ECHO_RSP; + /* seq, npdu and next should be counted to the length of the GTP packet + * that's why szie of gtp1_header should be subtracted, + * not why szie of gtp1_header_long. + */ + gtp_pkt->gtp1u_h.length = + htons(sizeof(struct gtp1u_packet) - sizeof(struct gtp1_header)); + /* 3GPP TS 29.281 5.1 - TEID has to be set to 0 */ + gtp_pkt->gtp1u_h.tid = 0; + + /* 3GPP TS 29.281 7.7.2 - The Restart Counter value in the + * Recovery information element shall not be used, i.e. it shall + * be set to zero by the sender and shall be ignored by the receiver. + * The Recovery information element is mandatory due to backwards + * compatibility reasons. + */ + gtp_pkt->ie.tag = GTPIE_RECOVERY; + gtp_pkt->ie.val = 0; + + iph = ip_hdr(skb); + + /* find route to the sender, + * src address becomes dst address and vice versa. + */ + rt = ip4_route_output_gtp(&fl4, gtp->sk1u, iph->saddr, iph->daddr); + if (IS_ERR(rt)) { + netdev_dbg(gtp->dev, "no route for echo response from %pI4\n", + &iph->saddr); + return -1; + } + + udp_tunnel_xmit_skb(rt, gtp->sk1u, skb, + fl4.saddr, fl4.daddr, + iph->tos, + ip4_dst_hoplimit(&rt->dst), + 0, + htons(GTP1U_PORT), htons(GTP1U_PORT), + !net_eq(sock_net(gtp->sk1u), + dev_net(gtp->dev)), + false); + return 0; +} + static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) { unsigned int hdrlen = sizeof(struct udphdr) + @@ -260,6 +438,13 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) if ((gtp1->flags >> 5) != GTP_V1) return 1; + /* If the sockets were created in kernel, it means that + * there is no daemon running in userspace which would + * handle echo request. 
+ */ + if (gtp1->type == GTP_ECHO_REQ && gtp->sk_created) + return gtp1u_send_echo_resp(gtp, skb); + if (gtp1->type != GTP_TPDU) return 1; @@ -398,20 +583,6 @@ static void gtp_dev_uninit(struct net_device *dev) free_percpu(dev->tstats); } -static struct rtable *ip4_route_output_gtp(struct flowi4 *fl4, - const struct sock *sk, - __be32 daddr) -{ - memset(fl4, 0, sizeof(*fl4)); - fl4->flowi4_oif = sk->sk_bound_dev_if; - fl4->daddr = daddr; - fl4->saddr = inet_sk(sk)->inet_saddr; - fl4->flowi4_tos = RT_CONN_FLAGS(sk); - fl4->flowi4_proto = sk->sk_protocol; - - return ip_route_output_key(sock_net(sk), fl4); -} - static inline void gtp0_push_header(struct sk_buff *skb, struct pdp_ctx *pctx) { int payload_len = skb->len; @@ -517,7 +688,8 @@ static int gtp_build_skb_ip4(struct sk_buff *skb, struct net_device *dev, } netdev_dbg(dev, "found PDP context %p\n", pctx); - rt = ip4_route_output_gtp(&fl4, pctx->sk, pctx->peer_addr_ip4.s_addr); + rt = ip4_route_output_gtp(&fl4, pctx->sk, pctx->peer_addr_ip4.s_addr, + inet_sk(pctx->sk)->inet_saddr); if (IS_ERR(rt)) { netdev_dbg(dev, "no route to SSGN %pI4\n", &pctx->peer_addr_ip4.s_addr); @@ -746,6 +918,11 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev, } gtp->role = role; + if (!data[IFLA_GTP_RESTART_COUNT]) + gtp->restart_count = 0; + else + gtp->restart_count = nla_get_u8(data[IFLA_GTP_RESTART_COUNT]); + gtp->net = src_net; err = gtp_hashtable_new(gtp, hashsize); @@ -801,6 +978,7 @@ static const struct nla_policy gtp_policy[IFLA_GTP_MAX + 1] = { [IFLA_GTP_PDP_HASHSIZE] = { .type = NLA_U32 }, [IFLA_GTP_ROLE] = { .type = NLA_U32 }, [IFLA_GTP_CREATE_SOCKETS] = { .type = NLA_U8 }, + [IFLA_GTP_RESTART_COUNT] = { .type = NLA_U8 }, }; static int gtp_validate(struct nlattr *tb[], struct nlattr *data[], @@ -815,7 +993,8 @@ static int gtp_validate(struct nlattr *tb[], struct nlattr *data[], static size_t gtp_get_size(const struct net_device *dev) { return nla_total_size(sizeof(__u32)) + /* IFLA_GTP_PDP_HASHSIZE */ - nla_total_size(sizeof(__u32)); /* IFLA_GTP_ROLE */ + nla_total_size(sizeof(__u32)) + /* IFLA_GTP_ROLE */ + nla_total_size(sizeof(__u8)); /* IFLA_GTP_RESTART_COUNT */ } static int gtp_fill_info(struct sk_buff *skb, const struct net_device *dev) @@ -826,6 +1005,8 @@ static int gtp_fill_info(struct sk_buff *skb, const struct net_device *dev) goto nla_put_failure; if (nla_put_u32(skb, IFLA_GTP_ROLE, gtp->role)) goto nla_put_failure; + if (nla_put_u8(skb, IFLA_GTP_RESTART_COUNT, gtp->restart_count)) + goto nla_put_failure; return 0; diff --git a/include/net/gtp.h b/include/net/gtp.h index 0e16ebb2a82d..0e12c37f2958 100644 --- a/include/net/gtp.h +++ b/include/net/gtp.h @@ -7,8 +7,13 @@ #define GTP0_PORT 3386 #define GTP1U_PORT 2152 +/* GTP messages types */ +#define GTP_ECHO_REQ 1 /* Echo Request */ +#define GTP_ECHO_RSP 2 /* Echo Response */ #define GTP_TPDU 255 +#define GTPIE_RECOVERY 14 + struct gtp0_header { /* According to GSM TS 09.60. */ __u8 flags; __u8 type; @@ -27,6 +32,32 @@ struct gtp1_header { /* According to 3GPP TS 29.060. */ __be32 tid; } __attribute__ ((packed)); +struct gtp1_header_long { /* According to 3GPP TS 29.060. 
*/ + __u8 flags; + __u8 type; + __be16 length; + __be32 tid; + __be16 seq; + __u8 npdu; + __u8 next; +} __packed; + +/* GTP Information Element */ +struct gtp_ie { + __u8 tag; + __u8 val; +} __packed; + +struct gtp0_packet { + struct gtp0_header gtp0_h; + struct gtp_ie ie; +} __packed; + +struct gtp1u_packet { + struct gtp1_header_long gtp1u_h; + struct gtp_ie ie; +} __packed; + #define GTP1_F_NPDU 0x01 #define GTP1_F_SEQ 0x02 #define GTP1_F_EXTHDR 0x04 diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h index ebd2aa3ef809..bd24c7dc10a2 100644 --- a/include/uapi/linux/if_link.h +++ b/include/uapi/linux/if_link.h @@ -888,6 +888,7 @@ enum { IFLA_GTP_PDP_HASHSIZE, IFLA_GTP_ROLE, IFLA_GTP_CREATE_SOCKETS, + IFLA_GTP_RESTART_COUNT, __IFLA_GTP_MAX, }; #define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1) From patchwork Fri Mar 11 17:18:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 12778419 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CCEBBC433EF for ; Fri, 11 Mar 2022 17:18:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350712AbiCKRTm (ORCPT ); Fri, 11 Mar 2022 12:19:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47950 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350695AbiCKRTj (ORCPT ); Fri, 11 Mar 2022 12:19:39 -0500 Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85BB6E4D2B for ; Fri, 11 Mar 2022 09:18:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647019115; x=1678555115; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=GLWQFcQhv41qWJY14jSLho+aQTh68WQFgVXXCvhOWSE=; b=Mga+TcXW75K5qQGTI2gz86Q/VmzTvY9dXM9s9OppO12p/dWIeU9bYLpZ fa29wo1HqSzeGLvq6GH3wRlkOCV7TrmRPrStS1gAwjP4ByqFUA7YwOoCl JRmy5KTc9VOS5bciZ/f7GKJLFb+PrlOBsl+hElok5FwAM8c70mOX90idB WN3Z53al2ZmHm+AyZ3mjD+UeDNzQPkrHYGoZts3cdTrwexAcgaqQy6/xy g9q5eFDsInYsef98ThFXvb6eON2T11+J2MveQHRI4T8/XfC5zXZofyoSV am2paeXcNjstzAawtCsb3fcHMZfq4rjlV+aHHkK8uUs4srbAE5UXADVa3 Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10283"; a="316335122" X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="316335122" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Mar 2022 09:18:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="612215404" Received: from anguy11-desk2.jf.intel.com ([10.166.244.147]) by fmsmga004.fm.intel.com with ESMTP; 11 Mar 2022 09:18:32 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Wojciech Drewek , netdev@vger.kernel.org, anthony.l.nguyen@intel.com, marcin.szycik@linux.intel.com, michal.swiatkowski@linux.intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org Subject: [PATCH net-next v11 3/7] gtp: Implement GTP echo request Date: Fri, 11 Mar 2022 09:18:17 -0800 Message-Id: <20220311171821.3785992-4-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: 
<20220311171821.3785992-1-anthony.l.nguyen@intel.com> References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Wojciech Drewek Adding GTP device through ip link creates the situation where GTP instance is not able to send GTP echo requests. Echo requests are used to check if GTP peer is still alive. With this patch, gtp_genl_ops are extended by new cmd (GTP_CMD_ECHOREQ) which allows to send echo request in the given version of GTP protocol (v0 or v1), from the given ms address to he given peer. TID is not inclued because in all path management messages it should be equal to 0. When GTP echo response is detected, multicast message is send to everyone in the gtp_genl_family. Message contains GTP version, ms address and peer address. Suggested-by: Harald Welte Signed-off-by: Wojciech Drewek Reviewed-by: Harald Welte Signed-off-by: Tony Nguyen --- drivers/net/gtp.c | 305 ++++++++++++++++++++++++++++++++++----- include/uapi/linux/gtp.h | 1 + 2 files changed, 269 insertions(+), 37 deletions(-) diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c index bf434d79f868..756714d4ad92 100644 --- a/drivers/net/gtp.c +++ b/drivers/net/gtp.c @@ -79,6 +79,12 @@ struct gtp_dev { u8 restart_count; }; +struct echo_info { + struct in_addr ms_addr_ip4; + struct in_addr peer_addr_ip4; + u8 gtp_version; +}; + static unsigned int gtp_net_id __read_mostly; struct gtp_net { @@ -87,6 +93,16 @@ struct gtp_net { static u32 gtp_h_initval; +static struct genl_family gtp_genl_family; + +enum gtp_multicast_groups { + GTP_GENL_MCGRP, +}; + +static const struct genl_multicast_group gtp_genl_mcgrps[] = { + [GTP_GENL_MCGRP] = { .name = GTP_GENL_MCGRP_NAME }, +}; + static void pdp_context_delete(struct pdp_ctx *pctx); static inline u32 gtp0_hashfn(u64 tid) @@ -243,12 +259,38 @@ static struct rtable *ip4_route_output_gtp(struct flowi4 *fl4, * by the receiver * Returns true if the echo req was correct, false otherwise. */ -static bool gtp0_validate_echo_req(struct gtp0_header *gtp0) +static bool gtp0_validate_echo_hdr(struct gtp0_header *gtp0) { return !(gtp0->tid || (gtp0->flags ^ 0x1e) || gtp0->number != 0xff || gtp0->flow); } +/* msg_type has to be GTP_ECHO_REQ or GTP_ECHO_RSP */ +static void gtp0_build_echo_msg(struct gtp0_header *hdr, __u8 msg_type) +{ + int len_pkt, len_hdr; + + hdr->flags = 0x1e; /* v0, GTP-non-prime. */ + hdr->type = msg_type; + /* GSM TS 09.60. 7.3 In all Path Management Flow Label and TID + * are not used and shall be set to 0. + */ + hdr->flow = 0; + hdr->tid = 0; + hdr->number = 0xff; + hdr->spare[0] = 0xff; + hdr->spare[1] = 0xff; + hdr->spare[2] = 0xff; + + len_pkt = sizeof(struct gtp0_packet); + len_hdr = sizeof(struct gtp0_header); + + if (msg_type == GTP_ECHO_RSP) + hdr->length = htons(len_pkt - len_hdr); + else + hdr->length = 0; +} + static int gtp0_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) { struct gtp0_packet *gtp_pkt; @@ -260,7 +302,7 @@ static int gtp0_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) gtp0 = (struct gtp0_header *)(skb->data + sizeof(struct udphdr)); - if (!gtp0_validate_echo_req(gtp0)) + if (!gtp0_validate_echo_hdr(gtp0)) return -1; seq = gtp0->seq; @@ -271,10 +313,7 @@ static int gtp0_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) gtp_pkt = skb_push(skb, sizeof(struct gtp0_packet)); memset(gtp_pkt, 0, sizeof(struct gtp0_packet)); - gtp_pkt->gtp0_h.flags = 0x1e; /* v0, GTP-non-prime. 
*/ - gtp_pkt->gtp0_h.type = GTP_ECHO_RSP; - gtp_pkt->gtp0_h.length = - htons(sizeof(struct gtp0_packet) - sizeof(struct gtp0_header)); + gtp0_build_echo_msg(>p_pkt->gtp0_h, GTP_ECHO_RSP); /* GSM TS 09.60. 7.3 The Sequence Number in a signalling response * message shall be copied from the signalling request message @@ -282,16 +321,6 @@ static int gtp0_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) */ gtp_pkt->gtp0_h.seq = seq; - /* GSM TS 09.60. 7.3 In all Path Management Flow Label and TID - * are not used and shall be set to 0. - */ - gtp_pkt->gtp0_h.flow = 0; - gtp_pkt->gtp0_h.tid = 0; - gtp_pkt->gtp0_h.number = 0xff; - gtp_pkt->gtp0_h.spare[0] = 0xff; - gtp_pkt->gtp0_h.spare[1] = 0xff; - gtp_pkt->gtp0_h.spare[2] = 0xff; - gtp_pkt->ie.tag = GTPIE_RECOVERY; gtp_pkt->ie.val = gtp->restart_count; @@ -319,6 +348,61 @@ static int gtp0_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) return 0; } +static int gtp_genl_fill_echo(struct sk_buff *skb, u32 snd_portid, u32 snd_seq, + int flags, u32 type, struct echo_info echo) +{ + void *genlh; + + genlh = genlmsg_put(skb, snd_portid, snd_seq, >p_genl_family, flags, + type); + if (!genlh) + goto failure; + + if (nla_put_u32(skb, GTPA_VERSION, echo.gtp_version) || + nla_put_be32(skb, GTPA_PEER_ADDRESS, echo.peer_addr_ip4.s_addr) || + nla_put_be32(skb, GTPA_MS_ADDRESS, echo.ms_addr_ip4.s_addr)) + goto failure; + + genlmsg_end(skb, genlh); + return 0; + +failure: + genlmsg_cancel(skb, genlh); + return -EMSGSIZE; +} + +static int gtp0_handle_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) +{ + struct gtp0_header *gtp0; + struct echo_info echo; + struct sk_buff *msg; + struct iphdr *iph; + int ret; + + gtp0 = (struct gtp0_header *)(skb->data + sizeof(struct udphdr)); + + if (!gtp0_validate_echo_hdr(gtp0)) + return -1; + + iph = ip_hdr(skb); + echo.ms_addr_ip4.s_addr = iph->daddr; + echo.peer_addr_ip4.s_addr = iph->saddr; + echo.gtp_version = GTP_V0; + + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); + if (!msg) + return -ENOMEM; + + ret = gtp_genl_fill_echo(msg, 0, 0, 0, GTP_CMD_ECHOREQ, echo); + if (ret < 0) { + nlmsg_free(msg); + return ret; + } + + return genlmsg_multicast_netns(>p_genl_family, dev_net(gtp->dev), + msg, 0, GTP_GENL_MCGRP, GFP_ATOMIC); +} + /* 1 means pass up to the stack, -1 means drop and 0 means decapsulated. */ static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) { @@ -342,6 +426,9 @@ static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) if (gtp0->type == GTP_ECHO_REQ && gtp->sk_created) return gtp0_send_echo_resp(gtp, skb); + if (gtp0->type == GTP_ECHO_RSP && gtp->sk_created) + return gtp0_handle_echo_resp(gtp, skb); + if (gtp0->type != GTP_TPDU) return 1; @@ -354,6 +441,36 @@ static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) return gtp_rx(pctx, skb, hdrlen, gtp->role); } +/* msg_type has to be GTP_ECHO_REQ or GTP_ECHO_RSP */ +static void gtp1u_build_echo_msg(struct gtp1_header_long *hdr, __u8 msg_type) +{ + int len_pkt, len_hdr; + + /* S flag must be set to 1 */ + hdr->flags = 0x32; /* v1, GTP-non-prime. */ + hdr->type = msg_type; + /* 3GPP TS 29.281 5.1 - TEID has to be set to 0 */ + hdr->tid = 0; + + /* seq, npdu and next should be counted to the length of the GTP packet + * that's why szie of gtp1_header should be subtracted, + * not size of gtp1_header_long. 
+ */ + + len_hdr = sizeof(struct gtp1_header); + + if (msg_type == GTP_ECHO_RSP) { + len_pkt = sizeof(struct gtp1u_packet); + hdr->length = htons(len_pkt - len_hdr); + } else { + /* GTP_ECHO_REQ does not carry GTP Information Element, + * the why gtp1_header_long is used here. + */ + len_pkt = sizeof(struct gtp1_header_long); + hdr->length = htons(len_pkt - len_hdr); + } +} + static int gtp1u_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) { struct gtp1_header_long *gtp1u; @@ -378,17 +495,7 @@ static int gtp1u_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) gtp_pkt = skb_push(skb, sizeof(struct gtp1u_packet)); memset(gtp_pkt, 0, sizeof(struct gtp1u_packet)); - /* S flag must be set to 1 */ - gtp_pkt->gtp1u_h.flags = 0x32; - gtp_pkt->gtp1u_h.type = GTP_ECHO_RSP; - /* seq, npdu and next should be counted to the length of the GTP packet - * that's why szie of gtp1_header should be subtracted, - * not why szie of gtp1_header_long. - */ - gtp_pkt->gtp1u_h.length = - htons(sizeof(struct gtp1u_packet) - sizeof(struct gtp1_header)); - /* 3GPP TS 29.281 5.1 - TEID has to be set to 0 */ - gtp_pkt->gtp1u_h.tid = 0; + gtp1u_build_echo_msg(>p_pkt->gtp1u_h, GTP_ECHO_RSP); /* 3GPP TS 29.281 7.7.2 - The Restart Counter value in the * Recovery information element shall not be used, i.e. it shall @@ -423,6 +530,42 @@ static int gtp1u_send_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) return 0; } +static int gtp1u_handle_echo_resp(struct gtp_dev *gtp, struct sk_buff *skb) +{ + struct gtp1_header_long *gtp1u; + struct echo_info echo; + struct sk_buff *msg; + struct iphdr *iph; + int ret; + + gtp1u = (struct gtp1_header_long *)(skb->data + sizeof(struct udphdr)); + + /* 3GPP TS 29.281 5.1 - For the Echo Request, Echo Response, + * Error Indication and Supported Extension Headers Notification + * messages, the S flag shall be set to 1 and TEID shall be set to 0. 
+ */ + if (!(gtp1u->flags & GTP1_F_SEQ) || gtp1u->tid) + return -1; + + iph = ip_hdr(skb); + echo.ms_addr_ip4.s_addr = iph->daddr; + echo.peer_addr_ip4.s_addr = iph->saddr; + echo.gtp_version = GTP_V1; + + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); + if (!msg) + return -ENOMEM; + + ret = gtp_genl_fill_echo(msg, 0, 0, 0, GTP_CMD_ECHOREQ, echo); + if (ret < 0) { + nlmsg_free(msg); + return ret; + } + + return genlmsg_multicast_netns(>p_genl_family, dev_net(gtp->dev), + msg, 0, GTP_GENL_MCGRP, GFP_ATOMIC); +} + static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) { unsigned int hdrlen = sizeof(struct udphdr) + @@ -445,6 +588,9 @@ static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb) if (gtp1->type == GTP_ECHO_REQ && gtp->sk_created) return gtp1u_send_echo_resp(gtp, skb); + if (gtp1->type == GTP_ECHO_RSP && gtp->sk_created) + return gtp1u_handle_echo_resp(gtp, skb); + if (gtp1->type != GTP_TPDU) return 1; @@ -1431,16 +1577,6 @@ static int gtp_genl_del_pdp(struct sk_buff *skb, struct genl_info *info) return err; } -static struct genl_family gtp_genl_family; - -enum gtp_multicast_groups { - GTP_GENL_MCGRP, -}; - -static const struct genl_multicast_group gtp_genl_mcgrps[] = { - [GTP_GENL_MCGRP] = { .name = GTP_GENL_MCGRP_NAME }, -}; - static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq, int flags, u32 type, struct pdp_ctx *pctx) { @@ -1584,6 +1720,95 @@ static int gtp_genl_dump_pdp(struct sk_buff *skb, return skb->len; } +static int gtp_genl_send_echo_req(struct sk_buff *skb, struct genl_info *info) +{ + struct sk_buff *skb_to_send; + __be32 src_ip, dst_ip; + unsigned int version; + struct gtp_dev *gtp; + struct flowi4 fl4; + struct rtable *rt; + struct sock *sk; + __be16 port; + int len; + + if (!info->attrs[GTPA_VERSION] || + !info->attrs[GTPA_LINK] || + !info->attrs[GTPA_PEER_ADDRESS] || + !info->attrs[GTPA_MS_ADDRESS]) + return -EINVAL; + + version = nla_get_u32(info->attrs[GTPA_VERSION]); + dst_ip = nla_get_be32(info->attrs[GTPA_PEER_ADDRESS]); + src_ip = nla_get_be32(info->attrs[GTPA_MS_ADDRESS]); + + gtp = gtp_find_dev(sock_net(skb->sk), info->attrs); + if (!gtp) + return -ENODEV; + + if (!gtp->sk_created) + return -EOPNOTSUPP; + if (!(gtp->dev->flags & IFF_UP)) + return -ENETDOWN; + + if (version == GTP_V0) { + struct gtp0_header *gtp0_h; + + len = LL_RESERVED_SPACE(gtp->dev) + sizeof(struct gtp0_header) + + sizeof(struct iphdr) + sizeof(struct udphdr); + + skb_to_send = netdev_alloc_skb_ip_align(gtp->dev, len); + if (!skb_to_send) + return -ENOMEM; + + sk = gtp->sk0; + port = htons(GTP0_PORT); + + gtp0_h = skb_push(skb_to_send, sizeof(struct gtp0_header)); + memset(gtp0_h, 0, sizeof(struct gtp0_header)); + gtp0_build_echo_msg(gtp0_h, GTP_ECHO_REQ); + } else if (version == GTP_V1) { + struct gtp1_header_long *gtp1u_h; + + len = LL_RESERVED_SPACE(gtp->dev) + + sizeof(struct gtp1_header_long) + + sizeof(struct iphdr) + sizeof(struct udphdr); + + skb_to_send = netdev_alloc_skb_ip_align(gtp->dev, len); + if (!skb_to_send) + return -ENOMEM; + + sk = gtp->sk1u; + port = htons(GTP1U_PORT); + + gtp1u_h = skb_push(skb_to_send, + sizeof(struct gtp1_header_long)); + memset(gtp1u_h, 0, sizeof(struct gtp1_header_long)); + gtp1u_build_echo_msg(gtp1u_h, GTP_ECHO_REQ); + } else { + return -ENODEV; + } + + rt = ip4_route_output_gtp(&fl4, sk, dst_ip, src_ip); + if (IS_ERR(rt)) { + netdev_dbg(gtp->dev, "no route for echo request to %pI4\n", + &dst_ip); + kfree_skb(skb_to_send); + return -ENODEV; + } + + udp_tunnel_xmit_skb(rt, sk, 
skb_to_send, + fl4.saddr, fl4.daddr, + fl4.flowi4_tos, + ip4_dst_hoplimit(&rt->dst), + 0, + port, port, + !net_eq(sock_net(sk), + dev_net(gtp->dev)), + false); + return 0; +} + static const struct nla_policy gtp_genl_policy[GTPA_MAX + 1] = { [GTPA_LINK] = { .type = NLA_U32, }, [GTPA_VERSION] = { .type = NLA_U32, }, @@ -1616,6 +1841,12 @@ static const struct genl_small_ops gtp_genl_ops[] = { .dumpit = gtp_genl_dump_pdp, .flags = GENL_ADMIN_PERM, }, + { + .cmd = GTP_CMD_ECHOREQ, + .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, + .doit = gtp_genl_send_echo_req, + .flags = GENL_ADMIN_PERM, + }, }; static struct genl_family gtp_genl_family __ro_after_init = { diff --git a/include/uapi/linux/gtp.h b/include/uapi/linux/gtp.h index 79f9191bbb24..2f61298a7b77 100644 --- a/include/uapi/linux/gtp.h +++ b/include/uapi/linux/gtp.h @@ -8,6 +8,7 @@ enum gtp_genl_cmds { GTP_CMD_NEWPDP, GTP_CMD_DELPDP, GTP_CMD_GETPDP, + GTP_CMD_ECHOREQ, GTP_CMD_MAX, }; From patchwork Fri Mar 11 17:18:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 12778420 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4913C433F5 for ; Fri, 11 Mar 2022 17:18:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350715AbiCKRTr (ORCPT ); Fri, 11 Mar 2022 12:19:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47978 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350703AbiCKRTj (ORCPT ); Fri, 11 Mar 2022 12:19:39 -0500 Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B95A0E5404 for ; Fri, 11 Mar 2022 09:18:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647019115; x=1678555115; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=nIml3JckWGvIX1T6NUTo2r/2/oGCI1t1alYVs+Lj2f8=; b=QQ8oX3XZHZwgqQ2c31Rs1ezy0j8+GV+e8Xec9lJUJPSbmmZO+UeZxekc XCPF4qhA8VcV7vfPK3vbLi57sjef5hFRGpOcQHAVscoASrdOJoEd40z5T iN1R9p0bC3E+vk8ejMueOU5STlKY3z2wlzb0SVX3urd2i2RaOFH8YPk/1 S5+coMllKmZTtt+IAn8WDniig6oijiSNrvBCBaqzNMWNCU1f2m/XokMTR K7o3eli04vbZiJsJgmDLrpj6I5fwLYiYxCVTa85mB3NU8vZnU2Js2U7X7 /d5rv5F92dnsJGkfBMvvRIYjpEVz/OvNnfFTBjkqp0zxsRtQIYSy2oXI/ w==; X-IronPort-AV: E=McAfee;i="6200,9189,10283"; a="316335128" X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="316335128" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Mar 2022 09:18:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="612215407" Received: from anguy11-desk2.jf.intel.com ([10.166.244.147]) by fmsmga004.fm.intel.com with ESMTP; 11 Mar 2022 09:18:33 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Wojciech Drewek , netdev@vger.kernel.org, anthony.l.nguyen@intel.com, marcin.szycik@linux.intel.com, michal.swiatkowski@linux.intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org Subject: [PATCH net-next v11 4/7] net/sched: Allow flower to match on GTP 
options
Date: Fri, 11 Mar 2022 09:18:18 -0800
Message-Id: <20220311171821.3785992-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20220311171821.3785992-1-anthony.l.nguyen@intel.com>
References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com>

From: Wojciech Drewek

The options are specified as PDU_TYPE:QFI and reference fields of the PDU Session Protocol. PDU Session data is conveyed in a GTP-U Extension Header, which is described in 3GPP TS 29.281; the PDU Session Protocol itself is described in 3GPP TS 38.415.

PDU_TYPE - indicates the type of the PDU Session Information (4 bits)
QFI - QoS Flow Identifier (6 bits)

# ip link add gtp_dev type gtp role sgsn
# tc qdisc add dev gtp_dev ingress
# tc filter add dev gtp_dev protocol ip parent ffff: \
    flower \
    enc_key_id 11 \
    gtp_opts 1:8/ff:ff \
    action mirred egress redirect dev eth0

Signed-off-by: Wojciech Drewek
Signed-off-by: Tony Nguyen
---
 include/net/gtp.h | 5 ++
 include/uapi/linux/if_tunnel.h | 4 +-
 include/uapi/linux/pkt_cls.h | 15 +++++
 net/sched/cls_flower.c | 116 +++++++++++++++++++++++++++++++++
 4 files changed, 139 insertions(+), 1 deletion(-)

diff --git a/include/net/gtp.h b/include/net/gtp.h index 0e12c37f2958..23c2aaae8a42 100644 --- a/include/net/gtp.h +++ b/include/net/gtp.h @@ -58,6 +58,11 @@ struct gtp1u_packet { struct gtp_ie ie; } __packed; +struct gtp_pdu_session_info { /* According to 3GPP TS 38.415. */ + u8 pdu_type; + u8 qfi; +}; + #define GTP1_F_NPDU 0x01 #define GTP1_F_SEQ 0x02 #define GTP1_F_EXTHDR 0x04 diff --git a/include/uapi/linux/if_tunnel.h b/include/uapi/linux/if_tunnel.h index 7d9105533c7b..102119628ff5 100644 --- a/include/uapi/linux/if_tunnel.h +++ b/include/uapi/linux/if_tunnel.h @@ -176,8 +176,10 @@ enum { #define TUNNEL_VXLAN_OPT __cpu_to_be16(0x1000) #define TUNNEL_NOCACHE __cpu_to_be16(0x2000) #define TUNNEL_ERSPAN_OPT __cpu_to_be16(0x4000) +#define TUNNEL_GTP_OPT __cpu_to_be16(0x8000) #define TUNNEL_OPTIONS_PRESENT \ - (TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT) + (TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT | \ + TUNNEL_GTP_OPT) #endif /* _UAPI_IF_TUNNEL_H_ */ diff --git a/include/uapi/linux/pkt_cls.h b/include/uapi/linux/pkt_cls.h index ee38b35c3f57..404f97fb239c 100644 --- a/include/uapi/linux/pkt_cls.h +++ b/include/uapi/linux/pkt_cls.h @@ -616,6 +616,10 @@ enum { * TCA_FLOWER_KEY_ENC_OPT_ERSPAN_ * attributes */ + TCA_FLOWER_KEY_ENC_OPTS_GTP, /* Nested + * TCA_FLOWER_KEY_ENC_OPT_GTP_ + * attributes + */ __TCA_FLOWER_KEY_ENC_OPTS_MAX, }; @@ -654,6 +658,17 @@ enum { #define TCA_FLOWER_KEY_ENC_OPT_ERSPAN_MAX \ (__TCA_FLOWER_KEY_ENC_OPT_ERSPAN_MAX - 1) +enum { + TCA_FLOWER_KEY_ENC_OPT_GTP_UNSPEC, + TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE, /* u8 */ + TCA_FLOWER_KEY_ENC_OPT_GTP_QFI, /* u8 */ + + __TCA_FLOWER_KEY_ENC_OPT_GTP_MAX, +}; + +#define TCA_FLOWER_KEY_ENC_OPT_GTP_MAX \ + (__TCA_FLOWER_KEY_ENC_OPT_GTP_MAX - 1) + enum { TCA_FLOWER_KEY_MPLS_OPTS_UNSPEC, TCA_FLOWER_KEY_MPLS_OPTS_LSE, diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 1a9b1f140f9e..c80fc49c0da1 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include @@ -723,6 +724,7 @@ enc_opts_policy[TCA_FLOWER_KEY_ENC_OPTS_MAX + 1] = { [TCA_FLOWER_KEY_ENC_OPTS_GENEVE] = { .type = NLA_NESTED }, [TCA_FLOWER_KEY_ENC_OPTS_VXLAN] = { .type = NLA_NESTED },
[TCA_FLOWER_KEY_ENC_OPTS_ERSPAN] = { .type = NLA_NESTED }, + [TCA_FLOWER_KEY_ENC_OPTS_GTP] = { .type = NLA_NESTED }, }; static const struct nla_policy @@ -746,6 +748,12 @@ erspan_opt_policy[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_MAX + 1] = { [TCA_FLOWER_KEY_ENC_OPT_ERSPAN_HWID] = { .type = NLA_U8 }, }; +static const struct nla_policy +gtp_opt_policy[TCA_FLOWER_KEY_ENC_OPT_GTP_MAX + 1] = { + [TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE] = { .type = NLA_U8 }, + [TCA_FLOWER_KEY_ENC_OPT_GTP_QFI] = { .type = NLA_U8 }, +}; + static const struct nla_policy mpls_stack_entry_policy[TCA_FLOWER_KEY_MPLS_OPT_LSE_MAX + 1] = { [TCA_FLOWER_KEY_MPLS_OPT_LSE_DEPTH] = { .type = NLA_U8 }, @@ -1262,6 +1270,49 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key, return sizeof(*md); } +static int fl_set_gtp_opt(const struct nlattr *nla, struct fl_flow_key *key, + int depth, int option_len, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[TCA_FLOWER_KEY_ENC_OPT_GTP_MAX + 1]; + struct gtp_pdu_session_info *sinfo; + u8 len = key->enc_opts.len; + int err; + + sinfo = (struct gtp_pdu_session_info *)&key->enc_opts.data[len]; + memset(sinfo, 0xff, option_len); + + if (!depth) + return sizeof(*sinfo); + + if (nla_type(nla) != TCA_FLOWER_KEY_ENC_OPTS_GTP) { + NL_SET_ERR_MSG_MOD(extack, "Non-gtp option type for mask"); + return -EINVAL; + } + + err = nla_parse_nested(tb, TCA_FLOWER_KEY_ENC_OPT_GTP_MAX, nla, + gtp_opt_policy, extack); + if (err < 0) + return err; + + if (!option_len && + (!tb[TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE] || + !tb[TCA_FLOWER_KEY_ENC_OPT_GTP_QFI])) { + NL_SET_ERR_MSG_MOD(extack, + "Missing tunnel key gtp option pdu type or qfi"); + return -EINVAL; + } + + if (tb[TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE]) + sinfo->pdu_type = + nla_get_u8(tb[TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE]); + + if (tb[TCA_FLOWER_KEY_ENC_OPT_GTP_QFI]) + sinfo->qfi = nla_get_u8(tb[TCA_FLOWER_KEY_ENC_OPT_GTP_QFI]); + + return sizeof(*sinfo); +} + static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key, struct fl_flow_key *mask, struct netlink_ext_ack *extack) @@ -1386,6 +1437,38 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key, return -EINVAL; } break; + case TCA_FLOWER_KEY_ENC_OPTS_GTP: + if (key->enc_opts.dst_opt_type) { + NL_SET_ERR_MSG_MOD(extack, + "Duplicate type for gtp options"); + return -EINVAL; + } + option_len = 0; + key->enc_opts.dst_opt_type = TUNNEL_GTP_OPT; + option_len = fl_set_gtp_opt(nla_opt_key, key, + key_depth, option_len, + extack); + if (option_len < 0) + return option_len; + + key->enc_opts.len += option_len; + /* At the same time we need to parse through the mask + * in order to verify exact and mask attribute lengths. 
+ */ + mask->enc_opts.dst_opt_type = TUNNEL_GTP_OPT; + option_len = fl_set_gtp_opt(nla_opt_msk, mask, + msk_depth, option_len, + extack); + if (option_len < 0) + return option_len; + + mask->enc_opts.len += option_len; + if (key->enc_opts.len != mask->enc_opts.len) { + NL_SET_ERR_MSG_MOD(extack, + "Key and mask miss aligned"); + return -EINVAL; + } + break; default: NL_SET_ERR_MSG(extack, "Unknown tunnel option type"); return -EINVAL; @@ -2761,6 +2844,34 @@ static int fl_dump_key_erspan_opt(struct sk_buff *skb, return -EMSGSIZE; } +static int fl_dump_key_gtp_opt(struct sk_buff *skb, + struct flow_dissector_key_enc_opts *enc_opts) + +{ + struct gtp_pdu_session_info *session_info; + struct nlattr *nest; + + nest = nla_nest_start_noflag(skb, TCA_FLOWER_KEY_ENC_OPTS_GTP); + if (!nest) + goto nla_put_failure; + + session_info = (struct gtp_pdu_session_info *)&enc_opts->data[0]; + + if (nla_put_u8(skb, TCA_FLOWER_KEY_ENC_OPT_GTP_PDU_TYPE, + session_info->pdu_type)) + goto nla_put_failure; + + if (nla_put_u8(skb, TCA_FLOWER_KEY_ENC_OPT_GTP_QFI, session_info->qfi)) + goto nla_put_failure; + + nla_nest_end(skb, nest); + return 0; + +nla_put_failure: + nla_nest_cancel(skb, nest); + return -EMSGSIZE; +} + static int fl_dump_key_ct(struct sk_buff *skb, struct flow_dissector_key_ct *key, struct flow_dissector_key_ct *mask) @@ -2824,6 +2935,11 @@ static int fl_dump_key_options(struct sk_buff *skb, int enc_opt_type, if (err) goto nla_put_failure; break; + case TUNNEL_GTP_OPT: + err = fl_dump_key_gtp_opt(skb, enc_opts); + if (err) + goto nla_put_failure; + break; default: goto nla_put_failure; } From patchwork Fri Mar 11 17:18:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 12778421 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7DC3CC433FE for ; Fri, 11 Mar 2022 17:18:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350714AbiCKRTo (ORCPT ); Fri, 11 Mar 2022 12:19:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47952 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350701AbiCKRTj (ORCPT ); Fri, 11 Mar 2022 12:19:39 -0500 Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EDD82E540D for ; Fri, 11 Mar 2022 09:18:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647019115; x=1678555115; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cOPa8b1ggar4hcTeucBpmRcDsrxkQql4RjdyytRuI/s=; b=hTzetwIsdgVA90okvAvmerPgHsgrpHyzfKbVrcgPkpmhq7hVKh1e+eXS F2fdSRa1DNmj77Tr3scMXtcECpSY53PvHBXJqyKolOdid0cb/baEjqPMz SAt22JYD4aDUvRSo4nDNr681/YD0ljmjXQwE1EJ2yk8YgT5fEOLVlROBr EFAA5AeEIKzyvlA0Yas6XRg4ASMSzYsUzLY8K742CfzI7BnusmGENdPyY YzfXvdHlVEakoNPfqzIbchJU0tVtW9PL8+bqg8aEG3KvFmOEzEdC+0u0b GMl6QMpsbVoa8Ct4knitWpQYh2azW0tQZsNcLGwa0EoTienapA1wgWWVJ g==; X-IronPort-AV: E=McAfee;i="6200,9189,10283"; a="316335131" X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="316335131" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Mar 2022 
09:18:34 -0800
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org
Cc: Wojciech Drewek, netdev@vger.kernel.org, anthony.l.nguyen@intel.com, marcin.szycik@linux.intel.com, michal.swiatkowski@linux.intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org
Subject: [PATCH net-next v11 5/7] gtp: Add support for checking GTP device type
Date: Fri, 11 Mar 2022 09:18:19 -0800
Message-Id: <20220311171821.3785992-6-anthony.l.nguyen@intel.com>
In-Reply-To: <20220311171821.3785992-1-anthony.l.nguyen@intel.com>
References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com>

From: Wojciech Drewek

Add a function that checks if a net device type is GTP.

Signed-off-by: Wojciech Drewek
Reviewed-by: Harald Welte
Signed-off-by: Tony Nguyen
---
 include/net/gtp.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/net/gtp.h b/include/net/gtp.h index 23c2aaae8a42..c1d6169df331 100644 --- a/include/net/gtp.h +++ b/include/net/gtp.h @@ -63,6 +63,12 @@ struct gtp_pdu_session_info { /* According to 3GPP TS 38.415. */ u8 qfi; }; +static inline bool netif_is_gtp(const struct net_device *dev) +{ + return dev->rtnl_link_ops && + !strcmp(dev->rtnl_link_ops->kind, "gtp"); +} + #define GTP1_F_NPDU 0x01 #define GTP1_F_SEQ 0x02 #define GTP1_F_EXTHDR 0x04

From patchwork Fri Mar 11 17:18:20 2022
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 12778422
X-Patchwork-Delegate: kuba@kernel.org
Received: from
fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Mar 2022 09:18:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="612215415" Received: from anguy11-desk2.jf.intel.com ([10.166.244.147]) by fmsmga004.fm.intel.com with ESMTP; 11 Mar 2022 09:18:34 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Michal Swiatkowski , netdev@vger.kernel.org, anthony.l.nguyen@intel.com, marcin.szycik@linux.intel.com, wojciech.drewek@intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org, Sandeep Penigalapati Subject: [PATCH net-next v11 6/7] ice: Fix FV offset searching Date: Fri, 11 Mar 2022 09:18:20 -0800 Message-Id: <20220311171821.3785992-7-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Michal Swiatkowski Checking only protocol ids while searching for correct FVs can lead to a situation, when incorrect FV will be added to the list. Incorrect means that FV has correct protocol id but incorrect offset. Call ice_get_sw_fv_list with ice_prot_lkup_ext struct which contains all protocol ids with offsets. With this modification allocating and collecting protocol ids list is not longer needed. Signed-off-by: Michal Swiatkowski Tested-by: Sandeep Penigalapati Signed-off-by: Tony Nguyen --- .../net/ethernet/intel/ice/ice_flex_pipe.c | 22 +++++------ .../net/ethernet/intel/ice/ice_flex_pipe.h | 2 +- drivers/net/ethernet/intel/ice/ice_switch.c | 39 +------------------ 3 files changed, 12 insertions(+), 51 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index 38fe0a7e6975..c9bb338789ed 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -1871,20 +1871,19 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, /** * ice_get_sw_fv_list * @hw: pointer to the HW structure - * @prot_ids: field vector to search for with a given protocol ID - * @ids_cnt: lookup/protocol count + * @lkups: list of protocol types * @bm: bitmap of field vectors to consider * @fv_list: Head of a list * * Finds all the field vector entries from switch block that contain - * a given protocol ID and returns a list of structures of type + * a given protocol ID and offset and returns a list of structures of type * "ice_sw_fv_list_entry". Every structure in the list has a field vector * definition and profile ID information * NOTE: The caller of the function is responsible for freeing the memory * allocated for every list entry. 
*/ int -ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, +ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, unsigned long *bm, struct list_head *fv_list) { struct ice_sw_fv_list_entry *fvl; @@ -1896,7 +1895,7 @@ ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, memset(&state, 0, sizeof(state)); - if (!ids_cnt || !hw->seg) + if (!lkups->n_val_words || !hw->seg) return -EINVAL; ice_seg = hw->seg; @@ -1915,20 +1914,17 @@ ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, if (!test_bit((u16)offset, bm)) continue; - for (i = 0; i < ids_cnt; i++) { + for (i = 0; i < lkups->n_val_words; i++) { int j; - /* This code assumes that if a switch field vector line - * has a matching protocol, then this line will contain - * the entries necessary to represent every field in - * that protocol header. - */ for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) - if (fv->ew[j].prot_id == prot_ids[i]) + if (fv->ew[j].prot_id == + lkups->fv_words[i].prot_id && + fv->ew[j].off == lkups->fv_words[i].off) break; if (j >= hw->blk[ICE_BLK_SW].es.fvw) break; - if (i + 1 == ids_cnt) { + if (i + 1 == lkups->n_val_words) { fvl = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*fvl), GFP_KERNEL); if (!fvl) diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h index 2fd5312494c7..9c530c86703e 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h @@ -87,7 +87,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type, void ice_init_prof_result_bm(struct ice_hw *hw); int -ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, +ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, unsigned long *bm, struct list_head *fv_list); int ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count); diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index d98aa35c0337..1f83bb3d73bb 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -4734,41 +4734,6 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm, return status; } -/** - * ice_get_fv - get field vectors/extraction sequences for spec. 
lookup types - * @hw: pointer to hardware structure - * @lkups: lookup elements or match criteria for the advanced recipe, one - * structure per protocol header - * @lkups_cnt: number of protocols - * @bm: bitmap of field vectors to consider - * @fv_list: pointer to a list that holds the returned field vectors - */ -static int -ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, - unsigned long *bm, struct list_head *fv_list) -{ - u8 *prot_ids; - int status; - u16 i; - - prot_ids = kcalloc(lkups_cnt, sizeof(*prot_ids), GFP_KERNEL); - if (!prot_ids) - return -ENOMEM; - - for (i = 0; i < lkups_cnt; i++) - if (!ice_prot_type_to_id(lkups[i].type, &prot_ids[i])) { - status = -EIO; - goto free_mem; - } - - /* Find field vectors that include all specified protocol types */ - status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, bm, fv_list); - -free_mem: - kfree(prot_ids); - return status; -} - /** * ice_tun_type_match_word - determine if tun type needs a match mask * @tun_type: tunnel type @@ -4917,11 +4882,11 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, /* Get bitmap of field vectors (profiles) that are compatible with the * rule request; only these will be searched in the subsequent call to - * ice_get_fv. + * ice_get_sw_fv_list. */ ice_get_compat_fv_bitmap(hw, rinfo, fv_bitmap); - status = ice_get_fv(hw, lkups, lkups_cnt, fv_bitmap, &rm->fv_list); + status = ice_get_sw_fv_list(hw, lkup_exts, fv_bitmap, &rm->fv_list); if (status) goto err_unroll; From patchwork Fri Mar 11 17:18:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Nguyen X-Patchwork-Id: 12778423 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A81CC433F5 for ; Fri, 11 Mar 2022 17:19:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350734AbiCKRUF (ORCPT ); Fri, 11 Mar 2022 12:20:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350718AbiCKRTx (ORCPT ); Fri, 11 Mar 2022 12:19:53 -0500 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 148F1EA767 for ; Fri, 11 Mar 2022 09:18:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1647019124; x=1678555124; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KObHFutrmOEH754RK9zoMjXEAGV+qV6/E0qwcq+ro7M=; b=jS5vBoDvOyYlCpOcVks+mkAYpyE/D0kcWm27BlAi7C8AY7jPh7bF5arH jNk2ZkY2KQXBpTkPl1215Cq9CjamvqtiKTtmeDXAVBGrEODq+EKLvv8IE kaBz3QdvBBFQvpzvMWNnp5YzGKX6pJJKlD3hq027lFnYsxMCjuAJoqUsP /1w1spxePYkk5jQimmEiTUXpoaCHBWngohlrykOdNVMcFKzyuc4+DDraB 8WnMFv0yK71xVYobrLS88zhlG/ZEF3e1iOq9DleKZ72lwVFuEAL8d0Rvl yfqR2obBJBQydltHvBMR2p/lVoRSsHHZiyW2LS7Q9LuDi4xTHEnwyPtc4 A==; X-IronPort-AV: E=McAfee;i="6200,9189,10283"; a="254441303" X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; d="scan'208";a="254441303" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Mar 2022 09:18:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.90,174,1643702400"; 
d="scan'208";a="612215426" Received: from anguy11-desk2.jf.intel.com ([10.166.244.147]) by fmsmga004.fm.intel.com with ESMTP; 11 Mar 2022 09:18:34 -0800 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org Cc: Marcin Szycik , netdev@vger.kernel.org, anthony.l.nguyen@intel.com, michal.swiatkowski@linux.intel.com, wojciech.drewek@intel.com, jiri@resnulli.us, pablo@netfilter.org, laforge@gnumonks.org, xiyou.wangcong@gmail.com, jhs@mojatatu.com, osmocom-net-gprs@lists.osmocom.org, Sandeep Penigalapati Subject: [PATCH net-next v11 7/7] ice: Support GTP-U and GTP-C offload in switchdev Date: Fri, 11 Mar 2022 09:18:21 -0800 Message-Id: <20220311171821.3785992-8-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> References: <20220311171821.3785992-1-anthony.l.nguyen@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Marcin Szycik Add support for creating filters for GTP-U and GTP-C in switchdev mode. Add support for parsing GTP-specific options (QFI and PDU type) and TEID. By default, a filter for GTP-U will be added. To add a filter for GTP-C, specify enc_dst_port = 2123, e.g.: tc filter add dev $GTP0 ingress prio 1 flower enc_key_id 1337 \ enc_dst_port 2123 action mirred egress redirect dev $VF1_PR Note: GTP-U with outer IPv6 offload is not supported yet. Note: GTP-U with no payload offload is not supported yet. Signed-off-by: Marcin Szycik Reviewed-by: Michal Swiatkowski Tested-by: Sandeep Penigalapati Signed-off-by: Tony Nguyen --- drivers/net/ethernet/intel/ice/ice.h | 1 + .../net/ethernet/intel/ice/ice_flex_pipe.c | 31 +- .../net/ethernet/intel/ice/ice_flex_type.h | 6 +- .../ethernet/intel/ice/ice_protocol_type.h | 19 + drivers/net/ethernet/intel/ice/ice_switch.c | 615 +++++++++++++++++- drivers/net/ethernet/intel/ice/ice_switch.h | 9 + drivers/net/ethernet/intel/ice/ice_tc_lib.c | 105 ++- drivers/net/ethernet/intel/ice/ice_tc_lib.h | 3 + 8 files changed, 765 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 23c39805a1b0..1a130ff562af 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -51,6 +51,7 @@ #include #include #include +#include #if IS_ENABLED(CONFIG_DCB) #include #endif /* CONFIG_DCB */ diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c index c9bb338789ed..6a336e8d4e4d 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c +++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c @@ -1804,16 +1804,43 @@ static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) return bld; } +static bool ice_is_gtp_u_profile(u16 prof_idx) +{ + return (prof_idx >= ICE_PROFID_IPV6_GTPU_TEID && + prof_idx <= ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER) || + prof_idx == ICE_PROFID_IPV4_GTPU_TEID; +} + +static bool ice_is_gtp_c_profile(u16 prof_idx) +{ + switch (prof_idx) { + case ICE_PROFID_IPV4_GTPC_TEID: + case ICE_PROFID_IPV4_GTPC_NO_TEID: + case ICE_PROFID_IPV6_GTPC_TEID: + case ICE_PROFID_IPV6_GTPC_NO_TEID: + return true; + default: + return false; + } +} + /** * ice_get_sw_prof_type - determine switch profile type * @hw: pointer to the HW structure * @fv: pointer to the switch field vector + * @prof_idx: profile index to check */ static enum ice_prof_type -ice_get_sw_prof_type(struct ice_hw *hw, struct ice_fv *fv) +ice_get_sw_prof_type(struct ice_hw *hw, 
struct ice_fv *fv, u32 prof_idx) { u16 i; + if (ice_is_gtp_c_profile(prof_idx)) + return ICE_PROF_TUN_GTPC; + + if (ice_is_gtp_u_profile(prof_idx)) + return ICE_PROF_TUN_GTPU; + for (i = 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) { /* UDP tunnel will have UDP_OF protocol ID and VNI offset */ if (fv->ew[i].prot_id == (u8)ICE_PROT_UDP_OF && @@ -1860,7 +1887,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, if (fv) { /* Determine field vector type */ - prof_type = ice_get_sw_prof_type(hw, fv); + prof_type = ice_get_sw_prof_type(hw, fv, offset); if (req_profs & prof_type) set_bit((u16)offset, bm); diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h index 5735e9542a49..974d14a83b2e 100644 --- a/drivers/net/ethernet/intel/ice/ice_flex_type.h +++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h @@ -417,6 +417,8 @@ enum ice_tunnel_type { TNL_VXLAN = 0, TNL_GENEVE, TNL_GRETAP, + TNL_GTPC, + TNL_GTPU, __TNL_TYPE_CNT, TNL_LAST = 0xFF, TNL_ALL = 0xFF, @@ -673,7 +675,9 @@ enum ice_prof_type { ICE_PROF_NON_TUN = 0x1, ICE_PROF_TUN_UDP = 0x2, ICE_PROF_TUN_GRE = 0x4, - ICE_PROF_TUN_ALL = 0x6, + ICE_PROF_TUN_GTPU = 0x8, + ICE_PROF_TUN_GTPC = 0x10, + ICE_PROF_TUN_ALL = 0x1E, ICE_PROF_ALL = 0xFF, }; diff --git a/drivers/net/ethernet/intel/ice/ice_protocol_type.h b/drivers/net/ethernet/intel/ice/ice_protocol_type.h index 385deaa021ac..3f64300b0e14 100644 --- a/drivers/net/ethernet/intel/ice/ice_protocol_type.h +++ b/drivers/net/ethernet/intel/ice/ice_protocol_type.h @@ -41,6 +41,8 @@ enum ice_protocol_type { ICE_VXLAN, ICE_GENEVE, ICE_NVGRE, + ICE_GTP, + ICE_GTP_NO_PAY, ICE_VXLAN_GPE, ICE_SCTP_IL, ICE_PROTOCOL_LAST @@ -52,6 +54,8 @@ enum ice_sw_tunnel_type { ICE_SW_TUN_VXLAN, ICE_SW_TUN_GENEVE, ICE_SW_TUN_NVGRE, + ICE_SW_TUN_GTPU, + ICE_SW_TUN_GTPC, ICE_ALL_TUNNELS /* All tunnel types including NVGRE */ }; @@ -182,6 +186,20 @@ struct ice_udp_tnl_hdr { __be32 vni; /* only use lower 24-bits */ }; +struct ice_udp_gtp_hdr { + u8 flags; + u8 msg_type; + __be16 rsrvd_len; + __be32 teid; + __be16 rsrvd_seq_nbr; + u8 rsrvd_n_pdu_nbr; + u8 rsrvd_next_ext; + u8 rsvrd_ext_len; + u8 pdu_type; + u8 qfi; + u8 rsvrd; +}; + struct ice_nvgre_hdr { __be16 flags; __be16 protocol; @@ -198,6 +216,7 @@ union ice_prot_hdr { struct ice_sctp_hdr sctp_hdr; struct ice_udp_tnl_hdr tnl_hdr; struct ice_nvgre_hdr nvgre_hdr; + struct ice_udp_gtp_hdr gtp_hdr; }; /* This is mapping table entry that maps every word within a given protocol diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index 1f83bb3d73bb..7f3d97595890 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -726,6 +726,495 @@ static const u8 dummy_vlan_udp_ipv6_packet[] = { 0x00, 0x00, /* 2 bytes for 4 byte alignment */ }; +/* Outer IPv4 + Outer UDP + GTP + Inner IPv4 + Inner TCP */ +static const +struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv4_tcp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_UDP_OF, 34 }, + { ICE_GTP, 42 }, + { ICE_IPV4_IL, 62 }, + { ICE_TCP_IL, 82 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv4_gtpu_ipv4_tcp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x08, 0x00, + + 0x45, 0x00, 0x00, 0x58, /* IP 14 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 34 */ + 0x00, 0x44, 
0x00, 0x00, + + 0x34, 0xff, 0x00, 0x34, /* ICE_GTP Header 42 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 54 */ + 0x00, 0x00, 0x00, 0x00, + + 0x45, 0x00, 0x00, 0x28, /* IP 62 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x06, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* TCP 82 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x50, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +/* Outer IPv4 + Outer UDP + GTP + Inner IPv4 + Inner UDP */ +static const +struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv4_udp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_UDP_OF, 34 }, + { ICE_GTP, 42 }, + { ICE_IPV4_IL, 62 }, + { ICE_UDP_ILOS, 82 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv4_gtpu_ipv4_udp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x08, 0x00, + + 0x45, 0x00, 0x00, 0x4c, /* IP 14 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 34 */ + 0x00, 0x38, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x28, /* ICE_GTP Header 42 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 54 */ + 0x00, 0x00, 0x00, 0x00, + + 0x45, 0x00, 0x00, 0x1c, /* IP 62 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* UDP 82 */ + 0x00, 0x08, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +/* Outer IPv6 + Outer UDP + GTP + Inner IPv4 + Inner TCP */ +static const +struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv6_tcp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_UDP_OF, 34 }, + { ICE_GTP, 42 }, + { ICE_IPV6_IL, 62 }, + { ICE_TCP_IL, 102 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv4_gtpu_ipv6_tcp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x08, 0x00, + + 0x45, 0x00, 0x00, 0x6c, /* IP 14 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 34 */ + 0x00, 0x58, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x48, /* ICE_GTP Header 42 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 54 */ + 0x00, 0x00, 0x00, 0x00, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 62 */ + 0x00, 0x14, 0x06, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* TCP 102 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x50, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_ipv4_gtpu_ipv6_udp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_UDP_OF, 34 }, + { ICE_GTP, 42 }, + { ICE_IPV6_IL, 62 }, + { ICE_UDP_ILOS, 102 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv4_gtpu_ipv6_udp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x08, 0x00, + + 0x45, 0x00, 0x00, 0x60, /* IP 14 */ 
+ 0x00, 0x00, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 34 */ + 0x00, 0x4c, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x3c, /* ICE_GTP Header 42 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 54 */ + 0x00, 0x00, 0x00, 0x00, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 62 */ + 0x00, 0x08, 0x11, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* UDP 102 */ + 0x00, 0x08, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv4_tcp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV6_OFOS, 14 }, + { ICE_UDP_OF, 54 }, + { ICE_GTP, 62 }, + { ICE_IPV4_IL, 82 }, + { ICE_TCP_IL, 102 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv6_gtpu_ipv4_tcp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x86, 0xdd, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 14 */ + 0x00, 0x44, 0x11, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 54 */ + 0x00, 0x44, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x34, /* ICE_GTP Header 62 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 74 */ + 0x00, 0x00, 0x00, 0x00, + + 0x45, 0x00, 0x00, 0x28, /* IP 82 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x06, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* TCP 102 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x50, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv4_udp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV6_OFOS, 14 }, + { ICE_UDP_OF, 54 }, + { ICE_GTP, 62 }, + { ICE_IPV4_IL, 82 }, + { ICE_UDP_ILOS, 102 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv6_gtpu_ipv4_udp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x86, 0xdd, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 14 */ + 0x00, 0x38, 0x11, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 54 */ + 0x00, 0x38, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x28, /* ICE_GTP Header 62 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 74 */ + 0x00, 0x00, 0x00, 0x00, + + 0x45, 0x00, 0x00, 0x1c, /* IP 82 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* UDP 102 */ + 0x00, 0x08, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv6_tcp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV6_OFOS, 14 }, + { ICE_UDP_OF, 54 }, + { ICE_GTP, 62 }, + { ICE_IPV6_IL, 82 }, + { ICE_TCP_IL, 122 }, + { 
ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv6_gtpu_ipv6_tcp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x86, 0xdd, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 14 */ + 0x00, 0x58, 0x11, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 54 */ + 0x00, 0x58, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x48, /* ICE_GTP Header 62 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 74 */ + 0x00, 0x00, 0x00, 0x00, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 82 */ + 0x00, 0x14, 0x06, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* TCP 122 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x50, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +static const +struct ice_dummy_pkt_offsets dummy_ipv6_gtpu_ipv6_udp_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV6_OFOS, 14 }, + { ICE_UDP_OF, 54 }, + { ICE_GTP, 62 }, + { ICE_IPV6_IL, 82 }, + { ICE_UDP_ILOS, 122 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv6_gtpu_ipv6_udp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* Ethernet 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x86, 0xdd, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 14 */ + 0x00, 0x4c, 0x11, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x08, 0x68, /* UDP 54 */ + 0x00, 0x4c, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x3c, /* ICE_GTP Header 62 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* GTP_PDUSession_ExtensionHeader 74 */ + 0x00, 0x00, 0x00, 0x00, + + 0x60, 0x00, 0x00, 0x00, /* IPv6 82 */ + 0x00, 0x08, 0x11, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, 0x00, 0x00, /* UDP 122 */ + 0x00, 0x08, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 byte alignment */ +}; + +static const u8 dummy_ipv4_gtpu_ipv4_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x08, 0x00, + + 0x45, 0x00, 0x00, 0x44, /* ICE_IPV4_OFOS 14 */ + 0x00, 0x00, 0x40, 0x00, + 0x40, 0x11, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x08, 0x68, 0x08, 0x68, /* ICE_UDP_OF 34 */ + 0x00, 0x00, 0x00, 0x00, + + 0x34, 0xff, 0x00, 0x28, /* ICE_GTP 42 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x85, + + 0x02, 0x00, 0x00, 0x00, /* PDU Session extension header */ + 0x00, 0x00, 0x00, 0x00, + + 0x45, 0x00, 0x00, 0x14, /* ICE_IPV4_IL 62 */ + 0x00, 0x00, 0x40, 0x00, + 0x40, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, +}; + +static const +struct ice_dummy_pkt_offsets dummy_ipv4_gtp_no_pay_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV4_OFOS, 14 }, + { ICE_UDP_OF, 34 }, + { ICE_GTP_NO_PAY, 42 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + 
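/* [Editorial annotation, not part of the patch.  The GTP-U dummy packets
 * above all share the same TS 29.281 header template: flags 0x34
 * (version 1, PT=1, E=1), message type 0xff (G-PDU), a 16-bit length,
 * a zeroed TEID, and next-extension-header type 0x85 (PDU Session
 * Container, TS 38.415), followed by an 8-byte extension whose PDU type
 * and QFI bytes are likewise left zero.  The zeroed fields are filled in
 * at rule-creation time by ice_fill_adv_dummy_packet(), which copies each
 * lookup element, masked by its m_u value, into the template at the
 * protocol offset recorded in the *_offsets tables.  A minimal sketch of
 * that stamping step follows; it is illustrative only and uses made-up
 * names. ]
 */
#if 0	/* illustrative sketch, not driver code */
static void example_stamp_teid(u8 *pkt, u16 gtp_off, __be32 teid)
{
	/* The TEID starts 4 bytes into the GTP header, after the flags,
	 * message type and length fields, e.g. at byte 46 for the IPv4
	 * outer templates where ICE_GTP sits at offset 42.
	 */
	memcpy(pkt + gtp_off + 4, &teid, sizeof(teid));
}
#endif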
+static const +struct ice_dummy_pkt_offsets dummy_ipv6_gtp_no_pay_packet_offsets[] = { + { ICE_MAC_OFOS, 0 }, + { ICE_IPV6_OFOS, 14 }, + { ICE_UDP_OF, 54 }, + { ICE_GTP_NO_PAY, 62 }, + { ICE_PROTOCOL_LAST, 0 }, +}; + +static const u8 dummy_ipv6_gtp_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x86, 0xdd, + + 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 14 */ + 0x00, 0x6c, 0x11, 0x00, /* Next header UDP*/ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x08, 0x68, 0x08, 0x68, /* ICE_UDP_OF 54 */ + 0x00, 0x00, 0x00, 0x00, + + 0x30, 0x00, 0x00, 0x28, /* ICE_GTP 62 */ + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, +}; + #define ICE_SW_RULE_RX_TX_ETH_HDR_SIZE \ (offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr) + \ (DUMMY_ETH_HDR_LEN * \ @@ -4057,7 +4546,9 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[ICE_PROTOCOL_LAST] = { { ICE_UDP_ILOS, { 0, 2 } }, { ICE_VXLAN, { 8, 10, 12, 14 } }, { ICE_GENEVE, { 8, 10, 12, 14 } }, - { ICE_NVGRE, { 0, 2, 4, 6 } }, + { ICE_NVGRE, { 0, 2, 4, 6 } }, + { ICE_GTP, { 8, 10, 12, 14, 16, 18, 20, 22 } }, + { ICE_GTP_NO_PAY, { 8, 10, 12, 14 } }, }; static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = { @@ -4075,7 +4566,9 @@ static struct ice_protocol_entry ice_prot_id_tbl[ICE_PROTOCOL_LAST] = { { ICE_UDP_ILOS, ICE_UDP_ILOS_HW }, { ICE_VXLAN, ICE_UDP_OF_HW }, { ICE_GENEVE, ICE_UDP_OF_HW }, - { ICE_NVGRE, ICE_GRE_OF_HW }, + { ICE_NVGRE, ICE_GRE_OF_HW }, + { ICE_GTP, ICE_UDP_OF_HW }, + { ICE_GTP_NO_PAY, ICE_UDP_ILOS_HW }, }; /** @@ -4745,6 +5238,8 @@ static bool ice_tun_type_match_word(enum ice_sw_tunnel_type tun_type, u16 *mask) case ICE_SW_TUN_GENEVE: case ICE_SW_TUN_VXLAN: case ICE_SW_TUN_NVGRE: + case ICE_SW_TUN_GTPU: + case ICE_SW_TUN_GTPC: *mask = ICE_TUN_FLAG_MASK; return true; @@ -4810,6 +5305,12 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo, case ICE_SW_TUN_NVGRE: prof_type = ICE_PROF_TUN_GRE; break; + case ICE_SW_TUN_GTPU: + prof_type = ICE_PROF_TUN_GTPU; + break; + case ICE_SW_TUN_GTPC: + prof_type = ICE_PROF_TUN_GTPC; + break; case ICE_SW_TUN_AND_NON_TUN: default: prof_type = ICE_PROF_ALL; @@ -5010,17 +5511,17 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, const u8 **pkt, u16 *pkt_len, const struct ice_dummy_pkt_offsets **offsets) { - bool tcp = false, udp = false, ipv6 = false, vlan = false; - bool ipv6_il = false; + bool inner_tcp = false, inner_udp = false, outer_ipv6 = false; + bool vlan = false, inner_ipv6 = false, gtp_no_pay = false; u16 i; for (i = 0; i < lkups_cnt; i++) { if (lkups[i].type == ICE_UDP_ILOS) - udp = true; + inner_udp = true; else if (lkups[i].type == ICE_TCP_IL) - tcp = true; + inner_tcp = true; else if (lkups[i].type == ICE_IPV6_OFOS) - ipv6 = true; + outer_ipv6 = true; else if (lkups[i].type == ICE_VLAN_OFOS) vlan = true; else if (lkups[i].type == ICE_ETYPE_OL && @@ -5028,29 +5529,103 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, cpu_to_be16(ICE_IPV6_ETHER_ID) && lkups[i].m_u.ethertype.ethtype_id == cpu_to_be16(0xFFFF)) - ipv6 = true; + outer_ipv6 = true; else if (lkups[i].type == ICE_ETYPE_IL && lkups[i].h_u.ethertype.ethtype_id == cpu_to_be16(ICE_IPV6_ETHER_ID) && lkups[i].m_u.ethertype.ethtype_id == cpu_to_be16(0xFFFF)) - ipv6_il = true; + inner_ipv6 = true; + else if (lkups[i].type == 
ICE_IPV6_IL) + inner_ipv6 = true; + else if (lkups[i].type == ICE_GTP_NO_PAY) + gtp_no_pay = true; + } + + if (tun_type == ICE_SW_TUN_GTPU) { + if (outer_ipv6) { + if (gtp_no_pay) { + *pkt = dummy_ipv6_gtp_packet; + *pkt_len = sizeof(dummy_ipv6_gtp_packet); + *offsets = dummy_ipv6_gtp_no_pay_packet_offsets; + } else if (inner_ipv6) { + if (inner_udp) { + *pkt = dummy_ipv6_gtpu_ipv6_udp_packet; + *pkt_len = sizeof(dummy_ipv6_gtpu_ipv6_udp_packet); + *offsets = dummy_ipv6_gtpu_ipv6_udp_packet_offsets; + } else { + *pkt = dummy_ipv6_gtpu_ipv6_tcp_packet; + *pkt_len = sizeof(dummy_ipv6_gtpu_ipv6_tcp_packet); + *offsets = dummy_ipv6_gtpu_ipv6_tcp_packet_offsets; + } + } else { + if (inner_udp) { + *pkt = dummy_ipv6_gtpu_ipv4_udp_packet; + *pkt_len = sizeof(dummy_ipv6_gtpu_ipv4_udp_packet); + *offsets = dummy_ipv6_gtpu_ipv4_udp_packet_offsets; + } else { + *pkt = dummy_ipv6_gtpu_ipv4_tcp_packet; + *pkt_len = sizeof(dummy_ipv6_gtpu_ipv4_tcp_packet); + *offsets = dummy_ipv6_gtpu_ipv4_tcp_packet_offsets; + } + } + } else { + if (gtp_no_pay) { + *pkt = dummy_ipv4_gtpu_ipv4_packet; + *pkt_len = sizeof(dummy_ipv4_gtpu_ipv4_packet); + *offsets = dummy_ipv4_gtp_no_pay_packet_offsets; + } else if (inner_ipv6) { + if (inner_udp) { + *pkt = dummy_ipv4_gtpu_ipv6_udp_packet; + *pkt_len = sizeof(dummy_ipv4_gtpu_ipv6_udp_packet); + *offsets = dummy_ipv4_gtpu_ipv6_udp_packet_offsets; + } else { + *pkt = dummy_ipv4_gtpu_ipv6_tcp_packet; + *pkt_len = sizeof(dummy_ipv4_gtpu_ipv6_tcp_packet); + *offsets = dummy_ipv4_gtpu_ipv6_tcp_packet_offsets; + } + } else { + if (inner_udp) { + *pkt = dummy_ipv4_gtpu_ipv4_udp_packet; + *pkt_len = sizeof(dummy_ipv4_gtpu_ipv4_udp_packet); + *offsets = dummy_ipv4_gtpu_ipv4_udp_packet_offsets; + } else { + *pkt = dummy_ipv4_gtpu_ipv4_tcp_packet; + *pkt_len = sizeof(dummy_ipv4_gtpu_ipv4_tcp_packet); + *offsets = dummy_ipv4_gtpu_ipv4_tcp_packet_offsets; + } + } + } + return; + } + + if (tun_type == ICE_SW_TUN_GTPC) { + if (outer_ipv6) { + *pkt = dummy_ipv6_gtp_packet; + *pkt_len = sizeof(dummy_ipv6_gtp_packet); + *offsets = dummy_ipv6_gtp_no_pay_packet_offsets; + } else { + *pkt = dummy_ipv4_gtpu_ipv4_packet; + *pkt_len = sizeof(dummy_ipv4_gtpu_ipv4_packet); + *offsets = dummy_ipv4_gtp_no_pay_packet_offsets; + } + return; } if (tun_type == ICE_SW_TUN_NVGRE) { - if (tcp && ipv6_il) { + if (inner_tcp && inner_ipv6) { *pkt = dummy_gre_ipv6_tcp_packet; *pkt_len = sizeof(dummy_gre_ipv6_tcp_packet); *offsets = dummy_gre_ipv6_tcp_packet_offsets; return; } - if (tcp) { + if (inner_tcp) { *pkt = dummy_gre_tcp_packet; *pkt_len = sizeof(dummy_gre_tcp_packet); *offsets = dummy_gre_tcp_packet_offsets; return; } - if (ipv6_il) { + if (inner_ipv6) { *pkt = dummy_gre_ipv6_udp_packet; *pkt_len = sizeof(dummy_gre_ipv6_udp_packet); *offsets = dummy_gre_ipv6_udp_packet_offsets; @@ -5064,19 +5639,19 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE) { - if (tcp && ipv6_il) { + if (inner_tcp && inner_ipv6) { *pkt = dummy_udp_tun_ipv6_tcp_packet; *pkt_len = sizeof(dummy_udp_tun_ipv6_tcp_packet); *offsets = dummy_udp_tun_ipv6_tcp_packet_offsets; return; } - if (tcp) { + if (inner_tcp) { *pkt = dummy_udp_tun_tcp_packet; *pkt_len = sizeof(dummy_udp_tun_tcp_packet); *offsets = dummy_udp_tun_tcp_packet_offsets; return; } - if (ipv6_il) { + if (inner_ipv6) { *pkt = dummy_udp_tun_ipv6_udp_packet; *pkt_len = sizeof(dummy_udp_tun_ipv6_udp_packet); *offsets = dummy_udp_tun_ipv6_udp_packet_offsets; @@ -5088,7 +5663,7 @@ 
ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, return; } - if (udp && !ipv6) { + if (inner_udp && !outer_ipv6) { if (vlan) { *pkt = dummy_vlan_udp_packet; *pkt_len = sizeof(dummy_vlan_udp_packet); @@ -5099,7 +5674,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, *pkt_len = sizeof(dummy_udp_packet); *offsets = dummy_udp_packet_offsets; return; - } else if (udp && ipv6) { + } else if (inner_udp && outer_ipv6) { if (vlan) { *pkt = dummy_vlan_udp_ipv6_packet; *pkt_len = sizeof(dummy_vlan_udp_ipv6_packet); @@ -5110,7 +5685,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, *pkt_len = sizeof(dummy_udp_ipv6_packet); *offsets = dummy_udp_ipv6_packet_offsets; return; - } else if ((tcp && ipv6) || ipv6) { + } else if ((inner_tcp && outer_ipv6) || outer_ipv6) { if (vlan) { *pkt = dummy_vlan_tcp_ipv6_packet; *pkt_len = sizeof(dummy_vlan_tcp_ipv6_packet); @@ -5216,6 +5791,10 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, case ICE_GENEVE: len = sizeof(struct ice_udp_tnl_hdr); break; + case ICE_GTP_NO_PAY: + case ICE_GTP: + len = sizeof(struct ice_udp_gtp_hdr); + break; default: return -EINVAL; } diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h index 7b42c51a3eb0..ed3d1d03befa 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.h +++ b/drivers/net/ethernet/intel/ice/ice_switch.h @@ -14,6 +14,15 @@ #define ICE_VSI_INVAL_ID 0xffff #define ICE_INVAL_Q_HANDLE 0xFFFF +/* Switch Profile IDs for Profile related switch rules */ +#define ICE_PROFID_IPV4_GTPC_TEID 41 +#define ICE_PROFID_IPV4_GTPC_NO_TEID 42 +#define ICE_PROFID_IPV4_GTPU_TEID 43 +#define ICE_PROFID_IPV6_GTPC_TEID 44 +#define ICE_PROFID_IPV6_GTPC_NO_TEID 45 +#define ICE_PROFID_IPV6_GTPU_TEID 46 +#define ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER 70 + #define ICE_SW_RULE_RX_TX_NO_HDR_SIZE \ (offsetof(struct ice_aqc_sw_rules_elem, pdata.lkup_tx_rx.hdr)) diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c index fedc310c376c..3acd9f921c44 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -27,6 +27,9 @@ ice_tc_count_lkups(u32 flags, struct ice_tc_flower_lyr_2_4_hdrs *headers, if (flags & ICE_TC_FLWR_FIELD_ENC_DST_MAC) lkups_cnt++; + if (flags & ICE_TC_FLWR_FIELD_ENC_OPTS) + lkups_cnt++; + if (flags & (ICE_TC_FLWR_FIELD_ENC_SRC_IPV4 | ICE_TC_FLWR_FIELD_ENC_DEST_IPV4 | ICE_TC_FLWR_FIELD_ENC_SRC_IPV6 | @@ -102,6 +105,11 @@ ice_proto_type_from_tunnel(enum ice_tunnel_type type) return ICE_GENEVE; case TNL_GRETAP: return ICE_NVGRE; + case TNL_GTPU: + /* NO_PAY profiles will not work with GTP-U */ + return ICE_GTP; + case TNL_GTPC: + return ICE_GTP_NO_PAY; default: return 0; } @@ -117,6 +125,10 @@ ice_sw_type_from_tunnel(enum ice_tunnel_type type) return ICE_SW_TUN_GENEVE; case TNL_GRETAP: return ICE_SW_TUN_NVGRE; + case TNL_GTPU: + return ICE_SW_TUN_GTPU; + case TNL_GTPC: + return ICE_SW_TUN_GTPC; default: return ICE_NON_TUN; } @@ -143,7 +155,15 @@ ice_tc_fill_tunnel_outer(u32 flags, struct ice_tc_flower_fltr *fltr, break; case TNL_GRETAP: list[i].h_u.nvgre_hdr.tni_flow = fltr->tenant_id; - memcpy(&list[i].m_u.nvgre_hdr.tni_flow, "\xff\xff\xff\xff", 4); + memcpy(&list[i].m_u.nvgre_hdr.tni_flow, + "\xff\xff\xff\xff", 4); + i++; + break; + case TNL_GTPC: + case TNL_GTPU: + list[i].h_u.gtp_hdr.teid = fltr->tenant_id; + memcpy(&list[i].m_u.gtp_hdr.teid, + "\xff\xff\xff\xff", 4); i++; break; default: @@ 
-160,6 +180,24 @@ ice_tc_fill_tunnel_outer(u32 flags, struct ice_tc_flower_fltr *fltr, i++; } + if (flags & ICE_TC_FLWR_FIELD_ENC_OPTS && + (fltr->tunnel_type == TNL_GTPU || fltr->tunnel_type == TNL_GTPC)) { + list[i].type = ice_proto_type_from_tunnel(fltr->tunnel_type); + + if (fltr->gtp_pdu_info_masks.pdu_type) { + list[i].h_u.gtp_hdr.pdu_type = + fltr->gtp_pdu_info_keys.pdu_type << 4; + memcpy(&list[i].m_u.gtp_hdr.pdu_type, "\xf0", 1); + } + + if (fltr->gtp_pdu_info_masks.qfi) { + list[i].h_u.gtp_hdr.qfi = fltr->gtp_pdu_info_keys.qfi; + memcpy(&list[i].m_u.gtp_hdr.qfi, "\x3f", 1); + } + + i++; + } + if (flags & (ICE_TC_FLWR_FIELD_ENC_SRC_IPV4 | ICE_TC_FLWR_FIELD_ENC_DEST_IPV4)) { list[i].type = ice_proto_type_from_ipv4(false); @@ -361,6 +399,12 @@ static int ice_tc_tun_get_type(struct net_device *tunnel_dev) if (netif_is_gretap(tunnel_dev) || netif_is_ip6gretap(tunnel_dev)) return TNL_GRETAP; + + /* Assume GTP-U by default in case of GTP netdev. + * GTP-C may be selected later, based on enc_dst_port. + */ + if (netif_is_gtp(tunnel_dev)) + return TNL_GTPU; return TNL_LAST; } @@ -760,6 +804,40 @@ ice_get_tunnel_device(struct net_device *dev, struct flow_rule *rule) return NULL; } +/** + * ice_parse_gtp_type - Sets GTP tunnel type to GTP-U or GTP-C + * @match: Flow match structure + * @fltr: Pointer to filter structure + * + * GTP-C/GTP-U is selected based on destination port number (enc_dst_port). + * Before calling this function, fltr->tunnel_type should be set to TNL_GTPU, + * therefore making GTP-U the default choice (when destination port number is + * not specified). + */ +static int +ice_parse_gtp_type(struct flow_match_ports match, + struct ice_tc_flower_fltr *fltr) +{ + u16 dst_port; + + if (match.key->dst) { + dst_port = be16_to_cpu(match.key->dst); + + switch (dst_port) { + case 2152: + break; + case 2123: + fltr->tunnel_type = TNL_GTPC; + break; + default: + NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported GTP port number"); + return -EINVAL; + } + } + + return 0; +} + static int ice_parse_tunnel_attr(struct net_device *dev, struct flow_rule *rule, struct ice_tc_flower_fltr *fltr) @@ -815,8 +893,28 @@ ice_parse_tunnel_attr(struct net_device *dev, struct flow_rule *rule, struct flow_match_ports match; flow_rule_match_enc_ports(rule, &match); - if (ice_tc_set_port(match, fltr, headers, true)) - return -EINVAL; + + if (fltr->tunnel_type != TNL_GTPU) { + if (ice_tc_set_port(match, fltr, headers, true)) + return -EINVAL; + } else { + if (ice_parse_gtp_type(match, fltr)) + return -EINVAL; + } + } + + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_OPTS)) { + struct flow_match_enc_opts match; + + flow_rule_match_enc_opts(rule, &match); + + memcpy(&fltr->gtp_pdu_info_keys, &match.key->data[0], + sizeof(struct gtp_pdu_session_info)); + + memcpy(&fltr->gtp_pdu_info_masks, &match.mask->data[0], + sizeof(struct gtp_pdu_session_info)); + + fltr->flags |= ICE_TC_FLWR_FIELD_ENC_OPTS; } return 0; @@ -854,6 +952,7 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi, BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | BIT(FLOW_DISSECTOR_KEY_ENC_PORTS) | + BIT(FLOW_DISSECTOR_KEY_ENC_OPTS) | BIT(FLOW_DISSECTOR_KEY_ENC_IP) | BIT(FLOW_DISSECTOR_KEY_PORTS))) { NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported key used"); diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.h b/drivers/net/ethernet/intel/ice/ice_tc_lib.h index 319049477959..e25e958f4396 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.h +++ 
b/drivers/net/ethernet/intel/ice/ice_tc_lib.h @@ -22,6 +22,7 @@ #define ICE_TC_FLWR_FIELD_ENC_SRC_L4_PORT BIT(15) #define ICE_TC_FLWR_FIELD_ENC_DST_MAC BIT(16) #define ICE_TC_FLWR_FIELD_ETH_TYPE_ID BIT(17) +#define ICE_TC_FLWR_FIELD_ENC_OPTS BIT(18) #define ICE_TC_FLOWER_MASK_32 0xFFFFFFFF @@ -119,6 +120,8 @@ struct ice_tc_flower_fltr { struct ice_tc_flower_lyr_2_4_hdrs inner_headers; struct ice_vsi *src_vsi; __be32 tenant_id; + struct gtp_pdu_session_info gtp_pdu_info_keys; + struct gtp_pdu_session_info gtp_pdu_info_masks; u32 flags; u8 tunnel_type; struct ice_tc_flower_action action;
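[Editor's note, illustrative only, not part of the series.] The "<< 4" shift and the "\xf0" / "\x3f" masks applied in ice_tc_fill_tunnel_outer() above follow the PDU Session Container layout of 3GPP TS 38.415: the PDU type occupies the upper four bits of its octet, while the QFI occupies the lower six bits of its octet. A minimal sketch of that packing, using hypothetical helper and parameter names:

static void example_pack_gtp_opts(u8 *pdu_type_byte, u8 *qfi_byte,
				  u8 pdu_type, u8 qfi)
{
	*pdu_type_byte = (pdu_type & 0x0f) << 4;	/* PDU type: high nibble */
	*qfi_byte = qfi & 0x3f;				/* QFI: low six bits */
}

For example, a downlink PDU session (PDU type 0) with QFI 8, as carried by a tc flower rule's GTP tunnel options, would yield 0x00 and 0x08 in those two header bytes.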