
[net-next] net: qualcomm: rmnet: Add side band flow control support

Message ID 20230920003337.1317132-1-quic_subashab@quicinc.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [net-next] net: qualcomm: rmnet: Add side band flow control support

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 5499 this patch: 4183
netdev/cc_maintainers success CCed 7 of 7 maintainers
netdev/build_clang fail Errors and warnings before: 1672 this patch: 336
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 5879 this patch: 4540
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns WARNING: line length of 82 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 87 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Subash Abhinov Kasiviswanathan (KS) Sept. 20, 2023, 12:33 a.m. UTC
Individual rmnet devices map to specific network types such as internet,
multimedia messaging services, IP multimedia subsystem etc. Each of
these network types may support varying quality of service for different
bearers or traffic types.

The physical device interconnect to radio hardware may support a
higher data rate than what is actually supported by the radio network.
Any packets transmitted to the radio hardware which exceed the radio
network data rate limit may be dropped. This patch tries to minimize
packet loss by adding support for bearer-level flow control within an
rmnet device, ensuring that the packets transmitted do not exceed the
limit allowed by the radio network.

In order to support multiple bearers, rmnet must be created as a
multiqueue TX netdevice. Radio hardware communicates the supported
bearer information for a given network via side band signalling.
Consider the following mapping -

IPv4 UDP port 1234 - Mark 0x1001 - Queue 1
IPv6 TCP port 2345 - Mark 0x2001 - Queue 2

iptables can be used to install filters which mark packets matching these
specific traffic patterns, and the RMNET_QUEUE_MAPPING_ADD operation can
then be used to install the mapping of the mark to the specific txqueue.

If the traffic limit is exceeded for a particular bearer, radio hardware
would notify that the bearer cannot accept more packets and the
corresponding txqueue traffic can be stopped using RMNET_QUEUE_DISABLE.

Conversely, if radio hardware can send more traffic for a particular
bearer, RMNET_QUEUE_ENABLE can be used to allow traffic on that
particular txqueue. RMNET_QUEUE_MAPPING_REMOVE can be used to remove the
mark to queue mapping in case the radio network doesn't support that
particular bearer any longer.

Signed-off-by: Sean Tranchetti <quic_stranche@quicinc.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
---
 .../ethernet/qualcomm/rmnet/rmnet_config.c    | 98 ++++++++++++++++++-
 .../ethernet/qualcomm/rmnet/rmnet_config.h    |  2 +
 .../net/ethernet/qualcomm/rmnet/rmnet_vnd.c   | 21 ++++
 include/uapi/linux/if_link.h                  | 16 +++
 4 files changed, 136 insertions(+), 1 deletion(-)
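
For illustration only (not part of the patch): the interface described above
could be driven from userspace roughly as below. The structure layout matches
the UAPI addition in this patch; carrying it in the IFLA_RMNET_QUEUE attribute
of an RTM_NEWLINK changelink request is assumed to follow the usual
IFLA_LINKINFO / IFLA_INFO_DATA nesting used by rtnl_link drivers.

	/* Map fwmark 0x1001 to txqueue 1 ... */
	struct rmnet_queue_mapping add_map = {
		.operation = RMNET_QUEUE_MAPPING_ADD,
		.txqueue   = 1,
		.mark      = 0x1001,
	};

	/* ... and later stop that queue when the radio network signals
	 * congestion. The txqueue is looked up from the stored mapping,
	 * so only the mark needs to be supplied here.
	 */
	struct rmnet_queue_mapping stop_q = {
		.operation = RMNET_QUEUE_DISABLE,
		.mark      = 0x1001,
	};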

Comments

Vadim Fedorenko Sept. 21, 2023, 10:51 a.m. UTC | #1
On 20/09/2023 01:33, Subash Abhinov Kasiviswanathan wrote:
> Individual rmnet devices map to specific network types such as internet,
> multimedia messaging services, IP multimedia subsystem etc. Each of
> these network types may support varying quality of service for different
> bearers or traffic types.
> 
> The physical device interconnect to radio hardware may support a
> higher data rate than what is actually supported by the radio network.
> Any packets transmitted to the radio hardware which exceed the radio
> network data rate limit may be dropped. This patch tries to minimize
> packet loss by adding support for bearer-level flow control within an
> rmnet device, ensuring that the packets transmitted do not exceed the
> limit allowed by the radio network.
> 
> In order to support multiple bearers, rmnet must be created as a
> multiqueue TX netdevice. Radio hardware communicates the supported
> bearer information for a given network via side band signalling.
> Consider the following mapping -
> 
> IPv4 UDP port 1234 - Mark 0x1001 - Queue 1
> IPv6 TCP port 2345 - Mark 0x2001 - Queue 2
> 
> iptables can be used to install filters which mark packets matching these
> specific traffic patterns, and the RMNET_QUEUE_MAPPING_ADD operation can
> then be used to install the mapping of the mark to the specific txqueue.
> 
> If the traffic limit is exceeded for a particular bearer, radio hardware
> would notify that the bearer cannot accept more packets and the
> corresponding txqueue traffic can be stopped using RMNET_QUEUE_DISABLE.
> 
> Conversely, if radio hardware can send more traffic for a particular
> bearer, RMNET_QUEUE_ENABLE can be used to allow traffic on that
> particular txqueue. RMNET_QUEUE_MAPPING_REMOVE can be used to remove the
> mark to queue mapping in case the radio network doesn't support that
> particular bearer any longer.
> 
> Signed-off-by: Sean Tranchetti <quic_stranche@quicinc.com>
> Signed-off-by: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>

[ ... ]

> +	case RMNET_QUEUE_ENABLE:
> +	case RMNET_QUEUE_DISABLE:
> +		p = xa_load(&priv->queue_map, mark);
> +		if (p && xa_is_value(p)) {
> +			txq = xa_is_value(p);

                  should it be xa_to_value(p)? otherwise txq is always 1

> +			if (txq >= dev->num_tx_queues) {
> +				NL_SET_ERR_MSG_MOD(extack,
> +						   "queue mapping limit exceeded");
> +				return -EINVAL;
> +			}
> +
> +			q = netdev_get_tx_queue(dev, txq);
> +			if (unlikely(!q)) {
> +				NL_SET_ERR_MSG_MOD(extack,
> +						   "unsupported queue mapping");
> +				return -EINVAL;
> +			}
> +

[ ... ]

>   
>   struct rmnet_port *rmnet_get_port_rcu(struct net_device *real_dev);
> diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
> index 046b5f7d8e7c..6dfee4a4d634 100644
> --- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
> +++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
> @@ -1,5 +1,6 @@
>   // SPDX-License-Identifier: GPL-2.0-only
>   /* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
> + * Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
>    *
>    * RMNET Data virtual network driver
>    */
> @@ -158,6 +159,23 @@ static void rmnet_get_stats64(struct net_device *dev,
>   	s->tx_dropped = total_stats.tx_drops;
>   }
>   
> +static u16 rmnet_vnd_select_queue(struct net_device *dev,
> +				  struct sk_buff *skb,
> +				  struct net_device *sb_dev)
> +{
> +	struct rmnet_priv *priv = netdev_priv(dev);
> +	void *p = xa_load(&priv->queue_map, skb->mark);
> +	u8 txq;
> +
> +	if (!p && !xa_is_value(p))
> +		return 0;

The check is meaningless. I believe you were thinking about

if (!p || !xa_is_value(p))

But the main question: is it even possible to get something from xarray
that won't pass the check? queue_map is filled in the same code; there
is no way (I believe) to change it from anywhere else, right?

> +
> +	txq = xa_to_value(p);
> +
> +	netdev_dbg(dev, "mark %u -> txq %u\n", skb->mark, txq);
> +	return (txq < dev->num_tx_queues) ? txq : 0;
> +}
Subash Abhinov Kasiviswanathan (KS) Sept. 21, 2023, 3:38 p.m. UTC | #2
On 9/21/2023 4:51 AM, Vadim Fedorenko wrote:
> On 20/09/2023 01:33, Subash Abhinov Kasiviswanathan wrote:
>> Individual rmnet devices map to specific network types such as internet,
>> multimedia messaging services, IP multimedia subsystem etc. Each of
>> these network types may support varying quality of service for different
>> bearers or traffic types.
>>
>                   should it be xa_to_value(p)? otherwise txq is always 1
> 

Agree, this does indeed need to be xa_to_value(p)

>> +static u16 rmnet_vnd_select_queue(struct net_device *dev,
>> +                  struct sk_buff *skb,
>> +                  struct net_device *sb_dev)
>> +{
>> +    struct rmnet_priv *priv = netdev_priv(dev);
>> +    void *p = xa_load(&priv->queue_map, skb->mark);
>> +    u8 txq;
>> +
>> +    if (!p && !xa_is_value(p))
>> +        return 0;
> 
> The check is meaningless. I believe you were thinking about
> 
> if (!p || !xa_is_value(p))
> 
> But the main question: is it even possible to get something from xarray
> that won't pass the check? queue_map is filled in the same code; there
> is no way (I believe) to change it from anywhere else, right?
> 
>> +
>> +    txq = xa_to_value(p);
>> +
>> +    netdev_dbg(dev, "mark %u -> txq %u\n", skb->mark, txq);
>> +    return (txq < dev->num_tx_queues) ? txq : 0;
>> +}


I'll update these checks as well.
It is not possible for the txq value to exceed the tx_queue limit for a 
given rmnet device.
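
Taking both comments together, the lookup in the next revision would
become something like this (the same pattern applies in
rmnet_update_queue_map() and rmnet_vnd_select_queue(), with the
appropriate return value on a miss):

	p = xa_load(&priv->queue_map, mark);
	if (!p || !xa_is_value(p))
		return -EINVAL;	/* or 0, the default queue, in ndo_select_queue */

	txq = xa_to_value(p);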

>
kernel test robot Sept. 23, 2023, 7:46 p.m. UTC | #3
Hi Subash,

kernel test robot noticed the following build errors:

[auto build test ERROR on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Subash-Abhinov-Kasiviswanathan/net-qualcomm-rmnet-Add-side-band-flow-control-support/20230920-083445
base:   net-next/main
patch link:    https://lore.kernel.org/r/20230920003337.1317132-1-quic_subashab%40quicinc.com
patch subject: [PATCH net-next] net: qualcomm: rmnet: Add side band flow control support
config: i386-randconfig-016-20230923 (https://download.01.org/0day-ci/archive/20230924/202309240339.EjUjVM7v-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230924/202309240339.EjUjVM7v-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202309240339.EjUjVM7v-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from usr/include/linux/netdevice.h:32,
                    from ./usr/include/linux/if_arp.h:27,
                    from <command-line>:
>> usr/include/linux/if_link.h:1389:9: error: unknown type name 'u8'
    1389 |         u8 operation;
         |         ^~
   usr/include/linux/if_link.h:1390:9: error: unknown type name 'u8'
    1390 |         u8 txqueue;
         |         ^~
>> usr/include/linux/if_link.h:1391:9: error: unknown type name 'u16'
    1391 |         u16 padding;
         |         ^~~
>> usr/include/linux/if_link.h:1392:9: error: unknown type name 'u32'
    1392 |         u32 mark;
         |         ^~~
--
   In file included from <command-line>:
>> ./usr/include/linux/if_link.h:1389:9: error: unknown type name 'u8'
    1389 |         u8 operation;
         |         ^~
   ./usr/include/linux/if_link.h:1390:9: error: unknown type name 'u8'
    1390 |         u8 txqueue;
         |         ^~
>> ./usr/include/linux/if_link.h:1391:9: error: unknown type name 'u16'
    1391 |         u16 padding;
         |         ^~~
>> ./usr/include/linux/if_link.h:1392:9: error: unknown type name 'u32'
    1392 |         u32 mark;
         |         ^~~
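
The errors above come from using kernel-internal type names in an exported
header: structures in include/uapi are expected to use the __uN types from
<linux/types.h>. A fixed layout for the next revision would presumably be:

	struct rmnet_queue_mapping {
		__u8	operation;
		__u8	txqueue;
		__u16	padding;
		__u32	mark;
	};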

Patch

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
index 39d24e07f306..2ef5c66757e9 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
@@ -1,5 +1,6 @@ 
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
  *
  * RMNET configuration engine
  */
@@ -19,6 +20,7 @@ 
 static const struct nla_policy rmnet_policy[IFLA_RMNET_MAX + 1] = {
 	[IFLA_RMNET_MUX_ID]	= { .type = NLA_U16 },
 	[IFLA_RMNET_FLAGS]	= { .len = sizeof(struct ifla_rmnet_flags) },
+	[IFLA_RMNET_QUEUE]	= { .len = sizeof(struct rmnet_queue_mapping) },
 };
 
 static int rmnet_is_real_dev_registered(const struct net_device *real_dev)
@@ -88,6 +90,68 @@  static int rmnet_register_real_device(struct net_device *real_dev,
 	return 0;
 }
 
+static int rmnet_update_queue_map(struct net_device *dev, u8 operation,
+				  u8 txqueue, u32 mark,
+				  struct netlink_ext_ack *extack)
+{
+	struct rmnet_priv *priv = netdev_priv(dev);
+	struct netdev_queue *q;
+	void *p;
+	u8 txq;
+
+	if (unlikely(txqueue >= dev->num_tx_queues)) {
+		NL_SET_ERR_MSG_MOD(extack, "invalid txqueue");
+		return -EINVAL;
+	}
+
+	switch (operation) {
+	case RMNET_QUEUE_MAPPING_ADD:
+		p = xa_store(&priv->queue_map, mark, xa_mk_value(txqueue), GFP_ATOMIC);
+		if (xa_is_err(p)) {
+			NL_SET_ERR_MSG_MOD(extack, "unable to add mapping");
+			return -EINVAL;
+		}
+		break;
+	case RMNET_QUEUE_MAPPING_REMOVE:
+		p = xa_erase(&priv->queue_map, mark);
+		if (xa_is_err(p)) {
+			NL_SET_ERR_MSG_MOD(extack, "unable to remove mapping");
+			return -EINVAL;
+		}
+		break;
+	case RMNET_QUEUE_ENABLE:
+	case RMNET_QUEUE_DISABLE:
+		p = xa_load(&priv->queue_map, mark);
+		if (p && xa_is_value(p)) {
+			txq = xa_is_value(p);
+			if (txq >= dev->num_tx_queues) {
+				NL_SET_ERR_MSG_MOD(extack,
+						   "queue mapping limit exceeded");
+				return -EINVAL;
+			}
+
+			q = netdev_get_tx_queue(dev, txq);
+			if (unlikely(!q)) {
+				NL_SET_ERR_MSG_MOD(extack,
+						   "unsupported queue mapping");
+				return -EINVAL;
+			}
+
+			operation == RMNET_QUEUE_ENABLE ? netif_tx_wake_queue(q) :
+							  netif_tx_stop_queue(q);
+		} else {
+			NL_SET_ERR_MSG_MOD(extack, "invalid queue mapping");
+			return -EINVAL;
+		}
+		break;
+	default:
+		NL_SET_ERR_MSG_MOD(extack, "unsupported operation");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static void rmnet_unregister_bridge(struct rmnet_port *port)
 {
 	struct net_device *bridge_dev, *real_dev, *rmnet_dev;
@@ -175,8 +239,24 @@  static int rmnet_newlink(struct net *src_net, struct net_device *dev,
 	netdev_dbg(dev, "data format [0x%08X]\n", data_format);
 	port->data_format = data_format;
 
+	if (data[IFLA_RMNET_QUEUE]) {
+		struct rmnet_queue_mapping *queue_map;
+
+		queue_map = nla_data(data[IFLA_RMNET_QUEUE]);
+		if (rmnet_update_queue_map(dev, queue_map->operation,
+					   queue_map->txqueue, queue_map->mark,
+					   extack))
+			goto err3;
+
+		netdev_dbg(dev, "op %02x txq %02x mark %08x\n",
+			   queue_map->operation, queue_map->txqueue,
+			   queue_map->mark);
+	}
+
 	return 0;
 
+err3:
+	hlist_del_init_rcu(&ep->hlnode);
 err2:
 	unregister_netdevice(dev);
 	rmnet_vnd_dellink(mux_id, port, ep);
@@ -352,6 +432,20 @@  static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
 		}
 	}
 
+	if (data[IFLA_RMNET_QUEUE]) {
+		struct rmnet_queue_mapping *queue_map;
+
+		queue_map = nla_data(data[IFLA_RMNET_QUEUE]);
+		if (rmnet_update_queue_map(dev, queue_map->operation,
+					   queue_map->txqueue, queue_map->mark,
+					   extack))
+			return -EINVAL;
+
+		netdev_dbg(dev, "op %02x txq %02x mark %08x\n",
+			   queue_map->operation, queue_map->txqueue,
+			   queue_map->mark);
+	}
+
 	return 0;
 }
 
@@ -361,7 +455,9 @@  static size_t rmnet_get_size(const struct net_device *dev)
 		/* IFLA_RMNET_MUX_ID */
 		nla_total_size(2) +
 		/* IFLA_RMNET_FLAGS */
-		nla_total_size(sizeof(struct ifla_rmnet_flags));
+		nla_total_size(sizeof(struct ifla_rmnet_flags)) +
+		/* IFLA_RMNET_QUEUE */
+		nla_total_size(sizeof(struct rmnet_queue_mapping));
 }
 
 static int rmnet_fill_info(struct sk_buff *skb, const struct net_device *dev)
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index ed112d51ac5a..ae8300fc5ed7 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -1,6 +1,7 @@ 
 /* SPDX-License-Identifier: GPL-2.0-only */
 /* Copyright (c) 2013-2014, 2016-2018, 2021 The Linux Foundation.
  * All rights reserved.
+ * Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
  *
  * RMNET Data configuration engine
  */
@@ -87,6 +88,7 @@  struct rmnet_priv {
 	struct rmnet_pcpu_stats __percpu *pcpu_stats;
 	struct gro_cells gro_cells;
 	struct rmnet_priv_stats stats;
+	struct xarray queue_map;
 };
 
 struct rmnet_port *rmnet_get_port_rcu(struct net_device *real_dev);
diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index 046b5f7d8e7c..6dfee4a4d634 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -1,5 +1,6 @@ 
 // SPDX-License-Identifier: GPL-2.0-only
 /* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
  *
  * RMNET Data virtual network driver
  */
@@ -158,6 +159,23 @@  static void rmnet_get_stats64(struct net_device *dev,
 	s->tx_dropped = total_stats.tx_drops;
 }
 
+static u16 rmnet_vnd_select_queue(struct net_device *dev,
+				  struct sk_buff *skb,
+				  struct net_device *sb_dev)
+{
+	struct rmnet_priv *priv = netdev_priv(dev);
+	void *p = xa_load(&priv->queue_map, skb->mark);
+	u8 txq;
+
+	if (!p && !xa_is_value(p))
+		return 0;
+
+	txq = xa_to_value(p);
+
+	netdev_dbg(dev, "mark %u -> txq %u\n", skb->mark, txq);
+	return (txq < dev->num_tx_queues) ? txq : 0;
+}
+
 static const struct net_device_ops rmnet_vnd_ops = {
 	.ndo_start_xmit = rmnet_vnd_start_xmit,
 	.ndo_change_mtu = rmnet_vnd_change_mtu,
@@ -167,6 +185,7 @@  static const struct net_device_ops rmnet_vnd_ops = {
 	.ndo_init       = rmnet_vnd_init,
 	.ndo_uninit     = rmnet_vnd_uninit,
 	.ndo_get_stats64 = rmnet_get_stats64,
+	.ndo_select_queue = rmnet_vnd_select_queue,
 };
 
 static const char rmnet_gstrings_stats[][ETH_GSTRING_LEN] = {
@@ -334,6 +353,8 @@  int rmnet_vnd_newlink(u8 id, struct net_device *rmnet_dev,
 
 		priv->mux_id = id;
 
+		xa_init(&priv->queue_map);
+
 		netdev_dbg(rmnet_dev, "rmnet dev created\n");
 	}
 
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index ce3117df9cec..01485177f7ee 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -1368,6 +1368,7 @@  enum {
 	IFLA_RMNET_UNSPEC,
 	IFLA_RMNET_MUX_ID,
 	IFLA_RMNET_FLAGS,
+	IFLA_RMNET_QUEUE,
 	__IFLA_RMNET_MAX,
 };
 
@@ -1378,6 +1379,21 @@  struct ifla_rmnet_flags {
 	__u32	mask;
 };
 
+enum {
+	RMNET_QUEUE_OPERATION_UNSPEC,
+	RMNET_QUEUE_MAPPING_ADD,	/* Add new queue <-> mark mapping */
+	RMNET_QUEUE_MAPPING_REMOVE,	/* Remove existing queue <-> mark mapping */
+	RMNET_QUEUE_ENABLE,		/* Allow traffic on an existing queue */
+	RMNET_QUEUE_DISABLE,		/* Stop traffic on an existing queue */
+};
+
+struct rmnet_queue_mapping {
+	u8 operation;
+	u8 txqueue;
+	u16 padding;
+	u32 mark;
+};
+
 /* MCTP section */
 
 enum {