From patchwork Fri Apr 5 20:09:33 2024
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13619360
X-Patchwork-Delegate: kuba@kernel.org
Subject: [net-next, RFC PATCH 1/5] netdev-genl: spec: Extend netdev netlink
 spec in YAML for queue-set
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: edumazet@google.com, pabeni@redhat.com, ast@kernel.org, sdf@google.com,
 lorenzo@kernel.org, tariqt@nvidia.com, daniel@iogearbox.net,
 anthony.l.nguyen@intel.com, lucien.xin@gmail.com, hawk@kernel.org,
 sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Fri, 05 Apr 2024 13:09:33 -0700
Message-ID: <171234777309.5075.4038375383551870109.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
References: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
X-Patchwork-State: RFC

Add support in the netlink spec (netdev.yaml) for the queue-set command.
Currently, the set command enables associating a NAPI ID with a queue,
but it can also be extended to support configuring other attributes.
Also add the code generated from the spec.

Signed-off-by: Amritha Nambiar
---
 Documentation/netlink/specs/netdev.yaml | 20 ++++++++++++++++++++
 include/uapi/linux/netdev.h             |  1 +
 net/core/netdev-genl-gen.c              | 15 +++++++++++++++
 net/core/netdev-genl-gen.h              |  1 +
 net/core/netdev-genl.c                  |  5 +++++
 tools/include/uapi/linux/netdev.h       |  1 +
 6 files changed, 43 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 76352dbd2be4..eda45ae31077 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -457,6 +457,26 @@ operations:
         attributes:
           - ifindex
       reply: *queue-get-op
+    -
+      name: queue-set
+      doc: User configuration of queue attributes.
+        The id, type and ifindex form the queue header/identifier. For
+        example, to configure the NAPI instance associated with the queue,
+        napi-id is the configurable attribute.
+      attribute-set: queue
+      do:
+        request:
+          attributes:
+            - ifindex
+            - type
+            - id
+            - napi-id
+        reply: &queue-set-op
+          attributes:
+            - id
+            - type
+            - napi-id
+            - ifindex
     -
       name: napi-get
       doc: Get information about NAPI instances configured on the system.
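For illustration, the four request attributes declared above (ifindex, type, id, napi-id) travel on the wire as ordinary netlink u32 TLVs. The sketch below shows that layout; the attribute numbering is hypothetical, not the real NETDEV_A_QUEUE_* values from <linux/netdev.h>.

```python
import struct

# A netlink attribute is a TLV: u16 length (including the 4-byte header),
# u16 type, then the payload padded to 4 bytes.  These type numbers are
# placeholders for the generated NETDEV_A_QUEUE_* constants.
A_IFINDEX, A_TYPE, A_ID, A_NAPI_ID = 1, 2, 3, 4  # hypothetical numbering

def nla_u32(attr_type: int, value: int) -> bytes:
    # nla_len = 4 (header) + 4 (u32 payload) = 8, already 4-byte aligned
    return struct.pack("<HHI", 8, attr_type, value)

request = b"".join((
    nla_u32(A_IFINDEX, 12),   # which device
    nla_u32(A_TYPE, 0),       # 0 = rx in this sketch
    nla_u32(A_ID, 0),         # queue index
    nla_u32(A_NAPI_ID, 513),  # NAPI instance to associate
))
print(len(request))  # 4 attributes x 8 bytes -> 32
```

In practice a user would not hand-pack these; the in-tree YNL tooling generates request marshalling directly from this spec.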
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index bb65ee840cda..80fac72da8b2 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -162,6 +162,7 @@ enum {
 	NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
 	NETDEV_CMD_PAGE_POOL_STATS_GET,
 	NETDEV_CMD_QUEUE_GET,
+	NETDEV_CMD_QUEUE_SET,
 	NETDEV_CMD_NAPI_GET,
 	NETDEV_CMD_QSTATS_GET,

diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 8d8ace9ef87f..cb5485dc5843 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -58,6 +58,14 @@ static const struct nla_policy netdev_queue_get_dump_nl_policy[NETDEV_A_QUEUE_IF
 	[NETDEV_A_QUEUE_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
 };
 
+/* NETDEV_CMD_QUEUE_SET - do */
+static const struct nla_policy netdev_queue_set_nl_policy[NETDEV_A_QUEUE_NAPI_ID + 1] = {
+	[NETDEV_A_QUEUE_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1),
+	[NETDEV_A_QUEUE_TYPE] = NLA_POLICY_MAX(NLA_U32, 1),
+	[NETDEV_A_QUEUE_ID] = { .type = NLA_U32, },
+	[NETDEV_A_QUEUE_NAPI_ID] = { .type = NLA_U32, },
+};
+
 /* NETDEV_CMD_NAPI_GET - do */
 static const struct nla_policy netdev_napi_get_do_nl_policy[NETDEV_A_NAPI_ID + 1] = {
 	[NETDEV_A_NAPI_ID] = { .type = NLA_U32, },
@@ -129,6 +137,13 @@ static const struct genl_split_ops netdev_nl_ops[] = {
 		.maxattr	= NETDEV_A_QUEUE_IFINDEX,
 		.flags		= GENL_CMD_CAP_DUMP,
 	},
+	{
+		.cmd		= NETDEV_CMD_QUEUE_SET,
+		.doit		= netdev_nl_queue_set_doit,
+		.policy		= netdev_queue_set_nl_policy,
+		.maxattr	= NETDEV_A_QUEUE_NAPI_ID,
+		.flags		= GENL_CMD_CAP_DO,
+	},
 	{
 		.cmd		= NETDEV_CMD_NAPI_GET,
 		.doit		= netdev_nl_napi_get_doit,

diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index 4db40fd5b4a9..be136c5ea5ad 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -26,6 +26,7 @@ int netdev_nl_page_pool_stats_get_dumpit(struct sk_buff *skb,
 int netdev_nl_queue_get_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_queue_get_dumpit(struct sk_buff *skb,
 			       struct netlink_callback *cb);
+int netdev_nl_queue_set_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info);
 int netdev_nl_napi_get_dumpit(struct sk_buff *skb,
 			      struct netlink_callback *cb);
 int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,

diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 7004b3399c2b..d5b2e90e5709 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -674,6 +674,11 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 	return err;
 }
 
+int netdev_nl_queue_set_doit(struct sk_buff *skb, struct genl_info *info)
+{
+	return -EOPNOTSUPP;
+}
+
 static int netdev_genl_netdevice_event(struct notifier_block *nb,
 				       unsigned long event, void *ptr)
 {

diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index bb65ee840cda..80fac72da8b2 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -162,6 +162,7 @@ enum {
 	NETDEV_CMD_PAGE_POOL_CHANGE_NTF,
 	NETDEV_CMD_PAGE_POOL_STATS_GET,
 	NETDEV_CMD_QUEUE_GET,
+	NETDEV_CMD_QUEUE_SET,
 	NETDEV_CMD_NAPI_GET,
 	NETDEV_CMD_QSTATS_GET,

From patchwork Fri Apr 5 20:09:38 2024
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13619361
X-Patchwork-Delegate: kuba@kernel.org
Subject: [net-next, RFC PATCH 2/5] netdev-genl: Add netlink framework
 functions for queue-set NAPI
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: edumazet@google.com, pabeni@redhat.com, ast@kernel.org, sdf@google.com,
 lorenzo@kernel.org, tariqt@nvidia.com, daniel@iogearbox.net,
 anthony.l.nguyen@intel.com, lucien.xin@gmail.com, hawk@kernel.org,
 sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Fri, 05 Apr 2024 13:09:38 -0700
Message-ID: <171234777883.5075.17163018772262453896.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
References: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
X-Patchwork-State: RFC

Implement the netdev netlink framework functions for associating a queue
with a NAPI ID.

Signed-off-by: Amritha Nambiar
---
 include/linux/netdevice.h |   7 +++
 net/core/netdev-genl.c    | 117 +++++++++++++++++++++++++++++++++++++++------
 2 files changed, 108 insertions(+), 16 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0c198620ac93..70df1cec4a60 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1351,6 +1351,10 @@ struct netdev_net_notifier {
  *			      struct kernel_hwtstamp_config *kernel_config,
  *			      struct netlink_ext_ack *extack);
  *	Change the hardware timestamping parameters for NIC device.
+ *
+ * int (*ndo_queue_set_napi)(struct net_device *dev, u32 q_idx, u32 q_type,
+ *			     struct napi_struct *napi);
+ *	Change the NAPI instance associated with the queue.
  */
 struct net_device_ops {
 	int			(*ndo_init)(struct net_device *dev);
@@ -1596,6 +1600,9 @@ struct net_device_ops {
 	int			(*ndo_hwtstamp_set)(struct net_device *dev,
 						    struct kernel_hwtstamp_config *kernel_config,
 						    struct netlink_ext_ack *extack);
+	int			(*ndo_queue_set_napi)(struct net_device *dev,
+						      u32 q_idx, u32 q_type,
+						      struct napi_struct *napi);
 };
 
 /**
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index d5b2e90e5709..6b3d3165d76e 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -288,12 +288,29 @@ int netdev_nl_napi_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
 	return err;
 }
 
+/* must be called under rtnl_lock() */
+static struct napi_struct *
+napi_get_by_queue(struct net_device *netdev, u32 q_idx, u32 q_type)
+{
+	struct netdev_rx_queue *rxq;
+	struct netdev_queue *txq;
+
+	switch (q_type) {
+	case NETDEV_QUEUE_TYPE_RX:
+		rxq = __netif_get_rx_queue(netdev, q_idx);
+		return rxq->napi;
+	case NETDEV_QUEUE_TYPE_TX:
+		txq = netdev_get_tx_queue(netdev, q_idx);
+		return txq->napi;
+	}
+	return NULL;
+}
+
 static int
 netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
 			 u32 q_idx, u32 q_type, const struct genl_info *info)
 {
-	struct netdev_rx_queue *rxq;
-	struct netdev_queue *txq;
+	struct napi_struct *napi;
 	void *hdr;
 
 	hdr = genlmsg_iput(rsp, info);
@@ -305,19 +322,9 @@ netdev_nl_queue_fill_one(struct sk_buff *rsp, struct net_device *netdev,
 	    nla_put_u32(rsp, NETDEV_A_QUEUE_IFINDEX, netdev->ifindex))
 		goto nla_put_failure;
 
-	switch (q_type) {
-	case NETDEV_QUEUE_TYPE_RX:
-		rxq = __netif_get_rx_queue(netdev, q_idx);
-		if (rxq->napi && nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID,
-					     rxq->napi->napi_id))
-			goto nla_put_failure;
-		break;
-	case NETDEV_QUEUE_TYPE_TX:
-		txq = netdev_get_tx_queue(netdev, q_idx);
-		if (txq->napi && nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID,
-					     txq->napi->napi_id))
-			goto nla_put_failure;
-	}
+	napi = napi_get_by_queue(netdev, q_idx, q_type);
+	if (napi && nla_put_u32(rsp, NETDEV_A_QUEUE_NAPI_ID, napi->napi_id))
+		goto nla_put_failure;
 
 	genlmsg_end(rsp, hdr);
 
@@ -674,9 +681,87 @@ int netdev_nl_qstats_get_dumpit(struct sk_buff *skb,
 	return err;
 }
 
+static int
+netdev_nl_queue_set_napi(struct sk_buff *rsp, struct net_device *netdev,
+			 u32 q_idx, u32 q_type, u32 napi_id,
+			 const struct genl_info *info)
+{
+	struct napi_struct *napi, *old_napi;
+	int err;
+
+	if (!(netdev->flags & IFF_UP))
+		return 0;
+
+	err = netdev_nl_queue_validate(netdev, q_idx, q_type);
+	if (err)
+		return err;
+
+	old_napi = napi_get_by_queue(netdev, q_idx, q_type);
+	if (old_napi && old_napi->napi_id == napi_id)
+		return 0;
+
+	napi = napi_by_id(napi_id);
+	if (!napi)
+		return -EINVAL;
+
+	err = netdev->netdev_ops->ndo_queue_set_napi(netdev, q_idx, q_type, napi);
+
+	return err;
+}
+
 int netdev_nl_queue_set_doit(struct sk_buff *skb, struct genl_info *info)
 {
-	return -EOPNOTSUPP;
+	u32 q_id, q_type, ifindex;
+	struct net_device *netdev;
+	struct sk_buff *rsp;
+	u32 napi_id = 0;
+	int err = 0;
+
+	if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_QUEUE_ID) ||
+	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_QUEUE_TYPE) ||
+	    GENL_REQ_ATTR_CHECK(info, NETDEV_A_QUEUE_IFINDEX))
+		return -EINVAL;
+
+	q_id = nla_get_u32(info->attrs[NETDEV_A_QUEUE_ID]);
+	q_type = nla_get_u32(info->attrs[NETDEV_A_QUEUE_TYPE]);
+	ifindex = nla_get_u32(info->attrs[NETDEV_A_QUEUE_IFINDEX]);
+
+	if (info->attrs[NETDEV_A_QUEUE_NAPI_ID]) {
+		napi_id = nla_get_u32(info->attrs[NETDEV_A_QUEUE_NAPI_ID]);
+		if (napi_id < MIN_NAPI_ID)
+			return -EINVAL;
+	}
+
+	rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!rsp)
+		return -ENOMEM;
+
+	rtnl_lock();
+
+	netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+	if (netdev) {
+		if (!napi_id)
+			GENL_SET_ERR_MSG(info, "No queue parameters changed");
+		else
+			err = netdev_nl_queue_set_napi(rsp, netdev, q_id,
+						       q_type, napi_id, info);
+		if (!err)
+			err = netdev_nl_queue_fill_one(rsp, netdev, q_id,
+						       q_type, info);
+	} else {
+		err = -ENODEV;
+	}
+
+	rtnl_unlock();
+
+	if (err)
+		goto err_free_msg;
+
+	return genlmsg_reply(rsp, info);
+
+err_free_msg:
+	nlmsg_free(rsp);
+	return err;
 }
 
 static int netdev_genl_netdevice_event(struct notifier_block *nb,

From patchwork Fri Apr 5 20:09:44 2024
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13619362
X-Patchwork-Delegate: kuba@kernel.org
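Before moving on to the ice driver patches: the ordering of checks in the netdev_nl_queue_set_doit() handler of patch 2 (required attributes, then the NAPI ID floor, then device lookup, then the optional-attribute check) can be paraphrased as a runnable sketch. MIN_NAPI_ID's value and the ifindex lookup set are placeholders, not the kernel's.

```python
# Userspace-style paraphrase of the doit handler's control flow.
MIN_NAPI_ID = 0x100  # placeholder value, not the kernel constant

def queue_set(attrs: dict) -> str:
    # GENL_REQ_ATTR_CHECK: id, type and ifindex are mandatory
    for required in ("id", "type", "ifindex"):
        if required not in attrs:
            return "EINVAL"
    napi_id = attrs.get("napi-id", 0)
    if napi_id and napi_id < MIN_NAPI_ID:
        return "EINVAL"                # below the valid NAPI ID range
    if attrs["ifindex"] not in {12}:   # stand-in for __dev_get_by_index()
        return "ENODEV"
    if not napi_id:
        return "no queue parameters changed"
    return "ok"                        # would reach ndo_queue_set_napi()

print(queue_set({"id": 0, "type": 0, "ifindex": 12, "napi-id": 513}))  # ok
```

On success the real handler additionally echoes the updated queue back to the caller via netdev_nl_queue_fill_one() and genlmsg_reply().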
Subject: [net-next, RFC PATCH 3/5] ice: Add support to enable/disable a vector
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: edumazet@google.com, pabeni@redhat.com, ast@kernel.org, sdf@google.com,
 lorenzo@kernel.org, tariqt@nvidia.com, daniel@iogearbox.net,
 anthony.l.nguyen@intel.com, lucien.xin@gmail.com, hawk@kernel.org,
 sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Fri, 05 Apr 2024 13:09:44 -0700
Message-ID: <171234778396.5075.3968986750172203483.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
References: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
X-Patchwork-State: RFC

When a queue is configured from userspace, it has to be reset for the
configuration to take effect in hardware, and resetting a queue depends
on resetting the vector it is on. Add framework functions to
enable/disable a single queue, and hence its vector. The existing driver
support allows enabling/disabling either a single queue pair or an
entire VSI, but not an arbitrary Tx or Rx queue.

Signed-off-by: Amritha Nambiar
---
 drivers/net/ethernet/intel/ice/ice.h      |   1
 drivers/net/ethernet/intel/ice/ice_lib.c  | 247 +++++++++++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_lib.h  |   4
 drivers/net/ethernet/intel/ice/ice_main.c |   2
 drivers/net/ethernet/intel/ice/ice_xsk.c  |  34 ----
 5 files changed, 253 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index a7e88d797d4c..a2c91fa88e14 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -1009,4 +1009,5 @@ static inline void ice_clear_rdma_cap(struct ice_pf *pf)
 }
 
 extern const struct xdp_metadata_ops ice_xdp_md_ops;
+void ice_init_moderation(struct ice_q_vector *q_vector);
 #endif /* _ICE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index d06e7c82c433..35389189af1b 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -4001,3 +4001,250 @@ ice_vsi_update_local_lb(struct ice_vsi *vsi, bool set)
 	vsi->info = ctx.info;
 	return 0;
 }
+
+/**
+ * ice_tx_queue_dis - Disable a Tx ring
+ * @vsi: VSI being configured
+ * @q_idx: Tx ring index
+ */
+static int ice_tx_queue_dis(struct ice_vsi *vsi, u16 q_idx)
+{
+	struct ice_txq_meta txq_meta = { };
+	struct ice_tx_ring *tx_ring;
+	int err;
+
+	if (q_idx >= vsi->num_txq)
+		return -EINVAL;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+
+	tx_ring = vsi->tx_rings[q_idx];
+	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
+	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
+	if (err)
+		return err;
+
+	ice_clean_tx_ring(tx_ring);
+
+	return 0;
+}
+
+/**
+ * ice_tx_queue_ena - Enable a Tx ring
+ * @vsi: VSI being configured
+ * @q_idx: Tx ring index
+ */
+static int ice_tx_queue_ena(struct ice_vsi *vsi, u16 q_idx)
+{
+	struct ice_q_vector *q_vector;
+	struct ice_tx_ring *tx_ring;
+	int err;
+
+	err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
+	if (err)
+		return err;
+
+	tx_ring = vsi->tx_rings[q_idx];
+	q_vector = tx_ring->q_vector;
+	ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, q_vector->reg_idx,
+			      q_vector->tx.itr_idx);
+
+	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+
+	return 0;
+}
+
+/**
+ * ice_rx_ring_dis_irq - clear the queue to interrupt mapping in HW
+ * @vsi: VSI being configured
+ * @rx_ring: Rx ring that will have its IRQ disabled
+ */
+static void ice_rx_ring_dis_irq(struct ice_vsi *vsi, struct ice_rx_ring *rx_ring)
+{
+	struct ice_hw *hw = &vsi->back->hw;
+	u16 reg;
+	u32 val;
+
+	/* Clear QINT_RQCTL to clear the queue to interrupt mapping in HW */
+	reg = rx_ring->reg_idx;
+	val = rd32(hw, QINT_RQCTL(reg));
+	val &= ~QINT_RQCTL_CAUSE_ENA_M;
+	wr32(hw, QINT_RQCTL(reg), val);
+
+	ice_flush(hw);
+}
+
+/**
+ * ice_rx_queue_dis - Disable a Rx ring
+ * @vsi: VSI being configured
+ * @q_idx: Rx ring index
+ */
+static int ice_rx_queue_dis(struct ice_vsi *vsi, u16 q_idx)
+{
+	struct ice_rx_ring *rx_ring;
+	int err;
+
+	if (q_idx >= vsi->num_rxq)
+		return -EINVAL;
+
+	rx_ring = vsi->rx_rings[q_idx];
+	ice_rx_ring_dis_irq(vsi, rx_ring);
+
+	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
+	if (err)
+		return err;
+
+	ice_clean_rx_ring(rx_ring);
+
+	return 0;
+}
+
+/**
+ * ice_rx_queue_ena - Enable a Rx ring
+ * @vsi: VSI being configured
+ * @q_idx: Rx ring index
+ */
+static int ice_rx_queue_ena(struct ice_vsi *vsi, u16 q_idx)
+{
+	struct ice_q_vector *q_vector;
+	struct ice_rx_ring *rx_ring;
+	int err;
+
+	if (q_idx >= vsi->num_rxq)
+		return -EINVAL;
+
+	err = ice_vsi_cfg_single_rxq(vsi, q_idx);
+	if (err)
+		return err;
+
+	rx_ring = vsi->rx_rings[q_idx];
+	q_vector = rx_ring->q_vector;
+	ice_cfg_rxq_interrupt(vsi, rx_ring->reg_idx, q_vector->reg_idx,
+			      q_vector->rx.itr_idx);
+
+	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+/**
+ * ice_qvec_toggle_napi - Enables/disables NAPI for a given q_vector
+ * @vsi: VSI that has netdev
+ * @q_vector: q_vector that has NAPI context
+ * @enable: true for enable, false for disable
+ */
+void
+ice_qvec_toggle_napi(struct ice_vsi *vsi, struct ice_q_vector *q_vector,
+		     bool enable)
+{
+	if (!vsi->netdev || !q_vector)
+		return;
+
+	if (enable)
+		napi_enable(&q_vector->napi);
+	else
+		napi_disable(&q_vector->napi);
+}
+
+/**
+ * ice_qvec_ena_irq - Enable IRQ for given queue vector
+ * @vsi: the VSI that contains queue vector
+ * @q_vector: queue vector
+ */
+void ice_qvec_ena_irq(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+{
+	struct ice_pf *pf = vsi->back;
+	struct ice_hw *hw = &pf->hw;
+
+	ice_irq_dynamic_ena(hw, vsi, q_vector);
+
+	ice_flush(hw);
+}
+
+/**
+ * ice_qvec_configure - Setup initial interrupt configuration
+ * @vsi: the VSI that contains queue vector
+ * @q_vector: queue vector
+ */
+static void ice_qvec_configure(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+{
+	struct ice_hw *hw = &vsi->back->hw;
+
+	ice_cfg_itr(hw, q_vector);
+	ice_init_moderation(q_vector);
+}
+
+/**
+ * ice_q_vector_dis - Disable a vector and all queues on it
+ * @vsi: the VSI that contains queue vector
+ * @q_vector: queue vector
+ */
+static int __maybe_unused
+ice_q_vector_dis(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+{
+	struct ice_hw *hw = &vsi->back->hw;
+	struct ice_rx_ring *rx_ring;
+	struct ice_tx_ring *tx_ring;
+	int err;
+
+	/* Disable the vector */
+	wr32(hw, GLINT_DYN_CTL(q_vector->reg_idx), 0);
+	ice_flush(hw);
+	synchronize_irq(q_vector->irq.virq);
+
+	ice_qvec_toggle_napi(vsi, q_vector, false);
+
+	/* Disable all rings on this vector */
+	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+		err = ice_rx_queue_dis(vsi, rx_ring->q_index);
+		if (err)
+			return err;
+	}
+
+	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+		err = ice_tx_queue_dis(vsi, tx_ring->q_index);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/**
+ * ice_q_vector_ena - Enable a vector and all queues on it
+ * @vsi: the VSI that contains queue vector
+ * @q_vector: queue vector
+ */
+static int __maybe_unused
+ice_q_vector_ena(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+{
+	struct ice_rx_ring *rx_ring;
+	struct ice_tx_ring *tx_ring;
+	int err;
+
+	ice_qvec_configure(vsi, q_vector);
+
+	/* enable all rings on this vector */
+	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+		err = ice_rx_queue_ena(vsi, rx_ring->q_index);
+		if (err)
+			return err;
+	}
+
+	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+		err = ice_tx_queue_ena(vsi, tx_ring->q_index);
+		if (err)
+			return err;
+	}
+
+	ice_qvec_toggle_napi(vsi, q_vector, true);
+	ice_qvec_ena_irq(vsi, q_vector);
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index 9cd23afe5f15..00239c2efa92 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -160,4 +160,8 @@ void ice_set_feature_support(struct ice_pf *pf, enum ice_feature f);
 void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f);
 void ice_init_feature_support(struct ice_pf *pf);
 bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi);
+void ice_qvec_ena_irq(struct ice_vsi *vsi, struct ice_q_vector *q_vector);
+void
+ice_qvec_toggle_napi(struct ice_vsi *vsi, struct ice_q_vector *q_vector,
+		     bool enable);
 #endif /* !_ICE_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 9d751954782c..cd2f467fe3a0 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -6508,7 +6508,7 @@ static void ice_rx_dim_work(struct work_struct *work)
  * dynamic moderation mode or not in order to make sure hardware is in a known
  * state.
  */
-static void ice_init_moderation(struct ice_q_vector *q_vector)
+void ice_init_moderation(struct ice_q_vector *q_vector)
 {
 	struct ice_ring_container *rc;
 	bool tx_dynamic, rx_dynamic;
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index aa81d1162b81..f7708bbb769b 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -59,25 +59,6 @@ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
 }
 
-/**
- * ice_qvec_toggle_napi - Enables/disables NAPI for a given q_vector
- * @vsi: VSI that has netdev
- * @q_vector: q_vector that has NAPI context
- * @enable: true for enable, false for disable
- */
-static void
-ice_qvec_toggle_napi(struct ice_vsi *vsi, struct ice_q_vector *q_vector,
-		     bool enable)
-{
-	if (!vsi->netdev || !q_vector)
-		return;
-
-	if (enable)
-		napi_enable(&q_vector->napi);
-	else
-		napi_disable(&q_vector->napi);
-}
-
 /**
  * ice_qvec_dis_irq - Mask off queue interrupt generation on given ring
  * @vsi: the VSI that contains queue vector being un-configured
@@ -135,21 +116,6 @@ ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
 	ice_flush(hw);
 }
 
-/**
- * ice_qvec_ena_irq - Enable IRQ for given queue vector
- * @vsi: the VSI that contains queue vector
- * @q_vector: queue vector
- */
-static void ice_qvec_ena_irq(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
-{
-	struct ice_pf *pf = vsi->back;
-	struct ice_hw *hw = &pf->hw;
-
-	ice_irq_dynamic_ena(hw, vsi, q_vector);
-
-	ice_flush(hw);
-}
-
 /**
  * ice_qp_dis - Disables a queue pair
  * @vsi: VSI of interest

From patchwork Fri Apr 5 20:09:49 2024
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13619363
X-Patchwork-Delegate: kuba@kernel.org
header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="g+FyqyWl" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1712346702; x=1743882702; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TkmkeGAQLEB9Q6qrcaPILvk+NfnfHgKnY+XpVptAV1I=; b=g+FyqyWlmeticYBAETUQJWdhV0B4xYU2R5RcG84FRhdKzJXd8DuwXicI TuPeQmPsZ6j3fGwF4ooRzS2fKTSd0+WwB9N/Ttda0Ef8HPbDQtm7M6Bsb 20/J9KrWThVsWHLXFVs8zDalJX3tOCqftnrloKuk814flC6hZ6R4xCLj+ +HXfMtwusEPnP+xMGFAaqlsO2n6pp3+70YXrkygN8XFtNGdQQ2k00CtKG ob3gy0Oi6qdFNcfbAPpMXKrMmxtYrUausjxSb3VqnOu4PXZGMlC6ZGVsz 0H2rUSgBaoLL2JZK7bmjdyRa8JI6eB0kFa3V8C4xh7DW3r25+YO8I82Pu A==; X-CSE-ConnectionGUID: msu2mHDxTGSHXNze352Jiw== X-CSE-MsgGUID: 3hZ5t3VcTACK9Gw98Lf7Mw== X-IronPort-AV: E=McAfee;i="6600,9927,11035"; a="7817666" X-IronPort-AV: E=Sophos;i="6.07,182,1708416000"; d="scan'208";a="7817666" Received: from orviesa006.jf.intel.com ([10.64.159.146]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Apr 2024 12:51:42 -0700 X-CSE-ConnectionGUID: MKShCV0sQ3qIMXRmFRIPOg== X-CSE-MsgGUID: ygCa8Q4FRdaa/q9IKzANcg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.07,182,1708416000"; d="scan'208";a="19700694" Received: from anambiarhost.jf.intel.com ([10.166.29.163]) by orviesa006.jf.intel.com with ESMTP; 05 Apr 2024 12:51:42 -0700 Subject: [net-next,RFC PATCH 4/5] ice: Handle unused vectors dynamically From: Amritha Nambiar To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net Cc: edumazet@google.com, pabeni@redhat.com, ast@kernel.org, sdf@google.com, lorenzo@kernel.org, tariqt@nvidia.com, daniel@iogearbox.net, anthony.l.nguyen@intel.com, lucien.xin@gmail.com, hawk@kernel.org, sridhar.samudrala@intel.com, amritha.nambiar@intel.com Date: Fri, 05 Apr 2024 13:09:49 
-0700 Message-ID: <171234778911.5075.12956603794662346879.stgit@anambiarhost.jf.intel.com> In-Reply-To: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com> References: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com> User-Agent: StGit/unknown-version Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC When queues are moved between vectors, some vector[s] may get unused. The unused vector[s] need to be freed. When queue[s] gets assigned to previously unused and freed vector, this vector will need to be requested and setup. Add the framework functions for this. Signed-off-by: Amritha Nambiar --- drivers/net/ethernet/intel/ice/ice.h | 12 +++ drivers/net/ethernet/intel/ice/ice_lib.c | 117 +++++++++++++++++++++++++++++ drivers/net/ethernet/intel/ice/ice_lib.h | 6 + drivers/net/ethernet/intel/ice/ice_main.c | 12 --- 4 files changed, 136 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index a2c91fa88e14..d7b67821dc21 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -1010,4 +1010,16 @@ static inline void ice_clear_rdma_cap(struct ice_pf *pf) extern const struct xdp_metadata_ops ice_xdp_md_ops; void ice_init_moderation(struct ice_q_vector *q_vector); +void +ice_irq_affinity_notify(struct irq_affinity_notify *notify, + const cpumask_t *mask); +/** + * ice_irq_affinity_release - Callback for affinity notifier release + * @ref: internal core kernel usage + * + * This is a callback function used by the irq_set_affinity_notifier function + * to inform the current notification subscriber that they will no longer + * receive notifications. 
+ */ +static inline void ice_irq_affinity_release(struct kref __always_unused *ref) {} #endif /* _ICE_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 35389189af1b..419d9561bc2a 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -4248,3 +4248,120 @@ ice_q_vector_ena(struct ice_vsi *vsi, struct ice_q_vector *q_vector) return 0; } + +static void +ice_qvec_release_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector) +{ + struct ice_hw *hw = &vsi->back->hw; + struct ice_rx_ring *rx_ring; + struct ice_tx_ring *tx_ring; + + ice_write_intrl(q_vector, 0); + + ice_for_each_rx_ring(rx_ring, q_vector->rx) { + ice_write_itr(&q_vector->rx, 0); + wr32(hw, QINT_RQCTL(vsi->rxq_map[rx_ring->q_index]), 0); + } + + ice_for_each_tx_ring(tx_ring, q_vector->tx) { + ice_write_itr(&q_vector->tx, 0); + wr32(hw, QINT_TQCTL(vsi->txq_map[tx_ring->q_index]), 0); + } + + /* Disable the interrupt by writing to the register */ + wr32(hw, GLINT_DYN_CTL(q_vector->reg_idx), 0); + ice_flush(hw); +} + +/** + * ice_qvec_free - Free the MSI-X vector + * @vsi: the VSI that contains queue vector + * @q_vector: queue vector + */ +static void __maybe_unused +ice_qvec_free(struct ice_vsi *vsi, struct ice_q_vector *q_vector) +{ + int irq_num = q_vector->irq.virq; + struct ice_pf *pf = vsi->back; + + ice_qvec_release_msix(vsi, q_vector); + +#ifdef CONFIG_RFS_ACCEL + struct net_device *netdev = vsi->netdev; + + if (netdev && netdev->rx_cpu_rmap) + irq_cpu_rmap_remove(netdev->rx_cpu_rmap, irq_num); +#endif + + /* clear the affinity notifier in the IRQ descriptor */ + if (!IS_ENABLED(CONFIG_RFS_ACCEL)) + irq_set_affinity_notifier(irq_num, NULL); + + /* clear the affinity_mask in the IRQ descriptor */ + irq_set_affinity_hint(irq_num, NULL); + + synchronize_irq(irq_num); + devm_free_irq(ice_pf_to_dev(pf), irq_num, q_vector); +} + +/** + * ice_qvec_prep - Request and prepare a new MSI-X vector + * 
@vsi: the VSI that contains queue vector + * @q_vector: queue vector + */ +static int __maybe_unused +ice_qvec_prep(struct ice_vsi *vsi, struct ice_q_vector *q_vector) +{ + struct ice_pf *pf = vsi->back; + struct device *dev; + int err, irq_num; + + dev = ice_pf_to_dev(pf); + irq_num = q_vector->irq.virq; + + err = devm_request_irq(dev, irq_num, vsi->irq_handler, 0, + q_vector->name, q_vector); + if (err) { + netdev_err(vsi->netdev, "MSIX request_irq failed, error: %d\n", + err); + goto free_q_irqs; + } + + /* register for affinity change notifications */ + if (!IS_ENABLED(CONFIG_RFS_ACCEL)) { + struct irq_affinity_notify *affinity_notify; + + affinity_notify = &q_vector->affinity_notify; + affinity_notify->notify = ice_irq_affinity_notify; + affinity_notify->release = ice_irq_affinity_release; + irq_set_affinity_notifier(irq_num, affinity_notify); + } + + /* assign the mask for this irq */ + irq_set_affinity_hint(irq_num, &q_vector->affinity_mask); + +#ifdef CONFIG_RFS_ACCEL + struct net_device *netdev = vsi->netdev; + + if (!netdev) { + err = -EINVAL; + goto free_q_irqs; + } + + if (irq_cpu_rmap_add(netdev->rx_cpu_rmap, irq_num)) { + err = -EINVAL; + netdev_err(vsi->netdev, "Failed to setup CPU RMAP on irq %u: %pe\n", + irq_num, ERR_PTR(err)); + goto free_q_irqs; + } +#endif + return 0; + +free_q_irqs: + if (!IS_ENABLED(CONFIG_RFS_ACCEL)) + irq_set_affinity_notifier(irq_num, NULL); + irq_set_affinity_hint(irq_num, NULL); + devm_free_irq(dev, irq_num, q_vector); + + return err; +} diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 00239c2efa92..66a9709ff612 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -164,4 +164,10 @@ void ice_qvec_ena_irq(struct ice_vsi *vsi, struct ice_q_vector *q_vector); void ice_qvec_toggle_napi(struct ice_vsi *vsi, struct ice_q_vector *q_vector, bool enable); +static inline bool +ice_is_q_vector_unused(struct ice_q_vector 
*q_vector) +{ + return (!q_vector->num_ring_tx && !q_vector->num_ring_rx); +} + #endif /* !_ICE_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index cd2f467fe3a0..0884b53a0b01 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -2476,7 +2476,7 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset) * This is a callback function used by the irq_set_affinity_notifier function * so that we may register to receive changes to the irq affinity masks. */ -static void +void ice_irq_affinity_notify(struct irq_affinity_notify *notify, const cpumask_t *mask) { @@ -2486,16 +2486,6 @@ ice_irq_affinity_notify(struct irq_affinity_notify *notify, cpumask_copy(&q_vector->affinity_mask, mask); } -/** - * ice_irq_affinity_release - Callback for affinity notifier release - * @ref: internal core kernel usage - * - * This is a callback function used by the irq_set_affinity_notifier function - * to inform the current notification subscriber that they will no longer - * receive notifications. 
- */ -static void ice_irq_affinity_release(struct kref __always_unused *ref) {} - /** * ice_vsi_ena_irq - Enable IRQ for the given VSI * @vsi: the VSI being configured
From patchwork Fri Apr 5 20:09:54 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Nambiar, Amritha"
X-Patchwork-Id: 13619364
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
Subject: [net-next, RFC PATCH 5/5] ice: Add driver support for ndo_queue_set_napi
From: Amritha Nambiar
To: netdev@vger.kernel.org, kuba@kernel.org, davem@davemloft.net
Cc: edumazet@google.com, pabeni@redhat.com, ast@kernel.org, sdf@google.com, lorenzo@kernel.org, tariqt@nvidia.com, daniel@iogearbox.net, anthony.l.nguyen@intel.com, lucien.xin@gmail.com, hawk@kernel.org, sridhar.samudrala@intel.com, amritha.nambiar@intel.com
Date: Fri, 05 Apr 2024 13:09:54 -0700
Message-ID: <171234779427.5075.586255342877398659.stgit@anambiarhost.jf.intel.com>
In-Reply-To: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
References: <171234737780.5075.5717254021446469741.stgit@anambiarhost.jf.intel.com>
User-Agent: StGit/unknown-version
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org

Add support in the ice driver to change the NAPI instance associated with the queue. This is achieved by updating the interrupt vector association for the queue.

Signed-off-by: Amritha Nambiar
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 311 +++++++++++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_lib.h | 2
 drivers/net/ethernet/intel/ice/ice_main.c | 1
 3 files changed, 310 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 419d9561bc2a..3a93b53a0da0 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -4186,7 +4186,7 @@ static void ice_qvec_configure(struct ice_vsi *vsi, struct ice_q_vector *q_vecto * @vsi: the VSI that contains queue vector * @q_vector: queue vector */ -static int __maybe_unused +static int ice_q_vector_dis(struct ice_vsi *vsi, struct ice_q_vector *q_vector) { struct ice_hw *hw = &vsi->back->hw; @@ -4221,7 +4221,7 @@ ice_q_vector_dis(struct ice_vsi *vsi, struct ice_q_vector *q_vector) * @vsi: the VSI that contains queue vector * @q_vector: queue vector */ -static int __maybe_unused +static int ice_q_vector_ena(struct ice_vsi *vsi, struct ice_q_vector *q_vector) { struct ice_rx_ring *rx_ring; @@ -4278,7 +4278,7 @@ ice_qvec_release_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector) * @vsi: the VSI that contains queue vector * @q_vector: queue vector */ -static void __maybe_unused +static void ice_qvec_free(struct ice_vsi *vsi, struct ice_q_vector *q_vector) { int irq_num = q_vector->irq.virq; @@ -4309,7 +4309,7 @@ ice_qvec_free(struct ice_vsi *vsi, struct ice_q_vector *q_vector) * @vsi: the VSI that contains
queue vector * @q_vector: queue vector */ -static int __maybe_unused +static int ice_qvec_prep(struct ice_vsi *vsi, struct ice_q_vector *q_vector) { struct ice_pf *pf = vsi->back; @@ -4365,3 +4365,306 @@ ice_qvec_prep(struct ice_vsi *vsi, struct ice_q_vector *q_vector) return err; } + +/** + * ice_vsi_rename_irq_msix + * @vsi: VSI being configured + * @basename: name for the vector + * + * Rename the vector. The default vector names assumed a 1:1 mapping between + * queues and vectors in a serial fashion. When the NAPI association for the + * queue is changed, it is possible to have multiple queues sharing a vector + * in a non-serial way. + */ +static void ice_vsi_rename_irq_msix(struct ice_vsi *vsi, char *basename) +{ + int q_vectors = vsi->num_q_vectors; + int vector, err; + + for (vector = 0; vector < q_vectors; vector++) { + struct ice_q_vector *q_vector = vsi->q_vectors[vector]; + + if (q_vector->tx.tx_ring && q_vector->rx.rx_ring) { + err = snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s", basename, "TxRx"); + } else if (q_vector->rx.rx_ring) { + err = snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s", basename, "rx"); + } else if (q_vector->tx.tx_ring) { + err = snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s-%s", basename, "tx"); + } else { + err = snprintf(q_vector->name, sizeof(q_vector->name) - 1, + "%s", basename); + } + /* Catching the return quiets a Wformat-truncation complaint */ + if (err > sizeof(q_vector->name) - 1) + netdev_dbg(vsi->netdev, "vector name truncated, ignore\n"); + } +} + +/** + * ice_tx_ring_unmap_qvec - Unmap tx ring from its current q_vector + * @tx_ring: rx ring to be removed + * + * Unmap tx ring from its current vector association in SW + */ +static void +ice_tx_ring_unmap_qvec(struct ice_tx_ring *tx_ring) +{ + struct ice_q_vector *q_vector = tx_ring->q_vector; + struct ice_tx_ring *prev, *ring; + + /* Remove a tx ring from its corresponding vector's ring container */ + ring = 
q_vector->tx.tx_ring; + if (!ring) + return; + + if (tx_ring == ring) { + q_vector->tx.tx_ring = tx_ring->next; + q_vector->num_ring_tx--; + return; + } + + while (ring && ring != tx_ring) { + prev = ring; + ring = ring->next; + } + if (!ring) + return; + prev->next = ring->next; + q_vector->num_ring_tx--; +} +
+ return err; + } else { + err = ice_q_vector_dis(vsi, new_qvec); + if (err) + return err; + } + + tx_ring->q_vector = new_qvec; + tx_ring->next = new_qvec->tx.tx_ring; + new_qvec->tx.tx_ring = tx_ring; + new_qvec->num_ring_tx++; + + err = ice_q_vector_ena(vsi, new_qvec); + if (err) + return err; + + if (!ice_is_q_vector_unused(old_qvec)) { + err = ice_q_vector_ena(vsi, old_qvec); + if (err) + return err; + } + + clear_bit(ICE_CFG_BUSY, vsi->state); + + return 0; +} + +static int +ice_rx_queue_update_q_vector(struct ice_vsi *vsi, u32 q_idx, + struct ice_q_vector *new_qvec) +{ + struct ice_q_vector *old_qvec; + struct ice_rx_ring *rx_ring; + int timeout = 50; + int err; + + if (q_idx >= vsi->num_rxq) + return -EINVAL; + rx_ring = vsi->rx_rings[q_idx]; + if (!rx_ring) + return -EINVAL; + + old_qvec = rx_ring->q_vector; + + if (old_qvec->irq.virq == new_qvec->irq.virq) + return 0; + + while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) { + timeout--; + if (!timeout) + return -EBUSY; + usleep_range(1000, 2000); + } + + err = ice_q_vector_dis(vsi, old_qvec); + if (err) + return err; + + ice_rx_ring_unmap_qvec(rx_ring); + + /* free vector if it has no queues as all of its queues are now moved */ + if (ice_is_q_vector_unused(old_qvec)) + ice_qvec_free(vsi, old_qvec); + + /* Prepare new q_vector if it was previously unused */ + if (ice_is_q_vector_unused(new_qvec)) { + err = ice_qvec_prep(vsi, new_qvec); + if (err) + return err; + } else { + err = ice_q_vector_dis(vsi, new_qvec); + if (err) + return err; + } + + rx_ring->q_vector = new_qvec; + rx_ring->next = new_qvec->rx.rx_ring; + new_qvec->rx.rx_ring = rx_ring; + new_qvec->num_ring_rx++; + + err = ice_q_vector_ena(vsi, new_qvec); + if (err) + return err; + + if (!ice_is_q_vector_unused(old_qvec)) { + err = ice_q_vector_ena(vsi, old_qvec); + if (err) + return err; + } + + clear_bit(ICE_CFG_BUSY, vsi->state); + + return 0; +} + +/** + * ice_vsi_get_vector_from_irq + * @vsi: the VSI being configured + * @irq_num: Interrupt 
vector number + * + * Get the q_vector from the Linux interrupt vector number + */ +static struct ice_q_vector * +ice_vsi_get_vector_from_irq(struct ice_vsi *vsi, int irq_num) +{ + int i; + + ice_for_each_q_vector(vsi, i) { + if (vsi->q_vectors[i]->irq.virq == irq_num) + return vsi->q_vectors[i]; + } + return NULL; +} + +/** + * ice_queue_change_napi - Change the NAPI instance for the queue + * @dev: device to which NAPI and queue belong + * @q_idx: Index of queue + * @q_type: queue type as RX or TX + * @napi: NAPI context for the queue + */ +int ice_queue_change_napi(struct net_device *dev, u32 q_idx, u32 q_type, + struct napi_struct *napi) +{ + struct ice_netdev_priv *np = netdev_priv(dev); + char int_name[ICE_INT_NAME_STR_LEN]; + struct ice_q_vector *q_vector; + struct ice_vsi *vsi = np->vsi; + struct ice_pf *pf = vsi->back; + int err; + + q_vector = ice_vsi_get_vector_from_irq(vsi, napi->irq); + if (!q_vector) + return -EINVAL; + + switch (q_type) { + case NETDEV_QUEUE_TYPE_RX: + err = ice_rx_queue_update_q_vector(vsi, q_idx, q_vector); + if (err) + return err; + break; + case NETDEV_QUEUE_TYPE_TX: + err = ice_tx_queue_update_q_vector(vsi, q_idx, q_vector); + if (err) + return err; + break; + default: + return -EINVAL; + } + + snprintf(int_name, sizeof(int_name) - 1, "%s-%s", + dev_driver_string(ice_pf_to_dev(pf)), vsi->netdev->name); + ice_vsi_rename_irq_msix(vsi, int_name); + + /* Now report to the stack */ + netif_queue_set_napi(dev, q_idx, q_type, napi); + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 66a9709ff612..d53f399487bf 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -170,4 +170,6 @@ ice_is_q_vector_unused(struct ice_q_vector *q_vector) return (!q_vector->num_ring_tx && !q_vector->num_ring_rx); } +int ice_queue_change_napi(struct net_device *dev, u32 q_idx, u32 q_type, + struct napi_struct *napi); #endif /* !_ICE_LIB_H_ */ 
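The tx/rx unmap helpers above walk a singly linked list of rings hanging off the vector's ring container, handling both head and mid-list removal before decrementing the ring count. A minimal standalone model of that removal logic, using hypothetical simplified types (`struct ring` / `struct container` stand in for the driver's `ice_tx_ring` and ring container; this is a sketch, not the driver code):

```c
#include <stddef.h>

/* Illustrative stand-ins; only the list linkage and count matter here. */
struct ring {
	int q_index;
	struct ring *next;
};

struct container {
	struct ring *head;   /* like q_vector->tx.tx_ring */
	int num_rings;       /* like q_vector->num_ring_tx */
};

/* Mirrors the ice_*_ring_unmap_qvec() pattern: unlink @target from @c. */
static void ring_unmap(struct container *c, struct ring *target)
{
	struct ring *prev, *cur = c->head;

	if (!cur)
		return;

	if (cur == target) {		/* target is the list head */
		c->head = target->next;
		c->num_rings--;
		return;
	}

	while (cur && cur != target) {	/* find the predecessor */
		prev = cur;
		cur = cur->next;
	}
	if (!cur)			/* ring not on this vector */
		return;
	prev->next = cur->next;		/* splice target out */
	c->num_rings--;
}
```

As in the driver, a ring that is not found on the vector is left alone and the count is only decremented when an unlink actually happened.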
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 0884b53a0b01..08c20f8b17e2 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -9494,4 +9494,5 @@ static const struct net_device_ops ice_netdev_ops = { .ndo_bpf = ice_xdp, .ndo_xdp_xmit = ice_xdp_xmit, .ndo_xsk_wakeup = ice_xsk_wakeup, + .ndo_queue_set_napi = ice_queue_change_napi, };
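`ice_queue_change_napi()` above resolves the target q_vector from `napi->irq` with a linear scan over the VSI's vectors (`ice_vsi_get_vector_from_irq`) before dispatching on the queue type. A self-contained sketch of that lookup, with a hypothetical simplified `struct qvec` standing in for `struct ice_q_vector`:

```c
#include <stddef.h>

/* Illustrative stand-in; only the Linux interrupt number matters here. */
struct qvec {
	int virq;	/* like q_vector->irq.virq */
};

/*
 * Mirrors ice_vsi_get_vector_from_irq(): return the vector whose Linux
 * interrupt number matches @irq_num, or NULL when no vector owns it
 * (the caller then fails with -EINVAL, as ice_queue_change_napi() does).
 */
static struct qvec *vec_from_irq(struct qvec *vecs, int n, int irq_num)
{
	for (int i = 0; i < n; i++) {
		if (vecs[i].virq == irq_num)
			return &vecs[i];
	}
	return NULL;
}
```

A linear scan is adequate here because the number of q_vectors per VSI is small and the operation is a rare control-path reconfiguration, not a fast-path lookup.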