From patchwork Fri Feb 12 03:21:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084699 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76A98C433DB for ; Fri, 12 Feb 2021 03:22:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3123F64E3C for ; Fri, 12 Feb 2021 03:22:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229740AbhBLDWd (ORCPT ); Thu, 11 Feb 2021 22:22:33 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12512 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229712AbhBLDW1 (ORCPT ); Thu, 11 Feb 2021 22:22:27 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfC5RFLzjMHk; Fri, 12 Feb 2021 11:20:19 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:35 +0800 From: Huazhong Tan To: , CC: , , , , , Peng Li , Huazhong Tan Subject: [PATCH V2 net-next 01/13] net: hns3: refactor out hclge_cmd_convert_err_code() Date: Fri, 12 Feb 2021 11:21:01 +0800 Message-ID: <20210212032113.5384-2-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Peng Li To improve code readability and maintainability, refactor hclge_cmd_convert_err_code() with an array of imp_errcode and common_errno mapping, instead of a bloated switch/case. 
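The same lookup pattern applies anywhere a small fixed set of firmware codes has to be translated into kernel errnos. A minimal, self-contained user-space sketch of the idea is below; the numeric codes, the reduced table and the convert_err_code()/ARRAY_SIZE() names are stand-ins for illustration, not the hns3 definitions:

#include <errno.h>
#include <stdio.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

/* pair one firmware (IMP) return code with a common errno */
struct errcode {
	unsigned int imp_errcode;
	int common_errno;
};

/* reduced, made-up table: a real one lists every firmware code */
static const struct errcode cmd_errcode[] = {
	{ 0, 0 },           /* success */
	{ 1, -EPERM },      /* no authority */
	{ 2, -EOPNOTSUPP }, /* opcode not supported */
	{ 3, -EINVAL },     /* bad parameter */
};

static int convert_err_code(unsigned int desc_ret)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(cmd_errcode); i++)
		if (cmd_errcode[i].imp_errcode == desc_ret)
			return cmd_errcode[i].common_errno;

	return -EIO; /* unknown firmware code falls back to a generic error */
}

int main(void)
{
	printf("%d %d\n", convert_err_code(2), convert_err_code(99));
	return 0;
}

Compared with the switch/case, adding a new code is a one-line table entry and the -EIO fallback lives in exactly one place.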
Signed-off-by: Peng Li Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_cmd.c | 55 +++++++++---------- 1 file changed, 27 insertions(+), 28 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c index 6546b47bef88..cb2c955ce52c 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c @@ -189,36 +189,35 @@ static bool hclge_is_special_opcode(u16 opcode) return false; } +struct errcode { + u32 imp_errcode; + int common_errno; +}; + static int hclge_cmd_convert_err_code(u16 desc_ret) { - switch (desc_ret) { - case HCLGE_CMD_EXEC_SUCCESS: - return 0; - case HCLGE_CMD_NO_AUTH: - return -EPERM; - case HCLGE_CMD_NOT_SUPPORTED: - return -EOPNOTSUPP; - case HCLGE_CMD_QUEUE_FULL: - return -EXFULL; - case HCLGE_CMD_NEXT_ERR: - return -ENOSR; - case HCLGE_CMD_UNEXE_ERR: - return -ENOTBLK; - case HCLGE_CMD_PARA_ERR: - return -EINVAL; - case HCLGE_CMD_RESULT_ERR: - return -ERANGE; - case HCLGE_CMD_TIMEOUT: - return -ETIME; - case HCLGE_CMD_HILINK_ERR: - return -ENOLINK; - case HCLGE_CMD_QUEUE_ILLEGAL: - return -ENXIO; - case HCLGE_CMD_INVALID: - return -EBADR; - default: - return -EIO; - } + struct errcode hclge_cmd_errcode[] = { + {HCLGE_CMD_EXEC_SUCCESS, 0}, + {HCLGE_CMD_NO_AUTH, -EPERM}, + {HCLGE_CMD_NOT_SUPPORTED, -EOPNOTSUPP}, + {HCLGE_CMD_QUEUE_FULL, -EXFULL}, + {HCLGE_CMD_NEXT_ERR, -ENOSR}, + {HCLGE_CMD_UNEXE_ERR, -ENOTBLK}, + {HCLGE_CMD_PARA_ERR, -EINVAL}, + {HCLGE_CMD_RESULT_ERR, -ERANGE}, + {HCLGE_CMD_TIMEOUT, -ETIME}, + {HCLGE_CMD_HILINK_ERR, -ENOLINK}, + {HCLGE_CMD_QUEUE_ILLEGAL, -ENXIO}, + {HCLGE_CMD_INVALID, -EBADR}, + }; + u32 errcode_count = ARRAY_SIZE(hclge_cmd_errcode); + u32 i; + + for (i = 0; i < errcode_count; i++) + if (hclge_cmd_errcode[i].imp_errcode == desc_ret) + return hclge_cmd_errcode[i].common_errno; + + return -EIO; } static int hclge_cmd_check_retval(struct hclge_hw *hw, struct hclge_desc *desc, From patchwork Fri Feb 12 03:21:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084701 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A744C433E0 for ; Fri, 12 Feb 2021 03:22:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1623464E57 for ; Fri, 12 Feb 2021 03:22:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229730AbhBLDWb (ORCPT ); Thu, 11 Feb 2021 22:22:31 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12511 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229602AbhBLDW1 (ORCPT ); Thu, 11 Feb 2021 22:22:27 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfC4mrYzjMHd; Fri, 12 Feb 2021 11:20:19 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by 
DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:35 +0800 From: Huazhong Tan To: , CC: , , , , , Peng Li , Huazhong Tan Subject: [PATCH V2 net-next 02/13] net: hns3: refactor out hclgevf_cmd_convert_err_code() Date: Fri, 12 Feb 2021 11:21:02 +0800 Message-ID: <20210212032113.5384-3-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Peng Li To improve code readability and maintainability, refactor hclgevf_cmd_convert_err_code() with an array of imp_errcode and common_errno mapping, instead of a bloated switch/case. Signed-off-by: Peng Li Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3vf/hclgevf_cmd.c | 55 +++++++++---------- 1 file changed, 27 insertions(+), 28 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c index 0f93c2dd890d..603665e5bf39 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c @@ -176,36 +176,35 @@ void hclgevf_cmd_setup_basic_desc(struct hclgevf_desc *desc, desc->flag &= cpu_to_le16(~HCLGEVF_CMD_FLAG_WR); } +struct vf_errcode { + u32 imp_errcode; + int common_errno; +}; + static int hclgevf_cmd_convert_err_code(u16 desc_ret) { - switch (desc_ret) { - case HCLGEVF_CMD_EXEC_SUCCESS: - return 0; - case HCLGEVF_CMD_NO_AUTH: - return -EPERM; - case HCLGEVF_CMD_NOT_SUPPORTED: - return -EOPNOTSUPP; - case HCLGEVF_CMD_QUEUE_FULL: - return -EXFULL; - case HCLGEVF_CMD_NEXT_ERR: - return -ENOSR; - case HCLGEVF_CMD_UNEXE_ERR: - return -ENOTBLK; - case HCLGEVF_CMD_PARA_ERR: - return -EINVAL; - case HCLGEVF_CMD_RESULT_ERR: - return -ERANGE; - case HCLGEVF_CMD_TIMEOUT: - return -ETIME; - case HCLGEVF_CMD_HILINK_ERR: - return -ENOLINK; - case HCLGEVF_CMD_QUEUE_ILLEGAL: - return -ENXIO; - case HCLGEVF_CMD_INVALID: - return -EBADR; - default: - return -EIO; - } + struct vf_errcode hclgevf_cmd_errcode[] = { + {HCLGEVF_CMD_EXEC_SUCCESS, 0}, + {HCLGEVF_CMD_NO_AUTH, -EPERM}, + {HCLGEVF_CMD_NOT_SUPPORTED, -EOPNOTSUPP}, + {HCLGEVF_CMD_QUEUE_FULL, -EXFULL}, + {HCLGEVF_CMD_NEXT_ERR, -ENOSR}, + {HCLGEVF_CMD_UNEXE_ERR, -ENOTBLK}, + {HCLGEVF_CMD_PARA_ERR, -EINVAL}, + {HCLGEVF_CMD_RESULT_ERR, -ERANGE}, + {HCLGEVF_CMD_TIMEOUT, -ETIME}, + {HCLGEVF_CMD_HILINK_ERR, -ENOLINK}, + {HCLGEVF_CMD_QUEUE_ILLEGAL, -ENXIO}, + {HCLGEVF_CMD_INVALID, -EBADR}, + }; + u32 errcode_count = ARRAY_SIZE(hclgevf_cmd_errcode); + u32 i; + + for (i = 0; i < errcode_count; i++) + if (hclgevf_cmd_errcode[i].imp_errcode == desc_ret) + return hclgevf_cmd_errcode[i].common_errno; + + return -EIO; } /* hclgevf_cmd_send - send command to command queue From patchwork Fri Feb 12 03:21:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084707 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, 
MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C1F1C433DB for ; Fri, 12 Feb 2021 03:22:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5A36764E3C for ; Fri, 12 Feb 2021 03:22:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229853AbhBLDWw (ORCPT ); Thu, 11 Feb 2021 22:22:52 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12513 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229710AbhBLDW2 (ORCPT ); Thu, 11 Feb 2021 22:22:28 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfC5fmbzjMHp; Fri, 12 Feb 2021 11:20:19 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:36 +0800 From: Huazhong Tan To: , CC: , , , , , Peng Li , Huazhong Tan Subject: [PATCH V2 net-next 03/13] net: hns3: clean up hns3_dbg_cmd_write() Date: Fri, 12 Feb 2021 11:21:03 +0800 Message-ID: <20210212032113.5384-4-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Peng Li As more commands are added, hns3_dbg_cmd_write() is going to get more bloated, so move the part about command check into a separate function. 
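A stripped-down user-space sketch of that split is below; the command names and the dump_*()/check_cmd()/print_help() helpers are placeholders chosen for the example, not the real hns3_dbg_* functions:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* stand-ins for the real dump routines; each returns 0 or -errno */
static int dump_queue_info(const char *buf) { printf("queue info: %s\n", buf); return 0; }
static int dump_queue_map(void) { printf("queue map\n"); return 0; }
static void print_help(void) { printf("commands: help, queue info, queue map\n"); }

/*
 * Dispatch one command string.  Keeping this out of the write handler
 * means the handler only copies and trims the user buffer, and adding
 * a new command only touches this function.
 */
static int check_cmd(const char *cmd_buf)
{
	int ret = 0;

	if (strncmp(cmd_buf, "help", 4) == 0)
		print_help();
	else if (strncmp(cmd_buf, "queue info", 10) == 0)
		ret = dump_queue_info(cmd_buf);
	else if (strncmp(cmd_buf, "queue map", 9) == 0)
		ret = dump_queue_map();
	else
		ret = -EOPNOTSUPP; /* unknown command */

	return ret;
}

int main(void)
{
	printf("%d\n", check_cmd("queue info 0"));
	printf("%d\n", check_cmd("bogus"));
	return 0;
}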
Signed-off-by: Peng Li Signed-off-by: Huazhong Tan --- .../ethernet/hisilicon/hns3/hns3_debugfs.c | 44 +++++++++++-------- 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c index 818ac2c7c7ea..dd11c57027bb 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c @@ -423,6 +423,30 @@ static ssize_t hns3_dbg_cmd_read(struct file *filp, char __user *buffer, return (*ppos = len); } +static int hns3_dbg_check_cmd(struct hnae3_handle *handle, char *cmd_buf) +{ + int ret = 0; + + if (strncmp(cmd_buf, "help", 4) == 0) + hns3_dbg_help(handle); + else if (strncmp(cmd_buf, "queue info", 10) == 0) + ret = hns3_dbg_queue_info(handle, cmd_buf); + else if (strncmp(cmd_buf, "queue map", 9) == 0) + ret = hns3_dbg_queue_map(handle); + else if (strncmp(cmd_buf, "bd info", 7) == 0) + ret = hns3_dbg_bd_info(handle, cmd_buf); + else if (strncmp(cmd_buf, "dev capability", 14) == 0) + hns3_dbg_dev_caps(handle); + else if (strncmp(cmd_buf, "dev spec", 8) == 0) + hns3_dbg_dev_specs(handle); + else if (handle->ae_algo->ops->dbg_run_cmd) + ret = handle->ae_algo->ops->dbg_run_cmd(handle, cmd_buf); + else + ret = -EOPNOTSUPP; + + return ret; +} + static ssize_t hns3_dbg_cmd_write(struct file *filp, const char __user *buffer, size_t count, loff_t *ppos) { @@ -430,7 +454,7 @@ static ssize_t hns3_dbg_cmd_write(struct file *filp, const char __user *buffer, struct hns3_nic_priv *priv = handle->priv; char *cmd_buf, *cmd_buf_tmp; int uncopied_bytes; - int ret = 0; + int ret; if (*ppos != 0) return 0; @@ -461,23 +485,7 @@ static ssize_t hns3_dbg_cmd_write(struct file *filp, const char __user *buffer, count = cmd_buf_tmp - cmd_buf + 1; } - if (strncmp(cmd_buf, "help", 4) == 0) - hns3_dbg_help(handle); - else if (strncmp(cmd_buf, "queue info", 10) == 0) - ret = hns3_dbg_queue_info(handle, cmd_buf); - else if (strncmp(cmd_buf, "queue map", 9) == 0) - ret = hns3_dbg_queue_map(handle); - else if (strncmp(cmd_buf, "bd info", 7) == 0) - ret = hns3_dbg_bd_info(handle, cmd_buf); - else if (strncmp(cmd_buf, "dev capability", 14) == 0) - hns3_dbg_dev_caps(handle); - else if (strncmp(cmd_buf, "dev spec", 8) == 0) - hns3_dbg_dev_specs(handle); - else if (handle->ae_algo->ops->dbg_run_cmd) - ret = handle->ae_algo->ops->dbg_run_cmd(handle, cmd_buf); - else - ret = -EOPNOTSUPP; - + ret = hns3_dbg_check_cmd(handle, cmd_buf); if (ret) hns3_dbg_help(handle); From patchwork Fri Feb 12 03:21:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084709 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2F1AC433E0 for ; Fri, 12 Feb 2021 03:23:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7574B64E57 for ; Fri, 12 Feb 2021 03:23:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229837AbhBLDXQ 
(ORCPT ); Thu, 11 Feb 2021 22:23:16 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12515 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229870AbhBLDXJ (ORCPT ); Thu, 11 Feb 2021 22:23:09 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfC5sSpzjMHr; Fri, 12 Feb 2021 11:20:19 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:36 +0800 From: Huazhong Tan To: , CC: , , , , , Jiaran Zhang , Huazhong Tan Subject: [PATCH V2 net-next 04/13] net: hns3: use ipv6_addr_any() helper Date: Fri, 12 Feb 2021 11:21:04 +0800 Message-ID: <20210212032113.5384-5-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiaran Zhang Use common ipv6_addr_any() to determine if an addr is ipv6 any addr. Signed-off-by: Jiaran Zhang Signed-off-by: Huazhong Tan --- .../net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index 7d81ffed4dc0..d3e68963967d 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include "hclge_cmd.h" #include "hclge_dcb.h" @@ -5508,12 +5509,10 @@ static int hclge_fd_check_tcpip6_tuple(struct ethtool_tcpip6_spec *spec, BIT(INNER_IP_TOS); /* check whether src/dst ip address used */ - if (!spec->ip6src[0] && !spec->ip6src[1] && - !spec->ip6src[2] && !spec->ip6src[3]) + if (ipv6_addr_any((struct in6_addr *)spec->ip6src)) *unused_tuple |= BIT(INNER_SRC_IP); - if (!spec->ip6dst[0] && !spec->ip6dst[1] && - !spec->ip6dst[2] && !spec->ip6dst[3]) + if (ipv6_addr_any((struct in6_addr *)spec->ip6dst)) *unused_tuple |= BIT(INNER_DST_IP); if (!spec->psrc) @@ -5538,12 +5537,10 @@ static int hclge_fd_check_ip6_tuple(struct ethtool_usrip6_spec *spec, BIT(INNER_IP_TOS) | BIT(INNER_SRC_PORT) | BIT(INNER_DST_PORT); /* check whether src/dst ip address used */ - if (!spec->ip6src[0] && !spec->ip6src[1] && - !spec->ip6src[2] && !spec->ip6src[3]) + if (ipv6_addr_any((struct in6_addr *)spec->ip6src)) *unused_tuple |= BIT(INNER_SRC_IP); - if (!spec->ip6dst[0] && !spec->ip6dst[1] && - !spec->ip6dst[2] && !spec->ip6dst[3]) + if (ipv6_addr_any((struct in6_addr *)spec->ip6dst)) *unused_tuple |= BIT(INNER_DST_IP); if (!spec->l4_proto) From patchwork Fri Feb 12 03:21:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084711 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no 
version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9770C433E6 for ; Fri, 12 Feb 2021 03:23:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9AD7E64E3C for ; Fri, 12 Feb 2021 03:23:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229905AbhBLDXW (ORCPT ); Thu, 11 Feb 2021 22:23:22 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12514 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229872AbhBLDXJ (ORCPT ); Thu, 11 Feb 2021 22:23:09 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfC5CnKzjMHh; Fri, 12 Feb 2021 11:20:19 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:37 +0800 From: Huazhong Tan To: , CC: , , , , , Peng Li , Huazhong Tan Subject: [PATCH V2 net-next 05/13] net: hns3: refactor out hclge_set_vf_vlan_common() Date: Fri, 12 Feb 2021 11:21:05 +0800 Message-ID: <20210212032113.5384-6-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Peng Li To improve code readability and maintainability, separate the command handling part and the status parsing part from bloated hclge_set_vf_vlan_common(). Signed-off-by: Peng Li Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_main.c | 73 ++++++++++++------- 1 file changed, 48 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index d3e68963967d..3eb675d54d6f 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -8786,32 +8786,16 @@ static void hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable) handle->netdev_flags &= ~HNAE3_VLAN_FLTR; } -static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, u16 vfid, - bool is_kill, u16 vlan, - __be16 proto) +static int hclge_set_vf_vlan_filter_cmd(struct hclge_dev *hdev, u16 vfid, + bool is_kill, u16 vlan, + struct hclge_desc *desc) { - struct hclge_vport *vport = &hdev->vport[vfid]; struct hclge_vlan_filter_vf_cfg_cmd *req0; struct hclge_vlan_filter_vf_cfg_cmd *req1; - struct hclge_desc desc[2]; u8 vf_byte_val; u8 vf_byte_off; int ret; - /* if vf vlan table is full, firmware will close vf vlan filter, it - * is unable and unnecessary to add new vlan id to vf vlan filter. - * If spoof check is enable, and vf vlan is full, it shouldn't add - * new vlan, because tx packets with these vlan id will be dropped. 
- */ - if (test_bit(vfid, hdev->vf_vlan_full) && !is_kill) { - if (vport->vf_info.spoofchk && vlan) { - dev_err(&hdev->pdev->dev, - "Can't add vlan due to spoof check is on and vf vlan table is full\n"); - return -EPERM; - } - return 0; - } - hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_VLAN_FILTER_VF_CFG, false); hclge_cmd_setup_basic_desc(&desc[1], @@ -8841,12 +8825,22 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, u16 vfid, return ret; } + return 0; +} + +static int hclge_check_vf_vlan_cmd_status(struct hclge_dev *hdev, u16 vfid, + bool is_kill, struct hclge_desc *desc) +{ + struct hclge_vlan_filter_vf_cfg_cmd *req; + + req = (struct hclge_vlan_filter_vf_cfg_cmd *)desc[0].data; + if (!is_kill) { #define HCLGE_VF_VLAN_NO_ENTRY 2 - if (!req0->resp_code || req0->resp_code == 1) + if (!req->resp_code || req->resp_code == 1) return 0; - if (req0->resp_code == HCLGE_VF_VLAN_NO_ENTRY) { + if (req->resp_code == HCLGE_VF_VLAN_NO_ENTRY) { set_bit(vfid, hdev->vf_vlan_full); dev_warn(&hdev->pdev->dev, "vf vlan table is full, vf vlan filter is disabled\n"); @@ -8855,10 +8849,10 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, u16 vfid, dev_err(&hdev->pdev->dev, "Add vf vlan filter fail, ret =%u.\n", - req0->resp_code); + req->resp_code); } else { #define HCLGE_VF_VLAN_DEL_NO_FOUND 1 - if (!req0->resp_code) + if (!req->resp_code) return 0; /* vf vlan filter is disabled when vf vlan table is full, @@ -8866,17 +8860,46 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, u16 vfid, * Just return 0 without warning, avoid massive verbose * print logs when unload. */ - if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) + if (req->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) return 0; dev_err(&hdev->pdev->dev, "Kill vf vlan filter fail, ret =%u.\n", - req0->resp_code); + req->resp_code); } return -EIO; } +static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, u16 vfid, + bool is_kill, u16 vlan, + __be16 proto) +{ + struct hclge_vport *vport = &hdev->vport[vfid]; + struct hclge_desc desc[2]; + int ret; + + /* if vf vlan table is full, firmware will close vf vlan filter, it + * is unable and unnecessary to add new vlan id to vf vlan filter. + * If spoof check is enable, and vf vlan is full, it shouldn't add + * new vlan, because tx packets with these vlan id will be dropped. 
+ */ + if (test_bit(vfid, hdev->vf_vlan_full) && !is_kill) { + if (vport->vf_info.spoofchk && vlan) { + dev_err(&hdev->pdev->dev, + "Can't add vlan due to spoof check is on and vf vlan table is full\n"); + return -EPERM; + } + return 0; + } + + ret = hclge_set_vf_vlan_filter_cmd(hdev, vfid, is_kill, vlan, desc); + if (ret) + return ret; + + return hclge_check_vf_vlan_cmd_status(hdev, vfid, is_kill, desc); +} + static int hclge_set_port_vlan_filter(struct hclge_dev *hdev, __be16 proto, u16 vlan_id, bool is_kill) { From patchwork Fri Feb 12 03:21:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084705 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9696BC433E9 for ; Fri, 12 Feb 2021 03:22:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 57FFE64E74 for ; Fri, 12 Feb 2021 03:22:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229777AbhBLDWh (ORCPT ); Thu, 11 Feb 2021 22:22:37 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12510 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229714AbhBLDW1 (ORCPT ); Thu, 11 Feb 2021 22:22:27 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.60]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfC50kVzjMHg; Fri, 12 Feb 2021 11:20:19 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:37 +0800 From: Huazhong Tan To: , CC: , , , , , Jian Shen , Huazhong Tan Subject: [PATCH V2 net-next 06/13] net: hns3: refactor out hclge_get_rss_tuple() Date: Fri, 12 Feb 2021 11:21:06 +0800 Message-ID: <20210212032113.5384-7-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jian Shen To improve code readability and maintainability, separate the flow type parsing part and the converting part from bloated hclge_get_rss_tuple(). 
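The converting half of that split is a pure function of the tuple bitmask, which is what makes it shareable across flow types and easy to test on its own. A self-contained user-space sketch follows; every bit value here is invented for the example rather than taken from the driver or the ethtool headers:

#include <stdint.h>
#include <stdio.h>

/* illustrative bit definitions only */
#define S_IP_BIT     0x1u
#define D_IP_BIT     0x2u
#define S_PORT_BIT   0x4u
#define D_PORT_BIT   0x8u

#define RXH_IP_SRC   0x10u
#define RXH_IP_DST   0x20u
#define RXH_L4_B_0_1 0x40u
#define RXH_L4_B_2_3 0x80u

/* translate a hardware tuple bitmask into the user-facing report word */
static uint64_t convert_rss_tuple(uint8_t tuple_sets)
{
	uint64_t tuple_data = 0;

	if (tuple_sets & D_PORT_BIT)
		tuple_data |= RXH_L4_B_2_3;
	if (tuple_sets & S_PORT_BIT)
		tuple_data |= RXH_L4_B_0_1;
	if (tuple_sets & D_IP_BIT)
		tuple_data |= RXH_IP_DST;
	if (tuple_sets & S_IP_BIT)
		tuple_data |= RXH_IP_SRC;

	return tuple_data;
}

int main(void)
{
	/* src + dst IP hashing enabled -> expect 0x30 with these values */
	printf("0x%llx\n",
	       (unsigned long long)convert_rss_tuple(S_IP_BIT | D_IP_BIT));
	return 0;
}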
Signed-off-by: Jian Shen Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_main.c | 59 ++++++++++++------- 1 file changed, 38 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index 3eb675d54d6f..17090c2b6c8b 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -4580,52 +4580,69 @@ static int hclge_set_rss_tuple(struct hnae3_handle *handle, return 0; } -static int hclge_get_rss_tuple(struct hnae3_handle *handle, - struct ethtool_rxnfc *nfc) +static int hclge_get_vport_rss_tuple(struct hclge_vport *vport, int flow_type, + u8 *tuple_sets) { - struct hclge_vport *vport = hclge_get_vport(handle); - u8 tuple_sets; - - nfc->data = 0; - - switch (nfc->flow_type) { + switch (flow_type) { case TCP_V4_FLOW: - tuple_sets = vport->rss_tuple_sets.ipv4_tcp_en; + *tuple_sets = vport->rss_tuple_sets.ipv4_tcp_en; break; case UDP_V4_FLOW: - tuple_sets = vport->rss_tuple_sets.ipv4_udp_en; + *tuple_sets = vport->rss_tuple_sets.ipv4_udp_en; break; case TCP_V6_FLOW: - tuple_sets = vport->rss_tuple_sets.ipv6_tcp_en; + *tuple_sets = vport->rss_tuple_sets.ipv6_tcp_en; break; case UDP_V6_FLOW: - tuple_sets = vport->rss_tuple_sets.ipv6_udp_en; + *tuple_sets = vport->rss_tuple_sets.ipv6_udp_en; break; case SCTP_V4_FLOW: - tuple_sets = vport->rss_tuple_sets.ipv4_sctp_en; + *tuple_sets = vport->rss_tuple_sets.ipv4_sctp_en; break; case SCTP_V6_FLOW: - tuple_sets = vport->rss_tuple_sets.ipv6_sctp_en; + *tuple_sets = vport->rss_tuple_sets.ipv6_sctp_en; break; case IPV4_FLOW: case IPV6_FLOW: - tuple_sets = HCLGE_S_IP_BIT | HCLGE_D_IP_BIT; + *tuple_sets = HCLGE_S_IP_BIT | HCLGE_D_IP_BIT; break; default: return -EINVAL; } - if (!tuple_sets) - return 0; + return 0; +} + +static u64 hclge_convert_rss_tuple(u8 tuple_sets) +{ + u64 tuple_data = 0; if (tuple_sets & HCLGE_D_PORT_BIT) - nfc->data |= RXH_L4_B_2_3; + tuple_data |= RXH_L4_B_2_3; if (tuple_sets & HCLGE_S_PORT_BIT) - nfc->data |= RXH_L4_B_0_1; + tuple_data |= RXH_L4_B_0_1; if (tuple_sets & HCLGE_D_IP_BIT) - nfc->data |= RXH_IP_DST; + tuple_data |= RXH_IP_DST; if (tuple_sets & HCLGE_S_IP_BIT) - nfc->data |= RXH_IP_SRC; + tuple_data |= RXH_IP_SRC; + + return tuple_data; +} + +static int hclge_get_rss_tuple(struct hnae3_handle *handle, + struct ethtool_rxnfc *nfc) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + u8 tuple_sets; + int ret; + + nfc->data = 0; + + ret = hclge_get_vport_rss_tuple(vport, nfc->flow_type, &tuple_sets); + if (ret || !tuple_sets) + return ret; + + nfc->data = hclge_convert_rss_tuple(tuple_sets); return 0; } From patchwork Fri Feb 12 03:21:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084713 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49DF3C433DB for ; Fri, 12 Feb 2021 03:23:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id 1601C64E3C for ; Fri, 12 Feb 2021 03:23:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229690AbhBLDXs (ORCPT ); Thu, 11 Feb 2021 22:23:48 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12516 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229880AbhBLDXK (ORCPT ); Thu, 11 Feb 2021 22:23:10 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.59]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfP5MkjzjMHX; Fri, 12 Feb 2021 11:20:29 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:45 +0800 From: Huazhong Tan To: , CC: , , , , , Jian Shen , Huazhong Tan Subject: [PATCH V2 net-next 07/13] net: hns3: refactor out hclgevf_get_rss_tuple() Date: Fri, 12 Feb 2021 11:21:07 +0800 Message-ID: <20210212032113.5384-8-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jian Shen To improve code readability and maintainability, separate the flow type parsing part and the converting part from bloated hclgevf_get_rss_tuple(). Signed-off-by: Jian Shen Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3vf/hclgevf_main.c | 67 ++++++++++++------- 1 file changed, 42 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c index ece31693e624..c4ac2b9771e8 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c @@ -954,56 +954,73 @@ static int hclgevf_set_rss_tuple(struct hnae3_handle *handle, return 0; } -static int hclgevf_get_rss_tuple(struct hnae3_handle *handle, - struct ethtool_rxnfc *nfc) +static int hclgevf_get_rss_tuple_by_flow_type(struct hclgevf_dev *hdev, + int flow_type, u8 *tuple_sets) { - struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); - struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; - u8 tuple_sets; - - if (hdev->ae_dev->dev_version < HNAE3_DEVICE_VERSION_V2) - return -EOPNOTSUPP; - - nfc->data = 0; - - switch (nfc->flow_type) { + switch (flow_type) { case TCP_V4_FLOW: - tuple_sets = rss_cfg->rss_tuple_sets.ipv4_tcp_en; + *tuple_sets = hdev->rss_cfg.rss_tuple_sets.ipv4_tcp_en; break; case UDP_V4_FLOW: - tuple_sets = rss_cfg->rss_tuple_sets.ipv4_udp_en; + *tuple_sets = hdev->rss_cfg.rss_tuple_sets.ipv4_udp_en; break; case TCP_V6_FLOW: - tuple_sets = rss_cfg->rss_tuple_sets.ipv6_tcp_en; + *tuple_sets = hdev->rss_cfg.rss_tuple_sets.ipv6_tcp_en; break; case UDP_V6_FLOW: - tuple_sets = rss_cfg->rss_tuple_sets.ipv6_udp_en; + *tuple_sets = hdev->rss_cfg.rss_tuple_sets.ipv6_udp_en; break; case SCTP_V4_FLOW: - tuple_sets = rss_cfg->rss_tuple_sets.ipv4_sctp_en; + *tuple_sets = hdev->rss_cfg.rss_tuple_sets.ipv4_sctp_en; break; case SCTP_V6_FLOW: - tuple_sets = rss_cfg->rss_tuple_sets.ipv6_sctp_en; + *tuple_sets = hdev->rss_cfg.rss_tuple_sets.ipv6_sctp_en; break; case IPV4_FLOW: case IPV6_FLOW: - tuple_sets = HCLGEVF_S_IP_BIT | HCLGEVF_D_IP_BIT; + *tuple_sets = HCLGEVF_S_IP_BIT | HCLGEVF_D_IP_BIT; break; 
default: return -EINVAL; } - if (!tuple_sets) - return 0; + return 0; +} + +static u64 hclgevf_convert_rss_tuple(u8 tuple_sets) +{ + u64 tuple_data = 0; if (tuple_sets & HCLGEVF_D_PORT_BIT) - nfc->data |= RXH_L4_B_2_3; + tuple_data |= RXH_L4_B_2_3; if (tuple_sets & HCLGEVF_S_PORT_BIT) - nfc->data |= RXH_L4_B_0_1; + tuple_data |= RXH_L4_B_0_1; if (tuple_sets & HCLGEVF_D_IP_BIT) - nfc->data |= RXH_IP_DST; + tuple_data |= RXH_IP_DST; if (tuple_sets & HCLGEVF_S_IP_BIT) - nfc->data |= RXH_IP_SRC; + tuple_data |= RXH_IP_SRC; + + return tuple_data; +} + +static int hclgevf_get_rss_tuple(struct hnae3_handle *handle, + struct ethtool_rxnfc *nfc) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + u8 tuple_sets; + int ret; + + if (hdev->ae_dev->dev_version < HNAE3_DEVICE_VERSION_V2) + return -EOPNOTSUPP; + + nfc->data = 0; + + ret = hclgevf_get_rss_tuple_by_flow_type(hdev, nfc->flow_type, + &tuple_sets); + if (ret || !tuple_sets) + return ret; + + nfc->data = hclgevf_convert_rss_tuple(tuple_sets); return 0; } From patchwork Fri Feb 12 03:21:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084715 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 37B36C433E0 for ; Fri, 12 Feb 2021 03:23:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ECE3A64E26 for ; Fri, 12 Feb 2021 03:23:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229872AbhBLDXa (ORCPT ); Thu, 11 Feb 2021 22:23:30 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12517 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229873AbhBLDXK (ORCPT ); Thu, 11 Feb 2021 22:23:10 -0500 Received: from DGGEMS403-HUB.china.huawei.com (unknown [172.30.72.59]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJfP5cQMzjMHj; Fri, 12 Feb 2021 11:20:29 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS403-HUB.china.huawei.com (10.3.19.203) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:21:46 +0800 From: Huazhong Tan To: , CC: , , , , , Jian Shen , Huazhong Tan Subject: [PATCH V2 net-next 08/13] net: hns3: split out hclge_dbg_dump_qos_buf_cfg() Date: Fri, 12 Feb 2021 11:21:08 +0800 Message-ID: <20210212032113.5384-9-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032113.5384-1-tanhuazhong@huawei.com> References: <20210212032113.5384-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jian Shen hclge_dbg_dump_qos_buf_cfg() is bloated, so split it into separate functions for readability and maintainability. 
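The shape of the refactor fits in a few lines of ordinary C: each dump step becomes an int-returning helper and the caller keeps the single error path. In the sketch below every helper is a placeholder that just prints or deliberately fails, so none of the names are the driver's:

#include <errno.h>
#include <stdio.h>

/* stand-ins for the per-command dump helpers; each returns 0 or -errno */
static int dump_tx_buf_cfg(void)       { printf("tx buf cfg\n"); return 0; }
static int dump_rx_priv_buf_cfg(void)  { printf("rx priv buf cfg\n"); return 0; }
static int dump_rx_common_wl_cfg(void) { return -EIO; /* simulate a failure */ }

/* the caller stays small: run each helper, funnel failures to one label */
static void dump_qos_buf_cfg(void)
{
	int ret;

	ret = dump_tx_buf_cfg();
	if (ret)
		goto err_cmd_send;

	ret = dump_rx_priv_buf_cfg();
	if (ret)
		goto err_cmd_send;

	ret = dump_rx_common_wl_cfg();
	if (ret)
		goto err_cmd_send;

	return;

err_cmd_send:
	fprintf(stderr, "dump qos buf cfg fail, ret = %d\n", ret);
}

int main(void)
{
	dump_qos_buf_cfg();
	return 0;
}

Adding another dump step is then one more three-line block in the caller instead of another hunk inside one long function.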
Signed-off-by: Jian Shen Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_debugfs.c | 158 +++++++++++++----- 1 file changed, 115 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c index a0a33c02ce25..6b1d197df881 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c @@ -984,39 +984,39 @@ static void hclge_dbg_dump_qos_pri_map(struct hclge_dev *hdev) dev_info(&hdev->pdev->dev, "pri_7_to_tc: 0x%x\n", pri_map->pri7_tc); } -static void hclge_dbg_dump_qos_buf_cfg(struct hclge_dev *hdev) +static int hclge_dbg_dump_tx_buf_cfg(struct hclge_dev *hdev) { struct hclge_tx_buff_alloc_cmd *tx_buf_cmd; - struct hclge_rx_priv_buff_cmd *rx_buf_cmd; - struct hclge_rx_priv_wl_buf *rx_priv_wl; - struct hclge_rx_com_wl *rx_packet_cnt; - struct hclge_rx_com_thrd *rx_com_thrd; - struct hclge_rx_com_wl *rx_com_wl; - enum hclge_opcode_type cmd; - struct hclge_desc desc[2]; + struct hclge_desc desc; int i, ret; - cmd = HCLGE_OPC_TX_BUFF_ALLOC; - hclge_cmd_setup_basic_desc(desc, cmd, true); - ret = hclge_cmd_send(&hdev->hw, desc, 1); + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_TX_BUFF_ALLOC, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) - goto err_qos_cmd_send; + return ret; dev_info(&hdev->pdev->dev, "dump qos buf cfg\n"); - - tx_buf_cmd = (struct hclge_tx_buff_alloc_cmd *)desc[0].data; + tx_buf_cmd = (struct hclge_tx_buff_alloc_cmd *)desc.data; for (i = 0; i < HCLGE_MAX_TC_NUM; i++) dev_info(&hdev->pdev->dev, "tx_packet_buf_tc_%d: 0x%x\n", i, le16_to_cpu(tx_buf_cmd->tx_pkt_buff[i])); - cmd = HCLGE_OPC_RX_PRIV_BUFF_ALLOC; - hclge_cmd_setup_basic_desc(desc, cmd, true); - ret = hclge_cmd_send(&hdev->hw, desc, 1); + return 0; +} + +static int hclge_dbg_dump_rx_priv_buf_cfg(struct hclge_dev *hdev) +{ + struct hclge_rx_priv_buff_cmd *rx_buf_cmd; + struct hclge_desc desc; + int i, ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_PRIV_BUFF_ALLOC, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) - goto err_qos_cmd_send; + return ret; dev_info(&hdev->pdev->dev, "\n"); - rx_buf_cmd = (struct hclge_rx_priv_buff_cmd *)desc[0].data; + rx_buf_cmd = (struct hclge_rx_priv_buff_cmd *)desc.data; for (i = 0; i < HCLGE_MAX_TC_NUM; i++) dev_info(&hdev->pdev->dev, "rx_packet_buf_tc_%d: 0x%x\n", i, le16_to_cpu(rx_buf_cmd->buf_num[i])); @@ -1024,43 +1024,61 @@ static void hclge_dbg_dump_qos_buf_cfg(struct hclge_dev *hdev) dev_info(&hdev->pdev->dev, "rx_share_buf: 0x%x\n", le16_to_cpu(rx_buf_cmd->shared_buf)); - cmd = HCLGE_OPC_RX_COM_WL_ALLOC; - hclge_cmd_setup_basic_desc(desc, cmd, true); - ret = hclge_cmd_send(&hdev->hw, desc, 1); + return 0; +} + +static int hclge_dbg_dump_rx_common_wl_cfg(struct hclge_dev *hdev) +{ + struct hclge_rx_com_wl *rx_com_wl; + struct hclge_desc desc; + int ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_COM_WL_ALLOC, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) - goto err_qos_cmd_send; + return ret; - rx_com_wl = (struct hclge_rx_com_wl *)desc[0].data; + rx_com_wl = (struct hclge_rx_com_wl *)desc.data; dev_info(&hdev->pdev->dev, "\n"); dev_info(&hdev->pdev->dev, "rx_com_wl: high: 0x%x, low: 0x%x\n", le16_to_cpu(rx_com_wl->com_wl.high), le16_to_cpu(rx_com_wl->com_wl.low)); - cmd = HCLGE_OPC_RX_GBL_PKT_CNT; - hclge_cmd_setup_basic_desc(desc, cmd, true); - ret = hclge_cmd_send(&hdev->hw, desc, 1); + return 0; +} + +static int 
hclge_dbg_dump_rx_global_pkt_cnt(struct hclge_dev *hdev) +{ + struct hclge_rx_com_wl *rx_packet_cnt; + struct hclge_desc desc; + int ret; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_GBL_PKT_CNT, true); + ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) - goto err_qos_cmd_send; + return ret; - rx_packet_cnt = (struct hclge_rx_com_wl *)desc[0].data; + rx_packet_cnt = (struct hclge_rx_com_wl *)desc.data; dev_info(&hdev->pdev->dev, "rx_global_packet_cnt: high: 0x%x, low: 0x%x\n", le16_to_cpu(rx_packet_cnt->com_wl.high), le16_to_cpu(rx_packet_cnt->com_wl.low)); - dev_info(&hdev->pdev->dev, "\n"); - if (!hnae3_dev_dcb_supported(hdev)) { - dev_info(&hdev->pdev->dev, - "Only DCB-supported dev supports rx priv wl\n"); - return; - } - cmd = HCLGE_OPC_RX_PRIV_WL_ALLOC; - hclge_cmd_setup_basic_desc(&desc[0], cmd, true); + return 0; +} + +static int hclge_dbg_dump_rx_priv_wl_buf_cfg(struct hclge_dev *hdev) +{ + struct hclge_rx_priv_wl_buf *rx_priv_wl; + struct hclge_desc desc[2]; + int i, ret; + + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_RX_PRIV_WL_ALLOC, true); desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); - hclge_cmd_setup_basic_desc(&desc[1], cmd, true); + hclge_cmd_setup_basic_desc(&desc[1], HCLGE_OPC_RX_PRIV_WL_ALLOC, true); ret = hclge_cmd_send(&hdev->hw, desc, 2); if (ret) - goto err_qos_cmd_send; + return ret; rx_priv_wl = (struct hclge_rx_priv_wl_buf *)desc[0].data; for (i = 0; i < HCLGE_TC_NUM_ONE_DESC; i++) @@ -1077,13 +1095,21 @@ static void hclge_dbg_dump_qos_buf_cfg(struct hclge_dev *hdev) le16_to_cpu(rx_priv_wl->tc_wl[i].high), le16_to_cpu(rx_priv_wl->tc_wl[i].low)); - cmd = HCLGE_OPC_RX_COM_THRD_ALLOC; - hclge_cmd_setup_basic_desc(&desc[0], cmd, true); + return 0; +} + +static int hclge_dbg_dump_rx_common_threshold_cfg(struct hclge_dev *hdev) +{ + struct hclge_rx_com_thrd *rx_com_thrd; + struct hclge_desc desc[2]; + int i, ret; + + hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_RX_COM_THRD_ALLOC, true); desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT); - hclge_cmd_setup_basic_desc(&desc[1], cmd, true); + hclge_cmd_setup_basic_desc(&desc[1], HCLGE_OPC_RX_COM_THRD_ALLOC, true); ret = hclge_cmd_send(&hdev->hw, desc, 2); if (ret) - goto err_qos_cmd_send; + return ret; dev_info(&hdev->pdev->dev, "\n"); rx_com_thrd = (struct hclge_rx_com_thrd *)desc[0].data; @@ -1100,6 +1126,52 @@ static void hclge_dbg_dump_qos_buf_cfg(struct hclge_dev *hdev) i + HCLGE_TC_NUM_ONE_DESC, le16_to_cpu(rx_com_thrd->com_thrd[i].high), le16_to_cpu(rx_com_thrd->com_thrd[i].low)); + + return 0; +} + +static void hclge_dbg_dump_qos_buf_cfg(struct hclge_dev *hdev) +{ + enum hclge_opcode_type cmd; + int ret; + + cmd = HCLGE_OPC_TX_BUFF_ALLOC; + ret = hclge_dbg_dump_tx_buf_cfg(hdev); + if (ret) + goto err_qos_cmd_send; + + cmd = HCLGE_OPC_RX_PRIV_BUFF_ALLOC; + ret = hclge_dbg_dump_rx_priv_buf_cfg(hdev); + if (ret) + goto err_qos_cmd_send; + + cmd = HCLGE_OPC_RX_COM_WL_ALLOC; + ret = hclge_dbg_dump_rx_common_wl_cfg(hdev); + if (ret) + goto err_qos_cmd_send; + + cmd = HCLGE_OPC_RX_GBL_PKT_CNT; + ret = hclge_dbg_dump_rx_global_pkt_cnt(hdev); + if (ret) + goto err_qos_cmd_send; + + dev_info(&hdev->pdev->dev, "\n"); + if (!hnae3_dev_dcb_supported(hdev)) { + dev_info(&hdev->pdev->dev, + "Only DCB-supported dev supports rx priv wl\n"); + return; + } + + cmd = HCLGE_OPC_RX_PRIV_WL_ALLOC; + ret = hclge_dbg_dump_rx_priv_wl_buf_cfg(hdev); + if (ret) + goto err_qos_cmd_send; + + cmd = HCLGE_OPC_RX_COM_THRD_ALLOC; + ret = hclge_dbg_dump_rx_common_threshold_cfg(hdev); + if (ret) + goto err_qos_cmd_send; + return; 
err_qos_cmd_send: From patchwork Fri Feb 12 03:24:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084723 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B96E6C433E0 for ; Fri, 12 Feb 2021 03:25:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8B8D464E36 for ; Fri, 12 Feb 2021 03:25:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229958AbhBLDZT (ORCPT ); Thu, 11 Feb 2021 22:25:19 -0500 Received: from szxga07-in.huawei.com ([45.249.212.35]:13343 "EHLO szxga07-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229716AbhBLDZE (ORCPT ); Thu, 11 Feb 2021 22:25:04 -0500 Received: from DGGEMS409-HUB.china.huawei.com (unknown [172.30.72.58]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4DcJjC6xDyz7k14; Fri, 12 Feb 2021 11:22:55 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS409-HUB.china.huawei.com (10.3.19.209) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:24:14 +0800 From: Huazhong Tan To: , CC: , , , , , Yufeng Mo , Huazhong Tan Subject: [PATCH V2 net-next 09/13] net: hns3: split out hclge_cmd_send() Date: Fri, 12 Feb 2021 11:24:13 +0800 Message-ID: <20210212032417.13076-1-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yufeng Mo hclge_cmd_send() is bloated, so split it into separate functions for readability and maintainability. 
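The descriptor-copy half of that split is easy to show in isolation. Below is a minimal user-space model of a send queue; the struct layout, ring size and names are invented for the sketch and are not the hclge definitions:

#include <stdio.h>

#define DESC_NUM 8

struct desc {
	unsigned short opcode;
	unsigned short retval;
};

/* a tiny command-send-queue model: fixed ring plus a producer index */
struct csq {
	struct desc ring[DESC_NUM];
	int next_to_use;
};

/* copy a batch of prefilled descriptors into the ring, wrapping the index */
static void cmd_copy_desc(struct csq *csq, const struct desc *desc, int num)
{
	int handle = 0;

	while (handle < num) {
		csq->ring[csq->next_to_use] = desc[handle];
		csq->next_to_use++;
		if (csq->next_to_use >= DESC_NUM)
			csq->next_to_use = 0;
		handle++;
	}
}

int main(void)
{
	struct csq csq = { .next_to_use = 6 };
	struct desc batch[3] = { { 1, 0 }, { 2, 0 }, { 3, 0 } };

	cmd_copy_desc(&csq, batch, 3);
	printf("next_to_use = %d\n", csq.next_to_use); /* 6 -> 7 -> 0 -> 1 */
	return 0;
}

The index wraparound is the only subtle part, and keeping it in one helper leaves the send routine itself reading as lock, copy, ring the doorbell, check the result.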
Signed-off-by: Yufeng Mo Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_cmd.c | 100 +++++++++++------- 1 file changed, 59 insertions(+), 41 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c index cb2c955ce52c..1bd0ddfaec4d 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c @@ -194,6 +194,22 @@ struct errcode { int common_errno; }; +static void hclge_cmd_copy_desc(struct hclge_hw *hw, struct hclge_desc *desc, + int num) +{ + struct hclge_desc *desc_to_use; + int handle = 0; + + while (handle < num) { + desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use]; + *desc_to_use = desc[handle]; + (hw->cmq.csq.next_to_use)++; + if (hw->cmq.csq.next_to_use >= hw->cmq.csq.desc_num) + hw->cmq.csq.next_to_use = 0; + handle++; + } +} + static int hclge_cmd_convert_err_code(u16 desc_ret) { struct errcode hclge_cmd_errcode[] = { @@ -243,6 +259,44 @@ static int hclge_cmd_check_retval(struct hclge_hw *hw, struct hclge_desc *desc, return hclge_cmd_convert_err_code(desc_ret); } +static int hclge_cmd_check_result(struct hclge_hw *hw, struct hclge_desc *desc, + int num, int ntc) +{ + struct hclge_dev *hdev = container_of(hw, struct hclge_dev, hw); + bool is_completed = false; + u32 timeout = 0; + int handle, ret; + + /** + * If the command is sync, wait for the firmware to write back, + * if multi descriptors to be sent, use the first one to check + */ + if (HCLGE_SEND_SYNC(le16_to_cpu(desc->flag))) { + do { + if (hclge_cmd_csq_done(hw)) { + is_completed = true; + break; + } + udelay(1); + timeout++; + } while (timeout < hw->cmq.tx_timeout); + } + + if (!is_completed) + ret = -EBADE; + else + ret = hclge_cmd_check_retval(hw, desc, num, ntc); + + /* Clean the command send queue */ + handle = hclge_cmd_csq_clean(hw); + if (handle < 0) + ret = handle; + else if (handle != num) + dev_warn(&hdev->pdev->dev, + "cleaned %d, need to clean %d\n", handle, num); + return ret; +} + /** * hclge_cmd_send - send command to command queue * @hw: pointer to the hw struct @@ -256,11 +310,7 @@ int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num) { struct hclge_dev *hdev = container_of(hw, struct hclge_dev, hw); struct hclge_cmq_ring *csq = &hw->cmq.csq; - struct hclge_desc *desc_to_use; - bool complete = false; - u32 timeout = 0; - int handle = 0; - int retval; + int ret; int ntc; spin_lock_bh(&hw->cmq.csq.lock); @@ -284,49 +334,17 @@ int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num) * which will be use for hardware to write back */ ntc = hw->cmq.csq.next_to_use; - while (handle < num) { - desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use]; - *desc_to_use = desc[handle]; - (hw->cmq.csq.next_to_use)++; - if (hw->cmq.csq.next_to_use >= hw->cmq.csq.desc_num) - hw->cmq.csq.next_to_use = 0; - handle++; - } + + hclge_cmd_copy_desc(hw, desc, num); /* Write to hardware */ hclge_write_dev(hw, HCLGE_NIC_CSQ_TAIL_REG, hw->cmq.csq.next_to_use); - /** - * If the command is sync, wait for the firmware to write back, - * if multi descriptors to be sent, use the first one to check - */ - if (HCLGE_SEND_SYNC(le16_to_cpu(desc->flag))) { - do { - if (hclge_cmd_csq_done(hw)) { - complete = true; - break; - } - udelay(1); - timeout++; - } while (timeout < hw->cmq.tx_timeout); - } - - if (!complete) - retval = -EBADE; - else - retval = hclge_cmd_check_retval(hw, desc, num, ntc); - - /* Clean the command send queue */ - 
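The completion-wait half follows the same pattern. The sketch below models only the bounded poll, with csq_done() simulated so it runs in user space; apart from the -EBADE fallback, which mirrors the patch, everything here is illustrative (the driver additionally udelay()s between polls):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* stand-in for "has the firmware consumed the queue?": done after 3 polls */
static bool csq_done(unsigned int polls_so_far)
{
	return polls_so_far >= 3;
}

/* poll until done or the timeout budget is spent, then report an errno */
static int wait_cmd_complete(unsigned int tx_timeout)
{
	bool is_completed = false;
	unsigned int timeout = 0;

	do {
		if (csq_done(timeout)) {
			is_completed = true;
			break;
		}
		timeout++;
	} while (timeout < tx_timeout);

	return is_completed ? 0 : -EBADE;
}

int main(void)
{
	/* a generous budget succeeds, a two-poll budget times out */
	printf("%d %d\n", wait_cmd_complete(10), wait_cmd_complete(2));
	return 0;
}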
handle = hclge_cmd_csq_clean(hw); - if (handle < 0) - retval = handle; - else if (handle != num) - dev_warn(&hdev->pdev->dev, - "cleaned %d, need to clean %d\n", handle, num); + ret = hclge_cmd_check_result(hw, desc, num, ntc); spin_unlock_bh(&hw->cmq.csq.lock); - return retval; + return ret; } static void hclge_set_default_capability(struct hclge_dev *hdev) From patchwork Fri Feb 12 03:24:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084717 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 627E4C433E0 for ; Fri, 12 Feb 2021 03:25:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3017564E3C for ; Fri, 12 Feb 2021 03:25:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229919AbhBLDZN (ORCPT ); Thu, 11 Feb 2021 22:25:13 -0500 Received: from szxga07-in.huawei.com ([45.249.212.35]:13344 "EHLO szxga07-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229611AbhBLDZE (ORCPT ); Thu, 11 Feb 2021 22:25:04 -0500 Received: from DGGEMS409-HUB.china.huawei.com (unknown [172.30.72.58]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4DcJjC6hznz7jxd; Fri, 12 Feb 2021 11:22:55 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS409-HUB.china.huawei.com (10.3.19.209) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:24:14 +0800 From: Huazhong Tan To: , CC: , , , , , Yufeng Mo , Huazhong Tan Subject: [PATCH V2 net-next 10/13] net: hns3: split out hclgevf_cmd_send() Date: Fri, 12 Feb 2021 11:24:14 +0800 Message-ID: <20210212032417.13076-2-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032417.13076-1-tanhuazhong@huawei.com> References: <20210212032417.13076-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Yufeng Mo hclgevf_cmd_send() is bloated, so split it into separate functions for readability and maintainability. 
Signed-off-by: Yufeng Mo Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3vf/hclgevf_cmd.c | 141 ++++++++++-------- 1 file changed, 81 insertions(+), 60 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c index 603665e5bf39..46700c427849 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c @@ -181,6 +181,22 @@ struct vf_errcode { int common_errno; }; +static void hclgevf_cmd_copy_desc(struct hclgevf_hw *hw, + struct hclgevf_desc *desc, int num) +{ + struct hclgevf_desc *desc_to_use; + int handle = 0; + + while (handle < num) { + desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use]; + *desc_to_use = desc[handle]; + (hw->cmq.csq.next_to_use)++; + if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num) + hw->cmq.csq.next_to_use = 0; + handle++; + } +} + static int hclgevf_cmd_convert_err_code(u16 desc_ret) { struct vf_errcode hclgevf_cmd_errcode[] = { @@ -207,6 +223,66 @@ static int hclgevf_cmd_convert_err_code(u16 desc_ret) return -EIO; } +static int hclgevf_cmd_check_retval(struct hclgevf_hw *hw, + struct hclgevf_desc *desc, int num, int ntc) +{ + u16 opcode, desc_ret; + int handle; + + opcode = le16_to_cpu(desc[0].opcode); + for (handle = 0; handle < num; handle++) { + /* Get the result of hardware write back */ + desc[handle] = hw->cmq.csq.desc[ntc]; + ntc++; + if (ntc == hw->cmq.csq.desc_num) + ntc = 0; + } + if (likely(!hclgevf_is_special_opcode(opcode))) + desc_ret = le16_to_cpu(desc[num - 1].retval); + else + desc_ret = le16_to_cpu(desc[0].retval); + hw->cmq.last_status = desc_ret; + + return hclgevf_cmd_convert_err_code(desc_ret); +} + +static int hclgevf_cmd_check_result(struct hclgevf_hw *hw, + struct hclgevf_desc *desc, int num, int ntc) +{ + struct hclgevf_dev *hdev = (struct hclgevf_dev *)hw->hdev; + bool is_completed = false; + u32 timeout = 0; + int handle, ret; + + /* If the command is sync, wait for the firmware to write back, + * if multi descriptors to be sent, use the first one to check + */ + if (HCLGEVF_SEND_SYNC(le16_to_cpu(desc->flag))) { + do { + if (hclgevf_cmd_csq_done(hw)) { + is_completed = true; + break; + } + udelay(1); + timeout++; + } while (timeout < hw->cmq.tx_timeout); + } + + if (!is_completed) + ret = -EBADE; + else + ret = hclgevf_cmd_check_retval(hw, desc, num, ntc); + + /* Clean the command send queue */ + handle = hclgevf_cmd_csq_clean(hw); + if (handle < 0) + ret = handle; + else if (handle != num) + dev_warn(&hdev->pdev->dev, + "cleaned %d, need to clean %d\n", handle, num); + return ret; +} + /* hclgevf_cmd_send - send command to command queue * @hw: pointer to the hw struct * @desc: prefilled descriptor for describing the command @@ -219,13 +295,7 @@ int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num) { struct hclgevf_dev *hdev = (struct hclgevf_dev *)hw->hdev; struct hclgevf_cmq_ring *csq = &hw->cmq.csq; - struct hclgevf_desc *desc_to_use; - bool complete = false; - u32 timeout = 0; - int handle = 0; - int status = 0; - u16 retval; - u16 opcode; + int ret; int ntc; spin_lock_bh(&hw->cmq.csq.lock); @@ -249,67 +319,18 @@ int hclgevf_cmd_send(struct hclgevf_hw *hw, struct hclgevf_desc *desc, int num) * which will be use for hardware to write back */ ntc = hw->cmq.csq.next_to_use; - opcode = le16_to_cpu(desc[0].opcode); - while (handle < num) { - desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use]; - *desc_to_use = desc[handle]; - 
(hw->cmq.csq.next_to_use)++; - if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num) - hw->cmq.csq.next_to_use = 0; - handle++; - } + + hclgevf_cmd_copy_desc(hw, desc, num); /* Write to hardware */ hclgevf_write_dev(hw, HCLGEVF_NIC_CSQ_TAIL_REG, hw->cmq.csq.next_to_use); - /* If the command is sync, wait for the firmware to write back, - * if multi descriptors to be sent, use the first one to check - */ - if (HCLGEVF_SEND_SYNC(le16_to_cpu(desc->flag))) { - do { - if (hclgevf_cmd_csq_done(hw)) - break; - udelay(1); - timeout++; - } while (timeout < hw->cmq.tx_timeout); - } - - if (hclgevf_cmd_csq_done(hw)) { - complete = true; - handle = 0; - - while (handle < num) { - /* Get the result of hardware write back */ - desc_to_use = &hw->cmq.csq.desc[ntc]; - desc[handle] = *desc_to_use; - - if (likely(!hclgevf_is_special_opcode(opcode))) - retval = le16_to_cpu(desc[handle].retval); - else - retval = le16_to_cpu(desc[0].retval); - - status = hclgevf_cmd_convert_err_code(retval); - hw->cmq.last_status = (enum hclgevf_cmd_status)retval; - ntc++; - handle++; - if (ntc == hw->cmq.csq.desc_num) - ntc = 0; - } - } - - if (!complete) - status = -EBADE; - - /* Clean the command send queue */ - handle = hclgevf_cmd_csq_clean(hw); - if (handle != num) - dev_warn(&hdev->pdev->dev, - "cleaned %d, need to clean %d\n", handle, num); + ret = hclgevf_cmd_check_result(hw, desc, num, ntc); spin_unlock_bh(&hw->cmq.csq.lock); - return status; + return ret; } static void hclgevf_set_default_capability(struct hclgevf_dev *hdev) From patchwork Fri Feb 12 03:24:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084719 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC189C433E0 for ; Fri, 12 Feb 2021 03:25:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9770A64E3C for ; Fri, 12 Feb 2021 03:25:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229929AbhBLDZP (ORCPT ); Thu, 11 Feb 2021 22:25:15 -0500 Received: from szxga07-in.huawei.com ([45.249.212.35]:13341 "EHLO szxga07-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229573AbhBLDZE (ORCPT ); Thu, 11 Feb 2021 22:25:04 -0500 Received: from DGGEMS409-HUB.china.huawei.com (unknown [172.30.72.58]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4DcJjC6CfXz7j4v; Fri, 12 Feb 2021 11:22:55 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS409-HUB.china.huawei.com (10.3.19.209) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:24:14 +0800 From: Huazhong Tan To: , CC: , , , , , Huazhong Tan Subject: [PATCH V2 net-next 11/13] net: hns3: refactor out hclge_set_rss_tuple() Date: Fri, 12 Feb 2021 11:24:15 +0800 Message-ID: <20210212032417.13076-3-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032417.13076-1-tanhuazhong@huawei.com> References: <20210212032417.13076-1-tanhuazhong@huawei.com> 
MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org To make it more readable and maintainable, split hclge_set_rss_tuple() into two parts. Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_main.c | 42 +++++++++++++------ 1 file changed, 29 insertions(+), 13 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index 17090c2b6c8b..47a7115fdb5d 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -4501,22 +4501,12 @@ static u8 hclge_get_rss_hash_bits(struct ethtool_rxnfc *nfc) return hash_sets; } -static int hclge_set_rss_tuple(struct hnae3_handle *handle, - struct ethtool_rxnfc *nfc) +static int hclge_init_rss_tuple_cmd(struct hclge_vport *vport, + struct ethtool_rxnfc *nfc, + struct hclge_rss_input_tuple_cmd *req) { - struct hclge_vport *vport = hclge_get_vport(handle); struct hclge_dev *hdev = vport->back; - struct hclge_rss_input_tuple_cmd *req; - struct hclge_desc desc; u8 tuple_sets; - int ret; - - if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST | - RXH_L4_B_0_1 | RXH_L4_B_2_3)) - return -EINVAL; - - req = (struct hclge_rss_input_tuple_cmd *)desc.data; - hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RSS_INPUT_TUPLE, false); req->ipv4_tcp_en = vport->rss_tuple_sets.ipv4_tcp_en; req->ipv4_udp_en = vport->rss_tuple_sets.ipv4_udp_en; @@ -4561,6 +4551,32 @@ static int hclge_set_rss_tuple(struct hnae3_handle *handle, return -EINVAL; } + return 0; +} + +static int hclge_set_rss_tuple(struct hnae3_handle *handle, + struct ethtool_rxnfc *nfc) +{ + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + struct hclge_rss_input_tuple_cmd *req; + struct hclge_desc desc; + int ret; + + if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST | + RXH_L4_B_0_1 | RXH_L4_B_2_3)) + return -EINVAL; + + req = (struct hclge_rss_input_tuple_cmd *)desc.data; + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RSS_INPUT_TUPLE, false); + + ret = hclge_init_rss_tuple_cmd(vport, nfc, req); + if (ret) { + dev_err(&hdev->pdev->dev, + "failed to init rss tuple cmd, ret = %d\n", ret); + return ret; + } + ret = hclge_cmd_send(&hdev->hw, &desc, 1); if (ret) { dev_err(&hdev->pdev->dev, From patchwork Fri Feb 12 03:24:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084721 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3416FC433DB for ; Fri, 12 Feb 2021 03:25:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 093A164E3C for ; Fri, 12 Feb 2021 03:25:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229943AbhBLDZS (ORCPT ); Thu, 11 Feb 2021 22:25:18 -0500 Received: from szxga07-in.huawei.com ([45.249.212.35]:13342 "EHLO 
szxga07-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229674AbhBLDZE (ORCPT ); Thu, 11 Feb 2021 22:25:04 -0500 Received: from DGGEMS409-HUB.china.huawei.com (unknown [172.30.72.58]) by szxga07-in.huawei.com (SkyGuard) with ESMTP id 4DcJjC6Tbzz7jqq; Fri, 12 Feb 2021 11:22:55 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS409-HUB.china.huawei.com (10.3.19.209) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:24:15 +0800 From: Huazhong Tan To: , CC: , , , , , Huazhong Tan Subject: [PATCH V2 net-next 12/13] net: hns3: refactor out hclgevf_set_rss_tuple() Date: Fri, 12 Feb 2021 11:24:16 +0800 Message-ID: <20210212032417.13076-4-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032417.13076-1-tanhuazhong@huawei.com> References: <20210212032417.13076-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org To make it more readable and maintainable, split hclgevf_set_rss_tuple() into two parts. Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3vf/hclgevf_main.c | 47 +++++++++++++------ 1 file changed, 32 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c index c4ac2b9771e8..700e068764c8 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c @@ -873,25 +873,13 @@ static u8 hclgevf_get_rss_hash_bits(struct ethtool_rxnfc *nfc) return hash_sets; } -static int hclgevf_set_rss_tuple(struct hnae3_handle *handle, - struct ethtool_rxnfc *nfc) +static int hclgevf_init_rss_tuple_cmd(struct hnae3_handle *handle, + struct ethtool_rxnfc *nfc, + struct hclgevf_rss_input_tuple_cmd *req) { struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; - struct hclgevf_rss_input_tuple_cmd *req; - struct hclgevf_desc desc; u8 tuple_sets; - int ret; - - if (hdev->ae_dev->dev_version < HNAE3_DEVICE_VERSION_V2) - return -EOPNOTSUPP; - - if (nfc->data & - ~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3)) - return -EINVAL; - - req = (struct hclgevf_rss_input_tuple_cmd *)desc.data; - hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_INPUT_TUPLE, false); req->ipv4_tcp_en = rss_cfg->rss_tuple_sets.ipv4_tcp_en; req->ipv4_udp_en = rss_cfg->rss_tuple_sets.ipv4_udp_en; @@ -936,6 +924,35 @@ static int hclgevf_set_rss_tuple(struct hnae3_handle *handle, return -EINVAL; } + return 0; +} + +static int hclgevf_set_rss_tuple(struct hnae3_handle *handle, + struct ethtool_rxnfc *nfc) +{ + struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); + struct hclgevf_rss_cfg *rss_cfg = &hdev->rss_cfg; + struct hclgevf_rss_input_tuple_cmd *req; + struct hclgevf_desc desc; + int ret; + + if (hdev->ae_dev->dev_version < HNAE3_DEVICE_VERSION_V2) + return -EOPNOTSUPP; + + if (nfc->data & + ~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3)) + return -EINVAL; + + req = (struct hclgevf_rss_input_tuple_cmd *)desc.data; + hclgevf_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_RSS_INPUT_TUPLE, false); + + ret = hclgevf_init_rss_tuple_cmd(handle, nfc, req); + if (ret) { + dev_err(&hdev->pdev->dev, + "failed to init rss tuple cmd, ret = %d\n", ret); + return ret; + } + ret = hclgevf_cmd_send(&hdev->hw, &desc, 1); if (ret) { dev_err(&hdev->pdev->dev, 
From patchwork Fri Feb 12 03:24:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Huazhong Tan X-Patchwork-Id: 12084725 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18853C433DB for ; Fri, 12 Feb 2021 03:25:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DCB0164E3C for ; Fri, 12 Feb 2021 03:25:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229964AbhBLDZX (ORCPT ); Thu, 11 Feb 2021 22:25:23 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12534 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229906AbhBLDZI (ORCPT ); Thu, 11 Feb 2021 22:25:08 -0500 Received: from DGGEMS409-HUB.china.huawei.com (unknown [172.30.72.59]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DcJhs5JmDzMXGy; Fri, 12 Feb 2021 11:22:37 +0800 (CST) Received: from SZA170332453E.china.huawei.com (10.46.104.160) by DGGEMS409-HUB.china.huawei.com (10.3.19.209) with Microsoft SMTP Server id 14.3.498.0; Fri, 12 Feb 2021 11:24:15 +0800 From: Huazhong Tan To: , CC: , , , , , Hao Chen , Huazhong Tan Subject: [PATCH V2 net-next 13/13] net: hns3: refactor out hclge_rm_vport_all_mac_table() Date: Fri, 12 Feb 2021 11:24:17 +0800 Message-ID: <20210212032417.13076-5-tanhuazhong@huawei.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20210212032417.13076-1-tanhuazhong@huawei.com> References: <20210212032417.13076-1-tanhuazhong@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.46.104.160] X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Hao Chen hclge_rm_vport_all_mac_table() is bloated, so split it into separate functions for readability and maintainability. 
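For reference, the split keeps the existing structure of the function: the entries to be removed are first collected onto a temporary list while the mac list lock is held, and the per-entry unsync calls then run on that temporary list with the lock released. The standalone C sketch below only illustrates that collect-under-lock / act-outside-lock pattern; the mac_node type, the pthread mutex, and the build_del_list()/unsync_del_list() names are illustrative stand-ins, not the driver's API.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct mac_node {
	struct mac_node *next;
	unsigned char addr[6];
	int state;			/* 0 = TO_ADD, 1 = ACTIVE */
};

static struct mac_node *mac_list;	/* protected by list_lock */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Phase 1: while holding the lock, move every node that has to go
 * from the live list onto a private deletion list; no hardware
 * (or other slow) work happens inside the critical section.
 */
static struct mac_node *build_del_list(void)
{
	struct mac_node **pp = &mac_list, *del = NULL, *n;

	pthread_mutex_lock(&list_lock);
	while ((n = *pp) != NULL) {
		if (n->state == 1) {	/* ACTIVE: schedule for removal */
			*pp = n->next;
			n->next = del;
			del = n;
		} else {
			pp = &n->next;
		}
	}
	pthread_mutex_unlock(&list_lock);
	return del;
}

/* Phase 2: outside the lock, walk the private list and do the slow
 * per-entry work (a printf here, the hardware unsync in the driver).
 */
static void unsync_del_list(struct mac_node *del)
{
	while (del) {
		struct mac_node *next = del->next;

		printf("unsync %02x:xx:xx:xx:xx:xx\n", del->addr[0]);
		free(del);
		del = next;
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct mac_node *n = calloc(1, sizeof(*n));

		if (!n)
			return 1;
		n->addr[0] = 0x10 + i;
		n->state = i & 1;	/* mix of TO_ADD and ACTIVE */
		n->next = mac_list;
		mac_list = n;
	}
	unsync_del_list(build_del_list());
	return 0;
}

In the patch itself this corresponds to hclge_build_del_list() being called between spin_lock_bh() and spin_unlock_bh(), with hclge_unsync_del_list() invoked after the unlock.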
Signed-off-by: Hao Chen Signed-off-by: Huazhong Tan --- .../hisilicon/hns3/hns3pf/hclge_main.c | 67 ++++++++++++------- 1 file changed, 43 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index 47a7115fdb5d..34b744df6709 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -8353,36 +8353,18 @@ static void hclge_sync_mac_table(struct hclge_dev *hdev) } } -void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list, - enum HCLGE_MAC_ADDR_TYPE mac_type) +static void hclge_build_del_list(struct list_head *list, + bool is_del_list, + struct list_head *tmp_del_list) { - int (*unsync)(struct hclge_vport *vport, const unsigned char *addr); struct hclge_mac_node *mac_cfg, *tmp; - struct hclge_dev *hdev = vport->back; - struct list_head tmp_del_list, *list; - int ret; - - if (mac_type == HCLGE_MAC_ADDR_UC) { - list = &vport->uc_mac_list; - unsync = hclge_rm_uc_addr_common; - } else { - list = &vport->mc_mac_list; - unsync = hclge_rm_mc_addr_common; - } - - INIT_LIST_HEAD(&tmp_del_list); - - if (!is_del_list) - set_bit(vport->vport_id, hdev->vport_config_block); - - spin_lock_bh(&vport->mac_list_lock); list_for_each_entry_safe(mac_cfg, tmp, list, node) { switch (mac_cfg->state) { case HCLGE_MAC_TO_DEL: case HCLGE_MAC_ACTIVE: list_del(&mac_cfg->node); - list_add_tail(&mac_cfg->node, &tmp_del_list); + list_add_tail(&mac_cfg->node, tmp_del_list); break; case HCLGE_MAC_TO_ADD: if (is_del_list) { @@ -8392,10 +8374,18 @@ void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list, break; } } +} - spin_unlock_bh(&vport->mac_list_lock); +static void hclge_unsync_del_list(struct hclge_vport *vport, + int (*unsync)(struct hclge_vport *vport, + const unsigned char *addr), + bool is_del_list, + struct list_head *tmp_del_list) +{ + struct hclge_mac_node *mac_cfg, *tmp; + int ret; - list_for_each_entry_safe(mac_cfg, tmp, &tmp_del_list, node) { + list_for_each_entry_safe(mac_cfg, tmp, tmp_del_list, node) { ret = unsync(vport, mac_cfg->mac_addr); if (!ret || ret == -ENOENT) { /* clear all mac addr from hardware, but remain these @@ -8413,6 +8403,35 @@ void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list, mac_cfg->state = HCLGE_MAC_TO_DEL; } } +} + +void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list, + enum HCLGE_MAC_ADDR_TYPE mac_type) +{ + int (*unsync)(struct hclge_vport *vport, const unsigned char *addr); + struct hclge_dev *hdev = vport->back; + struct list_head tmp_del_list, *list; + + if (mac_type == HCLGE_MAC_ADDR_UC) { + list = &vport->uc_mac_list; + unsync = hclge_rm_uc_addr_common; + } else { + list = &vport->mc_mac_list; + unsync = hclge_rm_mc_addr_common; + } + + INIT_LIST_HEAD(&tmp_del_list); + + if (!is_del_list) + set_bit(vport->vport_id, hdev->vport_config_block); + + spin_lock_bh(&vport->mac_list_lock); + + hclge_build_del_list(list, is_del_list, &tmp_del_list); + + spin_unlock_bh(&vport->mac_list_lock); + + hclge_unsync_del_list(vport, unsync, is_del_list, &tmp_del_list); spin_lock_bh(&vport->mac_list_lock);