From patchwork Mon Dec 12 11:16:36 2022
X-Patchwork-Submitter: Michal Swiatkowski
X-Patchwork-Id: 13071014
From: Michal Swiatkowski
To: intel-wired-lan@lists.osuosl.org
Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com,
 wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com,
 jesse.brandeburg@intel.com, mustafa.ismail@intel.com,
 przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com,
 jacob.e.keller@intel.com, david.m.ertman@intel.com,
 leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com,
 paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org,
 leon@kernel.org, Michal Swiatkowski
Subject: [PATCH net-next v1 01/10] ice: move RDMA init to ice_idc.c
Date: Mon, 12 Dec 2022 12:16:36 +0100
Message-Id: <20221212111645.1198680-2-michal.swiatkowski@linux.intel.com>
In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com>
References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com>

Simplify probe flow by moving all RDMA related code to ice_init_rdma().
Unroll irq allocation if RDMA initialization fails. Implement
ice_deinit_rdma() and use it in remove flow.
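[Editorial note] The shape of this change is the usual paired init/teardown with goto-based
unwind: ice_init_rdma() now owns the ID allocation, vector reservation and auxiliary-device
plug, releasing them in reverse order when a later step fails, while ice_deinit_rdma() mirrors
the teardown for the remove path. Below is a minimal, self-contained sketch of that pattern;
the stub names are illustrative placeholders, not the driver's actual API, and the real code
is in the diff that follows.

/* Sketch of paired init/deinit with goto-based unwind (placeholder names). */
#include <stdio.h>

static int alloc_id(void)        { puts("alloc id");        return 0; }
static void free_id(void)        { puts("free id"); }
static int reserve_vectors(void) { puts("reserve vectors"); return 0; }
static void free_vectors(void)   { puts("free vectors"); }
static int plug_aux_dev(void)    { puts("plug aux dev");    return 0; }
static void unplug_aux_dev(void) { puts("unplug aux dev"); }

static int init_rdma(void)
{
	int err;

	err = alloc_id();
	if (err)
		return err;

	err = reserve_vectors();
	if (err)
		goto err_reserve;	/* unwind only what succeeded so far */

	err = plug_aux_dev();
	if (err)
		goto err_plug;

	return 0;

err_plug:
	free_vectors();
err_reserve:
	free_id();
	return err;
}

/* The remove path releases the same resources in reverse order. */
static void deinit_rdma(void)
{
	unplug_aux_dev();
	free_vectors();
	free_id();
}

int main(void)
{
	if (!init_rdma())
		deinit_rdma();
	return 0;
}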
Signed-off-by: Michal Swiatkowski Acked-by: Dave Ertman --- drivers/net/ethernet/intel/ice/ice.h | 1 + drivers/net/ethernet/intel/ice/ice_idc.c | 52 ++++++++++++++++++++++- drivers/net/ethernet/intel/ice/ice_main.c | 29 +++---------- 3 files changed, 57 insertions(+), 25 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index ea64bcff108a..f461a1b3c100 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -909,6 +909,7 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup); int ice_plug_aux_dev(struct ice_pf *pf); void ice_unplug_aux_dev(struct ice_pf *pf); int ice_init_rdma(struct ice_pf *pf); +void ice_deinit_rdma(struct ice_pf *pf); const char *ice_aq_str(enum ice_aq_err aq_err); bool ice_is_wol_supported(struct ice_hw *hw); void ice_fdir_del_all_fltrs(struct ice_vsi *vsi); diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c index 9bf6fa5ed4c8..2148e49679b1 100644 --- a/drivers/net/ethernet/intel/ice/ice_idc.c +++ b/drivers/net/ethernet/intel/ice/ice_idc.c @@ -6,6 +6,8 @@ #include "ice_lib.h" #include "ice_dcb_lib.h" +static DEFINE_IDA(ice_aux_ida); + /** * ice_get_auxiliary_drv - retrieve iidc_auxiliary_drv struct * @pf: pointer to PF struct @@ -248,6 +250,17 @@ static int ice_reserve_rdma_qvector(struct ice_pf *pf) return 0; } +/** + * ice_free_rdma_qvector - free vector resources reserved for RDMA driver + * @pf: board private structure to initialize + */ +static void ice_free_rdma_qvector(struct ice_pf *pf) +{ + pf->num_avail_sw_msix -= pf->num_rdma_msix; + ice_free_res(pf->irq_tracker, pf->rdma_base_vector, + ICE_RES_RDMA_VEC_ID); +} + /** * ice_adev_release - function to be mapped to AUX dev's release op * @dev: pointer to device to free @@ -334,12 +347,47 @@ int ice_init_rdma(struct ice_pf *pf) struct device *dev = &pf->pdev->dev; int ret; + if (!ice_is_rdma_ena(pf)) { + dev_warn(dev, "RDMA is not supported on this device\n"); + return 0; + } + + pf->aux_idx = ida_alloc(&ice_aux_ida, GFP_KERNEL); + if (pf->aux_idx < 0) { + dev_err(dev, "Failed to allocate device ID for AUX driver\n"); + return -ENOMEM; + } + /* Reserve vector resources */ ret = ice_reserve_rdma_qvector(pf); if (ret < 0) { dev_err(dev, "failed to reserve vectors for RDMA\n"); - return ret; + goto err_reserve_rdma_qvector; } pf->rdma_mode |= IIDC_RDMA_PROTOCOL_ROCEV2; - return ice_plug_aux_dev(pf); + ret = ice_plug_aux_dev(pf); + if (ret) + goto err_plug_aux_dev; + return 0; + +err_plug_aux_dev: + ice_free_rdma_qvector(pf); +err_reserve_rdma_qvector: + pf->adev = NULL; + ida_free(&ice_aux_ida, pf->aux_idx); + return ret; +} + +/** + * ice_deinit_rdma - deinitialize RDMA on PF + * @pf: ptr to ice_pf + */ +void ice_deinit_rdma(struct ice_pf *pf) +{ + if (!ice_is_rdma_ena(pf)) + return; + + ice_unplug_aux_dev(pf); + ice_free_rdma_qvector(pf); + ida_free(&ice_aux_ida, pf->aux_idx); } diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index d01d1073ffec..59a88c00b91d 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -44,7 +44,6 @@ MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all), hw debug_mask (0x8XXXX MODULE_PARM_DESC(debug, "netif level (0=none,...,16=all)"); #endif /* !CONFIG_DYNAMIC_DEBUG */ -static DEFINE_IDA(ice_aux_ida); DEFINE_STATIC_KEY_FALSE(ice_xdp_locking_key); EXPORT_SYMBOL(ice_xdp_locking_key); @@ -5011,30 +5010,16 @@ ice_probe(struct pci_dev *pdev, const 
struct pci_device_id __always_unused *ent) /* ready to go, so clear down state bit */ clear_bit(ICE_DOWN, pf->state); - if (ice_is_rdma_ena(pf)) { - pf->aux_idx = ida_alloc(&ice_aux_ida, GFP_KERNEL); - if (pf->aux_idx < 0) { - dev_err(dev, "Failed to allocate device ID for AUX driver\n"); - err = -ENOMEM; - goto err_devlink_reg_param; - } - - err = ice_init_rdma(pf); - if (err) { - dev_err(dev, "Failed to initialize RDMA: %d\n", err); - err = -EIO; - goto err_init_aux_unroll; - } - } else { - dev_warn(dev, "RDMA is not supported on this device\n"); + err = ice_init_rdma(pf); + if (err) { + dev_err(dev, "Failed to initialize RDMA: %d\n", err); + err = -EIO; + goto err_devlink_reg_param; } ice_devlink_register(pf); return 0; -err_init_aux_unroll: - pf->adev = NULL; - ida_free(&ice_aux_ida, pf->aux_idx); err_devlink_reg_param: ice_devlink_unregister_params(pf); err_netdev_reg: @@ -5152,9 +5137,7 @@ static void ice_remove(struct pci_dev *pdev) ice_service_task_stop(pf); ice_aq_cancel_waiting_tasks(pf); - ice_unplug_aux_dev(pf); - if (pf->aux_idx >= 0) - ida_free(&ice_aux_ida, pf->aux_idx); + ice_deinit_rdma(pf); ice_devlink_unregister_params(pf); set_bit(ICE_DOWN, pf->state); From patchwork Mon Dec 12 11:16:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071015 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A03D7C4332F for ; Mon, 12 Dec 2022 11:35:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232200AbiLLLfo (ORCPT ); Mon, 12 Dec 2022 06:35:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42332 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232090AbiLLLfR (ORCPT ); Mon, 12 Dec 2022 06:35:17 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DD549BC0F for ; Mon, 12 Dec 2022 03:32:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844775; x=1702380775; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hQRjRJRxu6zvKZ+KBQHMEO2sO57BiWQYMINIf++VJxo=; b=k+qKDPO2hHNJFSybnak5CsdFXWFpLpVDl74zJGejRg48MHgi9N1W1QLh FQyAk/uTPANyIc2YjkUfiP7Gc/7ldNfZzum//+IevXIbuPTmRm3aiPttK tik6rtKifOlQw41mk5mIpWAG3MG2hR2qg6JM4Ge3V1wfpjy+WycO/1jbJ rCb6/N60M8PrHZK3zwP5AgYQzD1p6GqOVE6HGH3blVi6Y2Zuhknwfnlqc 2jNtfc+pCw6U5a0IzEaeVqsfPalxa92TIJHAevt+ThSGJzFUvWVJYPyul kydTSW5YqfM0PMFkT7Smsof4SCyIlqEKuwBurwy9j1KRC/oK7pmDTxTxM A==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861439" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861439" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:32:55 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459697" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459697" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:32:51 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, 
wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 02/10] ice: alloc id for RDMA using xa_array Date: Mon, 12 Dec 2022 12:16:37 +0100 Message-Id: <20221212111645.1198680-3-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Use xa_array instead of deprecated ida to alloc id for RDMA aux driver. Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_idc.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c index 2148e49679b1..36f6d34d5cb5 100644 --- a/drivers/net/ethernet/intel/ice/ice_idc.c +++ b/drivers/net/ethernet/intel/ice/ice_idc.c @@ -6,7 +6,7 @@ #include "ice_lib.h" #include "ice_dcb_lib.h" -static DEFINE_IDA(ice_aux_ida); +static DEFINE_XARRAY_ALLOC1(ice_aux_id); /** * ice_get_auxiliary_drv - retrieve iidc_auxiliary_drv struct @@ -352,8 +352,9 @@ int ice_init_rdma(struct ice_pf *pf) return 0; } - pf->aux_idx = ida_alloc(&ice_aux_ida, GFP_KERNEL); - if (pf->aux_idx < 0) { + ret = xa_alloc(&ice_aux_id, &pf->aux_idx, NULL, XA_LIMIT(1, INT_MAX), + GFP_KERNEL); + if (ret) { dev_err(dev, "Failed to allocate device ID for AUX driver\n"); return -ENOMEM; } @@ -374,7 +375,7 @@ int ice_init_rdma(struct ice_pf *pf) ice_free_rdma_qvector(pf); err_reserve_rdma_qvector: pf->adev = NULL; - ida_free(&ice_aux_ida, pf->aux_idx); + xa_erase(&ice_aux_id, pf->aux_idx); return ret; } @@ -389,5 +390,5 @@ void ice_deinit_rdma(struct ice_pf *pf) ice_unplug_aux_dev(pf); ice_free_rdma_qvector(pf); - ida_free(&ice_aux_ida, pf->aux_idx); + xa_erase(&ice_aux_id, pf->aux_idx); } From patchwork Mon Dec 12 11:16:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071016 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34837C4332F for ; Mon, 12 Dec 2022 11:35:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232231AbiLLLfy (ORCPT ); Mon, 12 Dec 2022 06:35:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42742 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232296AbiLLLfT (ORCPT ); Mon, 12 Dec 2022 06:35:19 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7AE879FEB for ; Mon, 12 Dec 2022 03:32:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844779; x=1702380779; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; 
From: Michal Swiatkowski
To: intel-wired-lan@lists.osuosl.org
Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com,
 wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com,
 jesse.brandeburg@intel.com, mustafa.ismail@intel.com,
 przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com,
 jacob.e.keller@intel.com, david.m.ertman@intel.com,
 leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com,
 paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org,
 leon@kernel.org, Michal Swiatkowski
Subject: [PATCH net-next v1 03/10] ice: cleanup in VSI config/deconfig code
Date: Mon, 12 Dec 2022 12:16:38 +0100
Message-Id: <20221212111645.1198680-4-michal.swiatkowski@linux.intel.com>
In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com>
References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com>

Do a few small cleanups:

1) Rename the function to reflect that it doesn't configure everything
   related to the VSI. ice_vsi_cfg_lan() better fits what the function is
   doing; ice_vsi_cfg() can then be used to name a function that configures
   the whole VSI.

2) Remove the unused ethtype field from the VSI. There is no need to set
   ethtype here, because it is never used.

3) Remove the unnecessary check for ICE_VSI_CHNL. There is a check for
   ICE_VSI_CHNL in ice_vsi_get_qs(), so there is no need to check it before
   calling the function.

4) Simplify the ice_vsi_alloc() call. There is no need to check the VSI type
   before calling ice_vsi_alloc(). For ICE_VSI_CHNL, vf is always NULL
   (ice_vsi_setup() is called with vf=NULL); for ICE_VSI_VF or ICE_VSI_CTRL,
   ch is always NULL; and for other VSI types, ch and vf are always NULL.

5) Remove the unnecessary call to ice_vsi_dis_irq(). ice_vsi_dis_irq() is
   already called in the ice_vsi_close() flow (ice_vsi_close() ->
   ice_vsi_down() -> ice_vsi_dis_irq()).

6) Don't remove specific filters in release. All HW filters are removed in
   ice_fltr_remove_all(), which is always called in the VSI release flow, so
   there is no need to remove only the ethertype filters beforehand.

7) Rename ice_vsi_clear() to ice_vsi_free(). As ice_vsi_clear() only frees
   memory allocated in ice_vsi_alloc(), rename it to ice_vsi_free(), which
   better describes what the function does.

8) Free the coalesce parameter in rebuild. There is a potential memory leak
   if the VSI LAN queue configuration fails; free coalesce to avoid it.
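[Editorial note] Item 8 is the one functional fix in this list: the coalesce buffer allocated
at the top of the rebuild path was not released on the early-return branch taken when the LAN
queue configuration fails and a PF reset is scheduled instead (see the kfree(coalesce) added
before ice_schedule_reset() in the diff). A minimal, self-contained sketch of that leak class
and its fix follows; the names are placeholders, not the driver's structures.

/* Every exit path after the allocation must release the buffer,
 * including early returns taken on a secondary failure (placeholder names).
 */
#include <stdlib.h>

struct coalesce_stored { int itr; };

static int rebuild(int nvectors, int cfg_fails)
{
	struct coalesce_stored *coalesce;

	coalesce = calloc(nvectors, sizeof(*coalesce));
	if (!coalesce)
		return -1;

	if (cfg_fails) {
		/* Early return: without this free() the buffer leaks,
		 * which is the class of bug item 8 fixes.
		 */
		free(coalesce);
		return 1;
	}

	/* ... restore per-vector coalesce settings ... */
	free(coalesce);
	return 0;
}

int main(void)
{
	rebuild(4, 1);
	return 0;
}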
Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice.h | 3 +- drivers/net/ethernet/intel/ice/ice_ethtool.c | 2 +- drivers/net/ethernet/intel/ice/ice_lib.c | 51 +++++++------------- drivers/net/ethernet/intel/ice/ice_lib.h | 2 +- drivers/net/ethernet/intel/ice/ice_main.c | 12 ++--- 5 files changed, 26 insertions(+), 44 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index f461a1b3c100..70a9609f1b80 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -354,7 +354,6 @@ struct ice_vsi { struct ice_vf *vf; /* VF associated with this VSI */ - u16 ethtype; /* Ethernet protocol for pause frame */ u16 num_gfltr; u16 num_bfltr; @@ -891,7 +890,7 @@ ice_fetch_u64_stats_per_ring(struct u64_stats_sync *syncp, int ice_up(struct ice_vsi *vsi); int ice_down(struct ice_vsi *vsi); int ice_down_up(struct ice_vsi *vsi); -int ice_vsi_cfg(struct ice_vsi *vsi); +int ice_vsi_cfg_lan(struct ice_vsi *vsi); struct ice_vsi *ice_lb_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi); int ice_vsi_determine_xdp_res(struct ice_vsi *vsi); int ice_prepare_xdp_rings(struct ice_vsi *vsi, struct bpf_prog *prog); diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 626480677cc1..63b7568a9d72 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -656,7 +656,7 @@ static int ice_lbtest_prepare_rings(struct ice_vsi *vsi) if (status) goto err_setup_rx_ring; - status = ice_vsi_cfg(vsi); + status = ice_vsi_cfg_lan(vsi); if (status) goto err_setup_rx_ring; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 703f73e54561..a7225de4a1e1 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -348,7 +348,7 @@ static void ice_vsi_free_arrays(struct ice_vsi *vsi) } /** - * ice_vsi_clear - clean up and deallocate the provided VSI + * ice_vsi_free - clean up and deallocate the provided VSI * @vsi: pointer to VSI being cleared * * This deallocates the VSI's queue resources, removes it from the PF's @@ -356,7 +356,7 @@ static void ice_vsi_free_arrays(struct ice_vsi *vsi) * * Returns 0 on success, negative on failure */ -int ice_vsi_clear(struct ice_vsi *vsi) +int ice_vsi_free(struct ice_vsi *vsi) { struct ice_pf *pf = NULL; struct device *dev; @@ -2668,12 +2668,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, struct ice_vsi *vsi; int ret, i; - if (vsi_type == ICE_VSI_CHNL) - vsi = ice_vsi_alloc(pf, vsi_type, ch, NULL); - else if (vsi_type == ICE_VSI_VF || vsi_type == ICE_VSI_CTRL) - vsi = ice_vsi_alloc(pf, vsi_type, NULL, vf); - else - vsi = ice_vsi_alloc(pf, vsi_type, NULL, NULL); + vsi = ice_vsi_alloc(pf, vsi_type, ch, vf); if (!vsi) { dev_err(dev, "could not allocate VSI\n"); @@ -2682,17 +2677,13 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, vsi->port_info = pi; vsi->vsw = pf->first_sw; - if (vsi->type == ICE_VSI_PF) - vsi->ethtype = ETH_P_PAUSE; ice_alloc_fd_res(vsi); - if (vsi_type != ICE_VSI_CHNL) { - if (ice_vsi_get_qs(vsi)) { - dev_err(dev, "Failed to allocate queues. vsi->idx = %d\n", - vsi->idx); - goto unroll_vsi_alloc; - } + if (ice_vsi_get_qs(vsi)) { + dev_err(dev, "Failed to allocate queues. 
vsi->idx = %d\n", + vsi->idx); + goto unroll_vsi_alloc; } /* set RSS capabilities */ @@ -2857,7 +2848,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, unroll_vsi_alloc: if (vsi_type == ICE_VSI_VF) ice_enable_lag(pf->lag); - ice_vsi_clear(vsi); + ice_vsi_free(vsi); return NULL; } @@ -3181,9 +3172,6 @@ int ice_vsi_release(struct ice_vsi *vsi) if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) ice_rss_clean(vsi); - /* Disable VSI and free resources */ - if (vsi->type != ICE_VSI_LB) - ice_vsi_dis_irq(vsi); ice_vsi_close(vsi); /* SR-IOV determines needed MSIX resources all at once instead of per @@ -3199,18 +3187,12 @@ int ice_vsi_release(struct ice_vsi *vsi) pf->num_avail_sw_msix += vsi->num_q_vectors; } - if (!ice_is_safe_mode(pf)) { - if (vsi->type == ICE_VSI_PF) { - ice_fltr_remove_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX, - ICE_DROP_PACKET); - ice_cfg_sw_lldp(vsi, true, false); - /* The Rx rule will only exist to remove if the LLDP FW - * engine is currently stopped - */ - if (!test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags)) - ice_cfg_sw_lldp(vsi, false, false); - } - } + /* The Rx rule will only exist to remove if the LLDP FW + * engine is currently stopped + */ + if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF && + !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags)) + ice_cfg_sw_lldp(vsi, false, false); if (ice_is_vsi_dflt_vsi(vsi)) ice_clear_dflt_vsi(vsi); @@ -3247,7 +3229,7 @@ int ice_vsi_release(struct ice_vsi *vsi) * for ex: during rmmod. */ if (!ice_is_reset_in_progress(pf->state)) - ice_vsi_clear(vsi); + ice_vsi_free(vsi); return 0; } @@ -3601,6 +3583,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) ret = -EIO; goto err_vectors; } else { + kfree(coalesce); return ice_schedule_reset(pf, ICE_RESET_PFR); } } @@ -3621,7 +3604,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) kfree(coalesce); return ret; err_vsi: - ice_vsi_clear(vsi); + ice_vsi_free(vsi); set_bit(ICE_RESET_FAILED, pf->state); kfree(coalesce); return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index dcdf69a693e9..6203114b805c 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -42,7 +42,7 @@ void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create); int ice_set_link(struct ice_vsi *vsi, bool ena); void ice_vsi_delete(struct ice_vsi *vsi); -int ice_vsi_clear(struct ice_vsi *vsi); +int ice_vsi_free(struct ice_vsi *vsi); int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 59a88c00b91d..bfab9a713533 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -6206,12 +6206,12 @@ static int ice_vsi_vlan_setup(struct ice_vsi *vsi) } /** - * ice_vsi_cfg - Setup the VSI + * ice_vsi_cfg_lan - Setup the VSI lan related config * @vsi: the VSI being configured * * Return 0 on success and negative value on error */ -int ice_vsi_cfg(struct ice_vsi *vsi) +int ice_vsi_cfg_lan(struct ice_vsi *vsi) { int err; @@ -6428,7 +6428,7 @@ int ice_up(struct ice_vsi *vsi) { int err; - err = ice_vsi_cfg(vsi); + err = ice_vsi_cfg_lan(vsi); if (!err) err = ice_up_complete(vsi); @@ -6996,7 +6996,7 @@ int ice_vsi_open_ctrl(struct ice_vsi *vsi) if (err) goto err_setup_rx; - err = ice_vsi_cfg(vsi); + err = ice_vsi_cfg_lan(vsi); if (err) goto err_setup_rx; @@ -7050,7 +7050,7 @@ int ice_vsi_open(struct ice_vsi *vsi) if (err) goto err_setup_rx; - err = ice_vsi_cfg(vsi); + err = 
ice_vsi_cfg_lan(vsi); if (err) goto err_setup_rx; @@ -8484,7 +8484,7 @@ static void ice_remove_q_channels(struct ice_vsi *vsi, bool rem_fltr) ice_vsi_delete(ch->ch_vsi); /* Delete VSI from PF and HW VSI arrays */ - ice_vsi_clear(ch->ch_vsi); + ice_vsi_free(ch->ch_vsi); /* free the channel */ kfree(ch); From patchwork Mon Dec 12 11:16:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071017 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B1C3AC4332F for ; Mon, 12 Dec 2022 11:36:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232130AbiLLLgE (ORCPT ); Mon, 12 Dec 2022 06:36:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43032 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231556AbiLLLfZ (ORCPT ); Mon, 12 Dec 2022 06:35:25 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A2109FE0 for ; Mon, 12 Dec 2022 03:33:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844784; x=1702380784; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=PeMi+4B6TjJc2u6sW9VQeao9Kzg2jXvD5ui9Dc/jNmc=; b=Q340ajdouuuoYHKhHxgwxVaWg1BayzXdyEhzdZVhQq1fcKQJhqZGmDbu g0Ryy0TR+onB3do12FVvUaYxuYZeFDDb/TTxiC672QmxKG/65sfITuWKV 8kfLGAzuQN4HvQSOEigOMQJxE4zRgSoF5wR100q+5OWnyFCQ1fqcaDiKM LWA4dljjjlq1EYoIh9Z2k+ILta8a8yxl6HFELyte+0P1EEA0mb47MVzJ5 kSU4aQVIeQ04IY8nZV8E4sa9dB4PxfmheC7KLOi9IZ4VyIZEDSCAlfWSa GXV/eAC/uLwBlGr8H0Eo2zSA4dP4Xk7aCs8k69HA0gFK7GvpG3yAawXro w==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861467" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861467" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:03 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459749" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459749" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:32:59 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 04/10] ice: split ice_vsi_setup into smaller functions Date: Mon, 12 Dec 2022 12:16:39 +0100 Message-Id: <20221212111645.1198680-5-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: 
kuba@kernel.org Main goal is to reuse the same functions in VSI config and rebuild paths. To do this split ice_vsi_setup into smaller pieces and reuse it during rebuild. ice_vsi_alloc() should only alloc memory, not set the default values for VSI. Move setting defaults to separate function. This will allow config of already allocated VSI, for example in reload path. The path is mostly moving code around without introducing new functionality. Functions ice_vsi_cfg() and ice_vsi_decfg() were added, but they are using code that already exist. Use flag to pass information about VSI initialization during rebuild instead of using boolean value. Signed-off-by: Michal Swiatkowski Co-developed-by: Jacob Keller Signed-off-by: Jacob Keller --- drivers/net/ethernet/intel/ice/ice_lib.c | 914 +++++++++----------- drivers/net/ethernet/intel/ice/ice_lib.h | 7 +- drivers/net/ethernet/intel/ice/ice_main.c | 12 +- drivers/net/ethernet/intel/ice/ice_vf_lib.c | 2 +- 4 files changed, 432 insertions(+), 503 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index a7225de4a1e1..9549290c76ab 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -347,6 +347,106 @@ static void ice_vsi_free_arrays(struct ice_vsi *vsi) } } +/** + * ice_vsi_free_stats - Free the ring statistics structures + * @vsi: VSI pointer + */ +static void ice_vsi_free_stats(struct ice_vsi *vsi) +{ + struct ice_vsi_stats *vsi_stat; + struct ice_pf *pf = vsi->back; + int i; + + if (vsi->type == ICE_VSI_CHNL) + return; + if (!pf->vsi_stats) + return; + + vsi_stat = pf->vsi_stats[vsi->idx]; + if (!vsi_stat) + return; + + ice_for_each_alloc_txq(vsi, i) { + if (vsi_stat->tx_ring_stats[i]) { + kfree_rcu(vsi_stat->tx_ring_stats[i], rcu); + WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL); + } + } + + ice_for_each_alloc_rxq(vsi, i) { + if (vsi_stat->rx_ring_stats[i]) { + kfree_rcu(vsi_stat->rx_ring_stats[i], rcu); + WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL); + } + } + + kfree(vsi_stat->tx_ring_stats); + kfree(vsi_stat->rx_ring_stats); + kfree(vsi_stat); + pf->vsi_stats[vsi->idx] = NULL; +} + +/** + * ice_vsi_alloc_ring_stats - Allocates Tx and Rx ring stats for the VSI + * @vsi: VSI which is having stats allocated + */ +static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi) +{ + struct ice_ring_stats **tx_ring_stats; + struct ice_ring_stats **rx_ring_stats; + struct ice_vsi_stats *vsi_stats; + struct ice_pf *pf = vsi->back; + u16 i; + + vsi_stats = pf->vsi_stats[vsi->idx]; + tx_ring_stats = vsi_stats->tx_ring_stats; + rx_ring_stats = vsi_stats->rx_ring_stats; + + /* Allocate Tx ring stats */ + ice_for_each_alloc_txq(vsi, i) { + struct ice_ring_stats *ring_stats; + struct ice_tx_ring *ring; + + ring = vsi->tx_rings[i]; + ring_stats = tx_ring_stats[i]; + + if (!ring_stats) { + ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); + if (!ring_stats) + goto err_out; + + WRITE_ONCE(tx_ring_stats[i], ring_stats); + } + + ring->ring_stats = ring_stats; + } + + /* Allocate Rx ring stats */ + ice_for_each_alloc_rxq(vsi, i) { + struct ice_ring_stats *ring_stats; + struct ice_rx_ring *ring; + + ring = vsi->rx_rings[i]; + ring_stats = rx_ring_stats[i]; + + if (!ring_stats) { + ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); + if (!ring_stats) + goto err_out; + + WRITE_ONCE(rx_ring_stats[i], ring_stats); + } + + ring->ring_stats = ring_stats; + } + + return 0; + +err_out: + ice_vsi_free_stats(vsi); + return -ENOMEM; +} + /** * ice_vsi_free - clean up 
and deallocate the provided VSI * @vsi: pointer to VSI being cleared @@ -384,6 +484,7 @@ int ice_vsi_free(struct ice_vsi *vsi) if (vsi->idx < pf->next_vsi && vsi->type == ICE_VSI_CTRL && vsi->vf) pf->next_vsi = vsi->idx; + ice_vsi_free_stats(vsi); ice_vsi_free_arrays(vsi); mutex_unlock(&pf->sw_mutex); devm_kfree(dev, vsi); @@ -490,9 +591,57 @@ static int ice_vsi_alloc_stat_arrays(struct ice_vsi *vsi) return -ENOMEM; } +/** + * ice_vsi_alloc_def - set default values for already allocated VSI + * @vsi: ptr to VSI + * @vf: VF for ICE_VSI_VF and ICE_VSI_CTRL + * @ch: ptr to channel + */ +static int +ice_vsi_alloc_def(struct ice_vsi *vsi, struct ice_vf *vf, + struct ice_channel *ch) +{ + if (vsi->type != ICE_VSI_CHNL) { + ice_vsi_set_num_qs(vsi, vf); + if (ice_vsi_alloc_arrays(vsi)) + return -ENOMEM; + } + + switch (vsi->type) { + case ICE_VSI_SWITCHDEV_CTRL: + /* Setup eswitch MSIX irq handler for VSI */ + vsi->irq_handler = ice_eswitch_msix_clean_rings; + break; + case ICE_VSI_PF: + /* Setup default MSIX irq handler for VSI */ + vsi->irq_handler = ice_msix_clean_rings; + break; + case ICE_VSI_CTRL: + /* Setup ctrl VSI MSIX irq handler */ + vsi->irq_handler = ice_msix_clean_ctrl_vsi; + break; + case ICE_VSI_CHNL: + if (!ch) + return -EINVAL; + + vsi->num_rxq = ch->num_rxq; + vsi->num_txq = ch->num_txq; + vsi->next_base_q = ch->base_q; + break; + case ICE_VSI_VF: + break; + default: + ice_vsi_free_arrays(vsi); + return -EINVAL; + } + + return 0; +} + /** * ice_vsi_alloc - Allocates the next available struct VSI in the PF * @pf: board private structure + * @pi: pointer to the port_info instance * @vsi_type: type of VSI * @ch: ptr to channel * @vf: VF for ICE_VSI_VF and ICE_VSI_CTRL @@ -504,8 +653,9 @@ static int ice_vsi_alloc_stat_arrays(struct ice_vsi *vsi) * returns a pointer to a VSI on success, NULL on failure. */ static struct ice_vsi * -ice_vsi_alloc(struct ice_pf *pf, enum ice_vsi_type vsi_type, - struct ice_channel *ch, struct ice_vf *vf) +ice_vsi_alloc(struct ice_pf *pf, struct ice_port_info *pi, + enum ice_vsi_type vsi_type, struct ice_channel *ch, + struct ice_vf *vf) { struct device *dev = ice_pf_to_dev(pf); struct ice_vsi *vsi = NULL; @@ -531,61 +681,11 @@ ice_vsi_alloc(struct ice_pf *pf, enum ice_vsi_type vsi_type, vsi->type = vsi_type; vsi->back = pf; + vsi->port_info = pi; + /* For VSIs which don't have a connected VF, this will be NULL */ + vsi->vf = vf; set_bit(ICE_VSI_DOWN, vsi->state); - if (vsi_type == ICE_VSI_VF) - ice_vsi_set_num_qs(vsi, vf); - else if (vsi_type != ICE_VSI_CHNL) - ice_vsi_set_num_qs(vsi, NULL); - - switch (vsi->type) { - case ICE_VSI_SWITCHDEV_CTRL: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - - /* Setup eswitch MSIX irq handler for VSI */ - vsi->irq_handler = ice_eswitch_msix_clean_rings; - break; - case ICE_VSI_PF: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - - /* Setup default MSIX irq handler for VSI */ - vsi->irq_handler = ice_msix_clean_rings; - break; - case ICE_VSI_CTRL: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - - /* Setup ctrl VSI MSIX irq handler */ - vsi->irq_handler = ice_msix_clean_ctrl_vsi; - - /* For the PF control VSI this is NULL, for the VF control VSI - * this will be the first VF to allocate it. 
- */ - vsi->vf = vf; - break; - case ICE_VSI_VF: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - vsi->vf = vf; - break; - case ICE_VSI_CHNL: - if (!ch) - goto err_rings; - vsi->num_rxq = ch->num_rxq; - vsi->num_txq = ch->num_txq; - vsi->next_base_q = ch->base_q; - break; - case ICE_VSI_LB: - if (ice_vsi_alloc_arrays(vsi)) - goto err_rings; - break; - default: - dev_warn(dev, "Unknown VSI type %d\n", vsi->type); - goto unlock_pf; - } - if (vsi->type == ICE_VSI_CTRL && !vf) { /* Use the last VSI slot as the index for PF control VSI */ vsi->idx = pf->num_alloc_vsi - 1; @@ -604,15 +704,6 @@ ice_vsi_alloc(struct ice_pf *pf, enum ice_vsi_type vsi_type, if (vsi->type == ICE_VSI_CTRL && vf) vf->ctrl_vsi_idx = vsi->idx; - /* allocate memory for Tx/Rx ring stat pointers */ - if (ice_vsi_alloc_stat_arrays(vsi)) - goto err_rings; - - goto unlock_pf; - -err_rings: - devm_kfree(dev, vsi); - vsi = NULL; unlock_pf: mutex_unlock(&pf->sw_mutex); return vsi; @@ -1177,12 +1268,12 @@ ice_chnl_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) /** * ice_vsi_init - Create and initialize a VSI * @vsi: the VSI being configured - * @init_vsi: is this call creating a VSI + * @init_vsi: flag, tell if VSI need to be initialized * * This initializes a VSI context depending on the VSI type to be added and * passes it down to the add_vsi aq command to create a new VSI. */ -static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) +static int ice_vsi_init(struct ice_vsi *vsi, int init_vsi) { struct ice_pf *pf = vsi->back; struct ice_hw *hw = &pf->hw; @@ -1244,7 +1335,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) /* if updating VSI context, make sure to set valid_section: * to indicate which section of VSI context being updated */ - if (!init_vsi) + if (!(init_vsi & ICE_VSI_FLAG_INIT)) ctxt->info.valid_sections |= cpu_to_le16(ICE_AQ_VSI_PROP_Q_OPT_VALID); } @@ -1257,7 +1348,8 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) if (ret) goto out; - if (!init_vsi) /* means VSI being updated */ + if (!(init_vsi & ICE_VSI_FLAG_INIT)) + /* means VSI being updated */ /* must to indicate which section of VSI context are * being modified */ @@ -1272,7 +1364,7 @@ static int ice_vsi_init(struct ice_vsi *vsi, bool init_vsi) cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID); } - if (init_vsi) { + if (init_vsi & ICE_VSI_FLAG_INIT) { ret = ice_add_vsi(hw, vsi->idx, ctxt, NULL); if (ret) { dev_err(dev, "Add VSI failed, err %d\n", ret); @@ -1584,142 +1676,42 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi) } /** - * ice_vsi_free_stats - Free the ring statistics structures - * @vsi: VSI pointer + * ice_vsi_manage_rss_lut - disable/enable RSS + * @vsi: the VSI being changed + * @ena: boolean value indicating if this is an enable or disable request + * + * In the event of disable request for RSS, this function will zero out RSS + * LUT, while in the event of enable request for RSS, it will reconfigure RSS + * LUT. 
*/ -static void ice_vsi_free_stats(struct ice_vsi *vsi) +void ice_vsi_manage_rss_lut(struct ice_vsi *vsi, bool ena) { - struct ice_vsi_stats *vsi_stat; - struct ice_pf *pf = vsi->back; - int i; - - if (vsi->type == ICE_VSI_CHNL) - return; - if (!pf->vsi_stats) - return; + u8 *lut; - vsi_stat = pf->vsi_stats[vsi->idx]; - if (!vsi_stat) + lut = kzalloc(vsi->rss_table_size, GFP_KERNEL); + if (!lut) return; - ice_for_each_alloc_txq(vsi, i) { - if (vsi_stat->tx_ring_stats[i]) { - kfree_rcu(vsi_stat->tx_ring_stats[i], rcu); - WRITE_ONCE(vsi_stat->tx_ring_stats[i], NULL); - } - } - - ice_for_each_alloc_rxq(vsi, i) { - if (vsi_stat->rx_ring_stats[i]) { - kfree_rcu(vsi_stat->rx_ring_stats[i], rcu); - WRITE_ONCE(vsi_stat->rx_ring_stats[i], NULL); - } + if (ena) { + if (vsi->rss_lut_user) + memcpy(lut, vsi->rss_lut_user, vsi->rss_table_size); + else + ice_fill_rss_lut(lut, vsi->rss_table_size, + vsi->rss_size); } - kfree(vsi_stat->tx_ring_stats); - kfree(vsi_stat->rx_ring_stats); - kfree(vsi_stat); - pf->vsi_stats[vsi->idx] = NULL; + ice_set_rss_lut(vsi, lut, vsi->rss_table_size); + kfree(lut); } /** - * ice_vsi_alloc_ring_stats - Allocates Tx and Rx ring stats for the VSI - * @vsi: VSI which is having stats allocated + * ice_vsi_cfg_crc_strip - Configure CRC stripping for a VSI + * @vsi: VSI to be configured + * @disable: set to true to have FCS / CRC in the frame data */ -static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi) +void ice_vsi_cfg_crc_strip(struct ice_vsi *vsi, bool disable) { - struct ice_ring_stats **tx_ring_stats; - struct ice_ring_stats **rx_ring_stats; - struct ice_vsi_stats *vsi_stats; - struct ice_pf *pf = vsi->back; - u16 i; - - vsi_stats = pf->vsi_stats[vsi->idx]; - tx_ring_stats = vsi_stats->tx_ring_stats; - rx_ring_stats = vsi_stats->rx_ring_stats; - - /* Allocate Tx ring stats */ - ice_for_each_alloc_txq(vsi, i) { - struct ice_ring_stats *ring_stats; - struct ice_tx_ring *ring; - - ring = vsi->tx_rings[i]; - ring_stats = tx_ring_stats[i]; - - if (!ring_stats) { - ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); - if (!ring_stats) - goto err_out; - - WRITE_ONCE(tx_ring_stats[i], ring_stats); - } - - ring->ring_stats = ring_stats; - } - - /* Allocate Rx ring stats */ - ice_for_each_alloc_rxq(vsi, i) { - struct ice_ring_stats *ring_stats; - struct ice_rx_ring *ring; - - ring = vsi->rx_rings[i]; - ring_stats = rx_ring_stats[i]; - - if (!ring_stats) { - ring_stats = kzalloc(sizeof(*ring_stats), GFP_KERNEL); - if (!ring_stats) - goto err_out; - - WRITE_ONCE(rx_ring_stats[i], ring_stats); - } - - ring->ring_stats = ring_stats; - } - - return 0; - -err_out: - ice_vsi_free_stats(vsi); - return -ENOMEM; -} - -/** - * ice_vsi_manage_rss_lut - disable/enable RSS - * @vsi: the VSI being changed - * @ena: boolean value indicating if this is an enable or disable request - * - * In the event of disable request for RSS, this function will zero out RSS - * LUT, while in the event of enable request for RSS, it will reconfigure RSS - * LUT. 
- */ -void ice_vsi_manage_rss_lut(struct ice_vsi *vsi, bool ena) -{ - u8 *lut; - - lut = kzalloc(vsi->rss_table_size, GFP_KERNEL); - if (!lut) - return; - - if (ena) { - if (vsi->rss_lut_user) - memcpy(lut, vsi->rss_lut_user, vsi->rss_table_size); - else - ice_fill_rss_lut(lut, vsi->rss_table_size, - vsi->rss_size); - } - - ice_set_rss_lut(vsi, lut, vsi->rss_table_size); - kfree(lut); -} - -/** - * ice_vsi_cfg_crc_strip - Configure CRC stripping for a VSI - * @vsi: VSI to be configured - * @disable: set to true to have FCS / CRC in the frame data - */ -void ice_vsi_cfg_crc_strip(struct ice_vsi *vsi, bool disable) -{ - int i; + int i; ice_for_each_rxq(vsi, i) if (disable) @@ -2645,39 +2637,89 @@ static void ice_set_agg_vsi(struct ice_vsi *vsi) } /** - * ice_vsi_setup - Set up a VSI by a given type - * @pf: board private structure - * @pi: pointer to the port_info instance - * @vsi_type: VSI type - * @vf: pointer to VF to which this VSI connects. This field is used primarily - * for the ICE_VSI_VF type. Other VSI types should pass NULL. - * @ch: ptr to channel - * - * This allocates the sw VSI structure and its queue resources. + * ice_free_vf_ctrl_res - Free the VF control VSI resource + * @pf: pointer to PF structure + * @vsi: the VSI to free resources for * - * Returns pointer to the successfully allocated and configured VSI sw struct on - * success, NULL on failure. + * Check if the VF control VSI resource is still in use. If no VF is using it + * any more, release the VSI resource. Otherwise, leave it to be cleaned up + * once no other VF uses it. */ -struct ice_vsi * -ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, - enum ice_vsi_type vsi_type, struct ice_vf *vf, - struct ice_channel *ch) +static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi) +{ + struct ice_vf *vf; + unsigned int bkt; + + rcu_read_lock(); + ice_for_each_vf_rcu(pf, bkt, vf) { + if (vf != vsi->vf && vf->ctrl_vsi_idx != ICE_NO_VSI) { + rcu_read_unlock(); + return; + } + } + rcu_read_unlock(); + + /* No other VFs left that have control VSI. It is now safe to reclaim + * SW interrupts back to the common pool. + */ + ice_free_res(pf->irq_tracker, vsi->base_vector, + ICE_RES_VF_CTRL_VEC_ID); + pf->num_avail_sw_msix += vsi->num_q_vectors; +} + +static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi) { u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; struct device *dev = ice_pf_to_dev(pf); - struct ice_vsi *vsi; int ret, i; - vsi = ice_vsi_alloc(pf, vsi_type, ch, vf); + /* configure VSI nodes based on number of queues and TC's */ + ice_for_each_traffic_class(i) { + if (!(vsi->tc_cfg.ena_tc & BIT(i))) + continue; + + if (vsi->type == ICE_VSI_CHNL) { + if (!vsi->alloc_txq && vsi->num_txq) + max_txqs[i] = vsi->num_txq; + else + max_txqs[i] = pf->num_lan_tx; + } else { + max_txqs[i] = vsi->alloc_txq; + } + } - if (!vsi) { - dev_err(dev, "could not allocate VSI\n"); - return NULL; + dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc); + ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc, + max_txqs); + if (ret) { + dev_err(dev, "VSI %d failed lan queue config, error %d\n", + vsi->vsi_num, ret); + return ret; } - vsi->port_info = pi; + return 0; +} + +/** + * ice_vsi_cfg_def - configure default VSI based on the type + * @vsi: pointer to VSI + * @vf: pointer to VF to which this VSI connects. This field is used primarily + * for the ICE_VSI_VF type. Other VSI types should pass NULL. 
+ * @ch: ptr to channel + */ +static int +ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) +{ + struct device *dev = ice_pf_to_dev(vsi->back); + struct ice_pf *pf = vsi->back; + int ret; + vsi->vsw = pf->first_sw; + ret = ice_vsi_alloc_def(vsi, vf, ch); + if (ret) + return ret; + ice_alloc_fd_res(vsi); if (ice_vsi_get_qs(vsi)) { @@ -2724,6 +2766,14 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, goto unroll_vector_base; ice_vsi_map_rings_to_vectors(vsi); + if (ice_is_xdp_ena_vsi(vsi)) { + ret = ice_vsi_determine_xdp_res(vsi); + if (ret) + goto unroll_vector_base; + ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog); + if (ret) + goto unroll_vector_base; + } /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ if (vsi->type != ICE_VSI_CTRL) @@ -2788,30 +2838,140 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, goto unroll_vsi_init; } - /* configure VSI nodes based on number of queues and TC's */ - ice_for_each_traffic_class(i) { - if (!(vsi->tc_cfg.ena_tc & BIT(i))) - continue; + return 0; - if (vsi->type == ICE_VSI_CHNL) { - if (!vsi->alloc_txq && vsi->num_txq) - max_txqs[i] = vsi->num_txq; - else - max_txqs[i] = pf->num_lan_tx; - } else { - max_txqs[i] = vsi->alloc_txq; - } +unroll_vector_base: + /* reclaim SW interrupts back to the common pool */ + ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx); + pf->num_avail_sw_msix += vsi->num_q_vectors; +unroll_alloc_q_vector: + ice_vsi_free_q_vectors(vsi); +unroll_vsi_init: + ice_vsi_delete(vsi); +unroll_get_qs: + ice_vsi_put_qs(vsi); +unroll_vsi_alloc: + ice_vsi_free_arrays(vsi); + return ret; +} + +/** + * ice_vsi_cfg - configure VSI and tc on it + * @vsi: pointer to VSI + * @vf: pointer to VF to which this VSI connects. This field is used primarily + * for the ICE_VSI_VF type. Other VSI types should pass NULL. + * @ch: ptr to channel + */ +int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) +{ + int ret; + + ret = ice_vsi_cfg_def(vsi, vf, ch); + if (ret) + return ret; + + ret = ice_vsi_cfg_tc_lan(vsi->back, vsi); + if (ret) + ice_vsi_decfg(vsi); + + return ret; +} + +/** + * ice_vsi_decfg - remove all VSI configuration + * @vsi: pointer to VSI + */ +void ice_vsi_decfg(struct ice_vsi *vsi) +{ + struct ice_pf *pf = vsi->back; + int err; + + /* The Rx rule will only exist to remove if the LLDP FW + * engine is currently stopped + */ + if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF && + !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags)) + ice_cfg_sw_lldp(vsi, false, false); + + ice_fltr_remove_all(vsi); + ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); + err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx); + if (err) + dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", + vsi->vsi_num, err); + + if (ice_is_xdp_ena_vsi(vsi)) + /* return value check can be skipped here, it always returns + * 0 if reset is in progress + */ + ice_destroy_xdp_rings(vsi); + + ice_vsi_clear_rings(vsi); + ice_vsi_free_q_vectors(vsi); + ice_vsi_delete(vsi); + ice_vsi_put_qs(vsi); + ice_vsi_free_arrays(vsi); + + /* SR-IOV determines needed MSIX resources all at once instead of per + * VSI since when VFs are spawned we know how many VFs there are and how + * many interrupts each VF needs. SR-IOV MSIX resources are also + * cleared in the same manner. 
+ */ + if (vsi->type == ICE_VSI_CTRL && vsi->vf) { + ice_free_vf_ctrl_res(pf, vsi); + } else if (vsi->type != ICE_VSI_VF) { + /* reclaim SW interrupts back to the common pool */ + ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx); + pf->num_avail_sw_msix += vsi->num_q_vectors; + vsi->base_vector = 0; } - dev_dbg(dev, "vsi->tc_cfg.ena_tc = %d\n", vsi->tc_cfg.ena_tc); - ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc, - max_txqs); - if (ret) { - dev_err(dev, "VSI %d failed lan queue config, error %d\n", - vsi->vsi_num, ret); - goto unroll_clear_rings; + if (vsi->type == ICE_VSI_VF && + vsi->agg_node && vsi->agg_node->valid) + vsi->agg_node->num_vsis--; + if (vsi->agg_node) { + vsi->agg_node->valid = false; + vsi->agg_node->agg_id = 0; + } +} + +/** + * ice_vsi_setup - Set up a VSI by a given type + * @pf: board private structure + * @pi: pointer to the port_info instance + * @vsi_type: VSI type + * @vf: pointer to VF to which this VSI connects. This field is used primarily + * for the ICE_VSI_VF type. Other VSI types should pass NULL. + * @ch: ptr to channel + * + * This allocates the sw VSI structure and its queue resources. + * + * Returns pointer to the successfully allocated and configured VSI sw struct on + * success, NULL on failure. + */ +struct ice_vsi * +ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, + enum ice_vsi_type vsi_type, struct ice_vf *vf, + struct ice_channel *ch) +{ + struct device *dev = ice_pf_to_dev(pf); + struct ice_vsi *vsi; + int ret; + + vsi = ice_vsi_alloc(pf, pi, vsi_type, ch, vf); + if (!vsi) { + dev_err(dev, "could not allocate VSI\n"); + return NULL; } + /* allocate memory for Tx/Rx ring stat pointers */ + if (ice_vsi_alloc_stat_arrays(vsi)) + goto err_alloc; + + ret = ice_vsi_cfg(vsi, vf, ch); + if (ret) + goto err_vsi_cfg; + /* Add switch rule to drop all Tx Flow Control Frames, of look up * type ETHERTYPE from VSIs, and restrict malicious VF from sending * out PAUSE or PFC frames. If enabled, FW can still send FC frames. @@ -2821,31 +2981,20 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, * be dropped so that VFs cannot send LLDP packets to reconfig DCB * settings in the HW. */ - if (!ice_is_safe_mode(pf)) - if (vsi->type == ICE_VSI_PF) { - ice_fltr_add_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX, - ICE_DROP_PACKET); - ice_cfg_sw_lldp(vsi, true, true); - } + if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF) { + ice_fltr_add_eth(vsi, ETH_P_PAUSE, ICE_FLTR_TX, + ICE_DROP_PACKET); + ice_cfg_sw_lldp(vsi, true, true); + } if (!vsi->agg_node) ice_set_agg_vsi(vsi); + return vsi; -unroll_clear_rings: - ice_vsi_clear_rings(vsi); -unroll_vector_base: - /* reclaim SW interrupts back to the common pool */ - ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx); - pf->num_avail_sw_msix += vsi->num_q_vectors; -unroll_alloc_q_vector: - ice_vsi_free_q_vectors(vsi); -unroll_vsi_init: +err_vsi_cfg: ice_vsi_free_stats(vsi); - ice_vsi_delete(vsi); -unroll_get_qs: - ice_vsi_put_qs(vsi); -unroll_vsi_alloc: +err_alloc: if (vsi_type == ICE_VSI_VF) ice_enable_lag(pf->lag); ice_vsi_free(vsi); @@ -3111,37 +3260,6 @@ void ice_napi_del(struct ice_vsi *vsi) netif_napi_del(&vsi->q_vectors[v_idx]->napi); } -/** - * ice_free_vf_ctrl_res - Free the VF control VSI resource - * @pf: pointer to PF structure - * @vsi: the VSI to free resources for - * - * Check if the VF control VSI resource is still in use. If no VF is using it - * any more, release the VSI resource. Otherwise, leave it to be cleaned up - * once no other VF uses it. 
- */ -static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi) -{ - struct ice_vf *vf; - unsigned int bkt; - - rcu_read_lock(); - ice_for_each_vf_rcu(pf, bkt, vf) { - if (vf != vsi->vf && vf->ctrl_vsi_idx != ICE_NO_VSI) { - rcu_read_unlock(); - return; - } - } - rcu_read_unlock(); - - /* No other VFs left that have control VSI. It is now safe to reclaim - * SW interrupts back to the common pool. - */ - ice_free_res(pf->irq_tracker, vsi->base_vector, - ICE_RES_VF_CTRL_VEC_ID); - pf->num_avail_sw_msix += vsi->num_q_vectors; -} - /** * ice_vsi_release - Delete a VSI and free its resources * @vsi: the VSI being removed @@ -3151,7 +3269,6 @@ static void ice_free_vf_ctrl_res(struct ice_pf *pf, struct ice_vsi *vsi) int ice_vsi_release(struct ice_vsi *vsi) { struct ice_pf *pf; - int err; if (!vsi->back) return -ENODEV; @@ -3169,41 +3286,14 @@ int ice_vsi_release(struct ice_vsi *vsi) clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state); } + if (vsi->type == ICE_VSI_PF) + ice_devlink_destroy_pf_port(pf); + if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) ice_rss_clean(vsi); ice_vsi_close(vsi); - - /* SR-IOV determines needed MSIX resources all at once instead of per - * VSI since when VFs are spawned we know how many VFs there are and how - * many interrupts each VF needs. SR-IOV MSIX resources are also - * cleared in the same manner. - */ - if (vsi->type == ICE_VSI_CTRL && vsi->vf) { - ice_free_vf_ctrl_res(pf, vsi); - } else if (vsi->type != ICE_VSI_VF) { - /* reclaim SW interrupts back to the common pool */ - ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx); - pf->num_avail_sw_msix += vsi->num_q_vectors; - } - - /* The Rx rule will only exist to remove if the LLDP FW - * engine is currently stopped - */ - if (!ice_is_safe_mode(pf) && vsi->type == ICE_VSI_PF && - !test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags)) - ice_cfg_sw_lldp(vsi, false, false); - - if (ice_is_vsi_dflt_vsi(vsi)) - ice_clear_dflt_vsi(vsi); - ice_fltr_remove_all(vsi); - ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); - err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx); - if (err) - dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", - vsi->vsi_num, err); - ice_vsi_delete(vsi); - ice_vsi_free_q_vectors(vsi); + ice_vsi_decfg(vsi); if (vsi->netdev) { if (test_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state)) { @@ -3217,13 +3307,6 @@ int ice_vsi_release(struct ice_vsi *vsi) } } - if (vsi->type == ICE_VSI_VF && - vsi->agg_node && vsi->agg_node->valid) - vsi->agg_node->num_vsis--; - ice_vsi_clear_rings(vsi); - ice_vsi_free_stats(vsi); - ice_vsi_put_qs(vsi); - /* retain SW VSI data structure since it is needed to unregister and * free VSI netdev when PF is not in reset recovery pending state,\ * for ex: during rmmod. 
@@ -3392,29 +3475,24 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi, int prev_txq, int prev_rxq) /** * ice_vsi_rebuild - Rebuild VSI after reset * @vsi: VSI to be rebuild - * @init_vsi: is this an initialization or a reconfigure of the VSI + * @init_vsi: flag, tell if VSI need to be initialized * * Returns 0 on success and negative value on failure */ -int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) +int ice_vsi_rebuild(struct ice_vsi *vsi, int init_vsi) { - u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 }; struct ice_coalesce_stored *coalesce; - int ret, i, prev_txq, prev_rxq; + int ret, prev_txq, prev_rxq; int prev_num_q_vectors = 0; - enum ice_vsi_type vtype; struct ice_pf *pf; if (!vsi) return -EINVAL; pf = vsi->back; - vtype = vsi->type; - if (WARN_ON(vtype == ICE_VSI_VF && !vsi->vf)) + if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf)) return -EINVAL; - ice_vsi_init_vlan_ops(vsi); - coalesce = kcalloc(vsi->num_q_vectors, sizeof(struct ice_coalesce_stored), GFP_KERNEL); if (!coalesce) @@ -3425,163 +3503,16 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) prev_txq = vsi->num_txq; prev_rxq = vsi->num_rxq; - ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); - ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx); + ice_vsi_decfg(vsi); + ret = ice_vsi_cfg_def(vsi, vsi->vf, vsi->ch); if (ret) - dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", - vsi->vsi_num, ret); - ice_vsi_free_q_vectors(vsi); - - /* SR-IOV determines needed MSIX resources all at once instead of per - * VSI since when VFs are spawned we know how many VFs there are and how - * many interrupts each VF needs. SR-IOV MSIX resources are also - * cleared in the same manner. - */ - if (vtype != ICE_VSI_VF) { - /* reclaim SW interrupts back to the common pool */ - ice_free_res(pf->irq_tracker, vsi->base_vector, vsi->idx); - pf->num_avail_sw_msix += vsi->num_q_vectors; - vsi->base_vector = 0; - } - - if (ice_is_xdp_ena_vsi(vsi)) - /* return value check can be skipped here, it always returns - * 0 if reset is in progress - */ - ice_destroy_xdp_rings(vsi); - ice_vsi_put_qs(vsi); - ice_vsi_clear_rings(vsi); - ice_vsi_free_arrays(vsi); - if (vtype == ICE_VSI_VF) - ice_vsi_set_num_qs(vsi, vsi->vf); - else - ice_vsi_set_num_qs(vsi, NULL); - - ret = ice_vsi_alloc_arrays(vsi); - if (ret < 0) - goto err_vsi; - - ice_vsi_get_qs(vsi); - - ice_alloc_fd_res(vsi); - ice_vsi_set_tc_cfg(vsi); - - /* Initialize VSI struct elements and create VSI in FW */ - ret = ice_vsi_init(vsi, init_vsi); - if (ret < 0) - goto err_vsi; - - switch (vtype) { - case ICE_VSI_CTRL: - case ICE_VSI_SWITCHDEV_CTRL: - case ICE_VSI_PF: - ret = ice_vsi_alloc_q_vectors(vsi); - if (ret) - goto err_rings; - - ret = ice_vsi_setup_vector_base(vsi); - if (ret) - goto err_vectors; - - ret = ice_vsi_set_q_vectors_reg_idx(vsi); - if (ret) - goto err_vectors; - - ret = ice_vsi_alloc_rings(vsi); - if (ret) - goto err_vectors; - - ret = ice_vsi_alloc_ring_stats(vsi); - if (ret) - goto err_vectors; - - ice_vsi_map_rings_to_vectors(vsi); - - vsi->stat_offsets_loaded = false; - if (ice_is_xdp_ena_vsi(vsi)) { - ret = ice_vsi_determine_xdp_res(vsi); - if (ret) - goto err_vectors; - ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog); - if (ret) - goto err_vectors; - } - /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ - if (vtype != ICE_VSI_CTRL) - /* Do not exit if configuring RSS had an issue, at - * least receive traffic on first queue. 
Hence no - * need to capture return value - */ - if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) - ice_vsi_cfg_rss_lut_key(vsi); - - /* disable or enable CRC stripping */ - if (vsi->netdev) - ice_vsi_cfg_crc_strip(vsi, !!(vsi->netdev->features & - NETIF_F_RXFCS)); - - break; - case ICE_VSI_VF: - ret = ice_vsi_alloc_q_vectors(vsi); - if (ret) - goto err_rings; - - ret = ice_vsi_set_q_vectors_reg_idx(vsi); - if (ret) - goto err_vectors; - - ret = ice_vsi_alloc_rings(vsi); - if (ret) - goto err_vectors; - - ret = ice_vsi_alloc_ring_stats(vsi); - if (ret) - goto err_vectors; - - vsi->stat_offsets_loaded = false; - break; - case ICE_VSI_CHNL: - if (test_bit(ICE_FLAG_RSS_ENA, pf->flags)) { - ice_vsi_cfg_rss_lut_key(vsi); - ice_vsi_set_rss_flow_fld(vsi); - } - break; - default: - break; - } - - /* configure VSI nodes based on number of queues and TC's */ - for (i = 0; i < vsi->tc_cfg.numtc; i++) { - /* configure VSI nodes based on number of queues and TC's. - * ADQ creates VSIs for each TC/Channel but doesn't - * allocate queues instead it reconfigures the PF queues - * as per the TC command. So max_txqs should point to the - * PF Tx queues. - */ - if (vtype == ICE_VSI_CHNL) - max_txqs[i] = pf->num_lan_tx; - else - max_txqs[i] = vsi->alloc_txq; - - if (ice_is_xdp_ena_vsi(vsi)) - max_txqs[i] += vsi->num_xdp_txq; - } - - if (test_bit(ICE_FLAG_TC_MQPRIO, pf->flags)) - /* If MQPRIO is set, means channel code path, hence for main - * VSI's, use TC as 1 - */ - ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, 1, max_txqs); - else - ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, - vsi->tc_cfg.ena_tc, max_txqs); + goto err_vsi_cfg; + ret = ice_vsi_cfg_tc_lan(pf, vsi); if (ret) { - dev_err(ice_pf_to_dev(pf), "VSI %d failed lan queue config, error %d\n", - vsi->vsi_num, ret); - if (init_vsi) { + if (init_vsi & ICE_VSI_FLAG_INIT) { ret = -EIO; - goto err_vectors; + goto err_vsi_cfg_tc_lan; } else { kfree(coalesce); return ice_schedule_reset(pf, ICE_RESET_PFR); @@ -3589,23 +3520,16 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi) } if (ice_vsi_realloc_stat_arrays(vsi, prev_txq, prev_rxq)) - goto err_vectors; + goto err_vsi_cfg_tc_lan; ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors); kfree(coalesce); return 0; -err_vectors: - ice_vsi_free_q_vectors(vsi); -err_rings: - ice_vsi_clear_rings(vsi); - set_bit(ICE_RESET_FAILED, pf->state); - kfree(coalesce); - return ret; -err_vsi: - ice_vsi_free(vsi); - set_bit(ICE_RESET_FAILED, pf->state); +err_vsi_cfg_tc_lan: + ice_vsi_decfg(vsi); +err_vsi_cfg: kfree(coalesce); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 6203114b805c..ad4d5314ca76 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -61,8 +61,11 @@ int ice_vsi_release(struct ice_vsi *vsi); void ice_vsi_close(struct ice_vsi *vsi); +int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vf *vf, + struct ice_channel *ch); int ice_ena_vsi(struct ice_vsi *vsi, bool locked); +void ice_vsi_decfg(struct ice_vsi *vsi); void ice_dis_vsi(struct ice_vsi *vsi, bool locked); int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id); @@ -70,7 +73,9 @@ int ice_free_res(struct ice_res_tracker *res, u16 index, u16 id); int ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id); -int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi); +#define ICE_VSI_FLAG_INIT BIT(0) +#define ICE_VSI_FLAG_NO_INIT 0 +int ice_vsi_rebuild(struct ice_vsi *vsi, int init_vsi); bool 
ice_is_reset_in_progress(unsigned long *state); int ice_wait_for_reset(struct ice_pf *pf, unsigned long timeout); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index bfab9a713533..8e648b2b34d9 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -4225,13 +4225,13 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx) /* set for the next time the netdev is started */ if (!netif_running(vsi->netdev)) { - ice_vsi_rebuild(vsi, false); + ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT); dev_dbg(ice_pf_to_dev(pf), "Link is down, queue count change happens when link is brought up\n"); goto done; } ice_vsi_close(vsi); - ice_vsi_rebuild(vsi, false); + ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT); ice_pf_dcb_recfg(pf); ice_vsi_open(vsi); done: @@ -7135,7 +7135,7 @@ static int ice_vsi_rebuild_by_type(struct ice_pf *pf, enum ice_vsi_type type) continue; /* rebuild the VSI */ - err = ice_vsi_rebuild(vsi, true); + err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT); if (err) { dev_err(dev, "rebuild VSI failed, err %d, VSI index %d, type %s\n", err, vsi->idx, ice_vsi_type_str(type)); @@ -8544,7 +8544,7 @@ static int ice_rebuild_channels(struct ice_pf *pf) type = vsi->type; /* rebuild ADQ VSI */ - err = ice_vsi_rebuild(vsi, true); + err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT); if (err) { dev_err(dev, "VSI (type:%s) at index %d rebuild failed, err %d\n", ice_vsi_type_str(type), vsi->idx, err); @@ -8776,14 +8776,14 @@ static int ice_setup_tc_mqprio_qdisc(struct net_device *netdev, void *type_data) cur_rxq = vsi->num_rxq; /* proceed with rebuild main VSI using correct number of queues */ - ret = ice_vsi_rebuild(vsi, false); + ret = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT); if (ret) { /* fallback to current number of queues */ dev_info(dev, "Rebuild failed with new queues, try with current number of queues\n"); vsi->req_txq = cur_txq; vsi->req_rxq = cur_rxq; clear_bit(ICE_RESET_FAILED, pf->state); - if (ice_vsi_rebuild(vsi, false)) { + if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT)) { dev_err(dev, "Rebuild of main VSI failed again\n"); return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c index 375eb6493f0f..c3b406df269f 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c @@ -256,7 +256,7 @@ static int ice_vf_rebuild_vsi(struct ice_vf *vf) if (WARN_ON(!vsi)) return -EINVAL; - if (ice_vsi_rebuild(vsi, true)) { + if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_INIT)) { dev_err(ice_pf_to_dev(pf), "failed to rebuild VF %d VSI\n", vf->vf_id); return -EIO; From patchwork Mon Dec 12 11:16:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071018 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DBA0C04FDE for ; Mon, 12 Dec 2022 11:36:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232082AbiLLLgH (ORCPT ); Mon, 12 Dec 2022 06:36:07 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43062 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231952AbiLLLf2 (ORCPT ); Mon, 12 Dec 2022 06:35:28 
-0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F17EE62F3 for ; Mon, 12 Dec 2022 03:33:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844788; x=1702380788; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=blZt6kNB7h+IVWZ2yD3/Bndv24v3tOAAvbW91M+5zkg=; b=ljQ7M9DxjNxqKXvO0aDzBSriGIHZO+HB2RjMLwpTgV8HcK7h5Roo6n2+ 8i02Zr3sV0bftsnnYdZu2jWmYN8HWVn1rNBaSj4RqdUWsJCCG+/ySnU30 2BHO19c0O+rgHMlE08UwuDhQjUEmE36Pj31L8OQX2bPPxEpaSV91IHw96 YwOVLvW/WgFFiK4T4LgAlK3AZ7UgnoPS4O5uF80Tl0j7ALTUJnLK+ykLg meubHSrNotSIe96uvWyAwBu3qVuVfrKJHTtPBlrw82inYSfTCM3YF++KQ bEN5XX7k4+HAG6s7zANfb0/H/Q+fG6QtYEwYNrXiU2eOOWioE6tmVRmKZ Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861485" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861485" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:07 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459764" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459764" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:33:03 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 05/10] ice: stop hard coding the ICE_VSI_CTRL location Date: Mon, 12 Dec 2022 12:16:40 +0100 Message-Id: <20221212111645.1198680-6-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jacob Keller When allocating the ICE_VSI_CTRL, the allocated struct ice_vsi pointer is stored into the PF's pf->vsi array at a fixed location. This was historically done on the basis that it could provide an O(1) lookup for the special control VSI. Since we store the ctrl_vsi_idx, we already have O(1) lookup regardless of where in the array we store this VSI. Simplify the logic in ice_vsi_alloc by using the same method of storing the control VSI as other types of VSIs. 
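To picture that scheme, below is a minimal userspace C sketch of the same pattern (illustrative only, not driver code; the demo_* names and the array size are invented for the example): every VSI is stored in the next free slot and its index is recorded at allocation time, so the later control-VSI lookup stays O(1) without reserving a fixed slot.

#include <stdio.h>
#include <stdlib.h>

#define DEMO_MAX_VSI 8
#define DEMO_NO_VSI  (-1)

struct demo_vsi {
        int idx;
        int is_ctrl;
};

struct demo_pf {
        struct demo_vsi *vsi[DEMO_MAX_VSI];
        int next_vsi;     /* hint for the next free slot */
        int ctrl_vsi_idx; /* O(1) handle to the control VSI */
};

static int demo_get_free_slot(struct demo_vsi **array, int size, int curr)
{
        int i;

        /* prefer the hinted slot, otherwise scan for any hole */
        if (curr >= 0 && curr < size && !array[curr])
                return curr;
        for (i = 0; i < size; i++)
                if (!array[i])
                        return i;
        return DEMO_NO_VSI;
}

static struct demo_vsi *demo_vsi_alloc(struct demo_pf *pf, int is_ctrl)
{
        struct demo_vsi *vsi;

        if (pf->next_vsi == DEMO_NO_VSI)
                return NULL;

        vsi = calloc(1, sizeof(*vsi));
        if (!vsi)
                return NULL;

        /* fill the slot and make note of the index */
        vsi->idx = pf->next_vsi;
        vsi->is_ctrl = is_ctrl;
        pf->vsi[vsi->idx] = vsi;

        /* remembering the index is what makes the lookup O(1) later */
        if (is_ctrl)
                pf->ctrl_vsi_idx = vsi->idx;

        /* prepare next_vsi for the next allocation */
        pf->next_vsi = demo_get_free_slot(pf->vsi, DEMO_MAX_VSI,
                                          pf->next_vsi + 1);
        return vsi;
}

int main(void)
{
        struct demo_pf pf = { .next_vsi = 0, .ctrl_vsi_idx = DEMO_NO_VSI };
        struct demo_vsi *data, *ctrl;

        data = demo_vsi_alloc(&pf, 0); /* data VSI lands in slot 0 */
        ctrl = demo_vsi_alloc(&pf, 1); /* control VSI simply takes the next slot */
        printf("control VSI lives at pf.vsi[%d]\n", pf.ctrl_vsi_idx);

        free(data);
        free(ctrl);
        return 0;
}

The real driver records the index in vf->ctrl_vsi_idx or pf->ctrl_vsi_idx, as the diff below shows; the sketch only demonstrates why a fixed slot is unnecessary for O(1) access.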
Signed-off-by: Jacob Keller Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_lib.c | 34 +++++++++++------------- 1 file changed, 15 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 9549290c76ab..eba990120a06 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -479,10 +479,7 @@ int ice_vsi_free(struct ice_vsi *vsi) /* updates the PF for this cleared VSI */ pf->vsi[vsi->idx] = NULL; - if (vsi->idx < pf->next_vsi && vsi->type != ICE_VSI_CTRL) - pf->next_vsi = vsi->idx; - if (vsi->idx < pf->next_vsi && vsi->type == ICE_VSI_CTRL && vsi->vf) - pf->next_vsi = vsi->idx; + pf->next_vsi = vsi->idx; ice_vsi_free_stats(vsi); ice_vsi_free_arrays(vsi); @@ -686,23 +683,22 @@ ice_vsi_alloc(struct ice_pf *pf, struct ice_port_info *pi, vsi->vf = vf; set_bit(ICE_VSI_DOWN, vsi->state); - if (vsi->type == ICE_VSI_CTRL && !vf) { - /* Use the last VSI slot as the index for PF control VSI */ - vsi->idx = pf->num_alloc_vsi - 1; - pf->ctrl_vsi_idx = vsi->idx; - pf->vsi[vsi->idx] = vsi; - } else { - /* fill slot and make note of the index */ - vsi->idx = pf->next_vsi; - pf->vsi[pf->next_vsi] = vsi; + /* fill slot and make note of the index */ + vsi->idx = pf->next_vsi; + pf->vsi[pf->next_vsi] = vsi; - /* prepare pf->next_vsi for next use */ - pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi, - pf->next_vsi); - } + /* prepare pf->next_vsi for next use */ + pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi, + pf->next_vsi); - if (vsi->type == ICE_VSI_CTRL && vf) - vf->ctrl_vsi_idx = vsi->idx; + if (vsi->type == ICE_VSI_CTRL) { + if (vf) { + vf->ctrl_vsi_idx = vsi->idx; + } else { + WARN_ON(pf->ctrl_vsi_idx != ICE_NO_VSI); + pf->ctrl_vsi_idx = vsi->idx; + } + } unlock_pf: mutex_unlock(&pf->sw_mutex); From patchwork Mon Dec 12 11:16:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071019 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9315EC4332F for ; Mon, 12 Dec 2022 11:36:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232203AbiLLLgZ (ORCPT ); Mon, 12 Dec 2022 06:36:25 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42342 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232226AbiLLLfw (ORCPT ); Mon, 12 Dec 2022 06:35:52 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73B28FACE for ; Mon, 12 Dec 2022 03:33:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844792; x=1702380792; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4qBemkVgf2MyuQqqTXIr8Z6qQK7bf5sqdrIdEFaqAkQ=; b=KTEfmA4iytcZ+uypHb6j/k4g/3kUPHiUtTPQWvVt1fGUb6H0wnp8U8UO hPoB/4KfR1kyL0sQ4mSaNqiSrkYNBIjJuG9HGK+vX0TRS0DqvzZKBfvkn hIrXogsifLf9VVjKgZUVeRAv7knkIShInbsNh4NxgSZu4zYN9h8qW4bBs 0REQBrTd38dxc/hPi7M+kUE62iW7WDTepJ6pnq7HVlKVJcFxwGiTYMYPR R98OpfP4zAbsfH3kzkqP694noW0MUMpUpnh6YhVECtL3Kjzrq1qrmuaoC 33kLwTeKvl0l5fd/uYDbQ/LZPtP0jznWYEvnbzJ7ep2kn4gxPOpO2Sfoz g==; 
X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861500" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861500" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:12 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459797" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459797" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:33:07 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 06/10] ice: split probe into smaller functions Date: Mon, 12 Dec 2022 12:16:41 +0100 Message-Id: <20221212111645.1198680-7-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Part of the code from probe can be reused in the reload flow. Move this code to a separate function. Create unroll functions for each part of initialization, like ice_init_dev() and ice_deinit_dev(). This simplifies unrolling and can be used in the remove flow. Avoid freeing the port info, as it could be reused in the reload path; it will be freed in the remove path, since it is allocated via devm_kzalloc().
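As a rough illustration of the resulting structure, the standalone C sketch below pairs each init stage with a matching deinit and unwinds with gotos on failure; the demo_* functions are placeholders standing in for stages such as ice_init_dev(), ice_init_eth() and ice_init_rdma(), not the driver code itself.

#include <stdio.h>

static int demo_init_dev(void)    { puts("init dev");    return 0; }
static void demo_deinit_dev(void)  { puts("deinit dev");  }
static int demo_init_eth(void)    { puts("init eth");    return 0; }
static void demo_deinit_eth(void)  { puts("deinit eth");  }
static int demo_init_rdma(void)   { puts("init rdma");   return 0; }
static void demo_deinit_rdma(void) { puts("deinit rdma"); }

static int demo_probe(void)
{
        int err;

        err = demo_init_dev();
        if (err)
                return err;

        err = demo_init_eth();
        if (err)
                goto err_init_eth;

        err = demo_init_rdma();
        if (err)
                goto err_init_rdma;

        return 0;

err_init_rdma:
        demo_deinit_eth();
err_init_eth:
        demo_deinit_dev();
        return err;
}

static void demo_remove(void)
{
        /* remove is just the successful init stages, torn down in reverse */
        demo_deinit_rdma();
        demo_deinit_eth();
        demo_deinit_dev();
}

int main(void)
{
        if (demo_probe()) {
                puts("probe failed; the stages that succeeded were unwound");
                return 1;
        }
        demo_remove();
        return 0;
}

Because every stage has a symmetric teardown, the same pair of routines can back both probe/remove and a later load/unload (reload) path, which is the reuse the patch is after.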
Signed-off-by: Michal Swiatkowski Reviewed-by: Przemek Kitszel --- drivers/net/ethernet/intel/ice/ice.h | 2 + drivers/net/ethernet/intel/ice/ice_common.c | 11 +- drivers/net/ethernet/intel/ice/ice_main.c | 897 ++++++++++++-------- 3 files changed, 559 insertions(+), 351 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index 70a9609f1b80..99c7003d9f35 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -933,6 +933,8 @@ int ice_open(struct net_device *netdev); int ice_open_internal(struct net_device *netdev); int ice_stop(struct net_device *netdev); void ice_service_task_schedule(struct ice_pf *pf); +int ice_load(struct ice_pf *pf); +void ice_unload(struct ice_pf *pf); /** * ice_set_rdma_cap - enable RDMA support diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 0e9584e50d82..dd1c9bf20c0a 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -1113,8 +1113,10 @@ int ice_init_hw(struct ice_hw *hw) if (status) goto err_unroll_cqinit; - hw->port_info = devm_kzalloc(ice_hw_to_dev(hw), - sizeof(*hw->port_info), GFP_KERNEL); + if (!hw->port_info) + hw->port_info = devm_kzalloc(ice_hw_to_dev(hw), + sizeof(*hw->port_info), + GFP_KERNEL); if (!hw->port_info) { status = -ENOMEM; goto err_unroll_cqinit; @@ -1242,11 +1244,6 @@ void ice_deinit_hw(struct ice_hw *hw) ice_free_hw_tbls(hw); mutex_destroy(&hw->tnl_lock); - if (hw->port_info) { - devm_kfree(ice_hw_to_dev(hw), hw->port_info); - hw->port_info = NULL; - } - /* Attempt to disable FW logging before shutting down control queues */ ice_cfg_fw_log(hw, false); ice_destroy_all_ctrlq(hw); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 8e648b2b34d9..d8f51aee78ff 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -3428,53 +3428,6 @@ static void ice_set_netdev_features(struct net_device *netdev) netdev->hw_features |= NETIF_F_RXFCS; } -/** - * ice_cfg_netdev - Allocate, configure and register a netdev - * @vsi: the VSI associated with the new netdev - * - * Returns 0 on success, negative value on failure - */ -static int ice_cfg_netdev(struct ice_vsi *vsi) -{ - struct ice_netdev_priv *np; - struct net_device *netdev; - u8 mac_addr[ETH_ALEN]; - - netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq, - vsi->alloc_rxq); - if (!netdev) - return -ENOMEM; - - set_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); - vsi->netdev = netdev; - np = netdev_priv(netdev); - np->vsi = vsi; - - ice_set_netdev_features(netdev); - - ice_set_ops(netdev); - - if (vsi->type == ICE_VSI_PF) { - SET_NETDEV_DEV(netdev, ice_pf_to_dev(vsi->back)); - ether_addr_copy(mac_addr, vsi->port_info->mac.perm_addr); - eth_hw_addr_set(netdev, mac_addr); - ether_addr_copy(netdev->perm_addr, mac_addr); - } - - netdev->priv_flags |= IFF_UNICAST_FLT; - - /* Setup netdev TC information */ - ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc); - - /* setup watchdog timeout value to be 5 second */ - netdev->watchdog_timeo = 5 * HZ; - - netdev->min_mtu = ETH_MIN_MTU; - netdev->max_mtu = ICE_MAX_MTU; - - return 0; -} - /** * ice_fill_rss_lut - Fill the RSS lookup table with default values * @lut: Lookup table @@ -3727,76 +3680,6 @@ static int ice_tc_indir_block_register(struct ice_vsi *vsi) return flow_indr_dev_register(ice_indr_setup_tc_cb, np); } -/** - * ice_setup_pf_sw - Setup the HW switch on 
startup or after reset - * @pf: board private structure - * - * Returns 0 on success, negative value on failure - */ -static int ice_setup_pf_sw(struct ice_pf *pf) -{ - struct device *dev = ice_pf_to_dev(pf); - bool dvm = ice_is_dvm_ena(&pf->hw); - struct ice_vsi *vsi; - int status; - - if (ice_is_reset_in_progress(pf->state)) - return -EBUSY; - - status = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); - if (status) - return -EIO; - - vsi = ice_pf_vsi_setup(pf, pf->hw.port_info); - if (!vsi) - return -ENOMEM; - - /* init channel list */ - INIT_LIST_HEAD(&vsi->ch_list); - - status = ice_cfg_netdev(vsi); - if (status) - goto unroll_vsi_setup; - /* netdev has to be configured before setting frame size */ - ice_vsi_cfg_frame_size(vsi); - - /* init indirect block notifications */ - status = ice_tc_indir_block_register(vsi); - if (status) { - dev_err(dev, "Failed to register netdev notifier\n"); - goto unroll_cfg_netdev; - } - - /* Setup DCB netlink interface */ - ice_dcbnl_setup(vsi); - - /* registering the NAPI handler requires both the queues and - * netdev to be created, which are done in ice_pf_vsi_setup() - * and ice_cfg_netdev() respectively - */ - ice_napi_add(vsi); - - status = ice_init_mac_fltr(pf); - if (status) - goto unroll_napi_add; - - return 0; - -unroll_napi_add: - ice_tc_indir_block_unregister(vsi); -unroll_cfg_netdev: - ice_napi_del(vsi); - if (vsi->netdev) { - clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); - free_netdev(vsi->netdev); - vsi->netdev = NULL; - } - -unroll_vsi_setup: - ice_vsi_release(vsi); - return status; -} - /** * ice_get_avail_q_count - Get count of queues in use * @pf_qmap: bitmap to get queue use count from @@ -4494,6 +4377,21 @@ static int ice_init_fdir(struct ice_pf *pf) return err; } +static void ice_deinit_fdir(struct ice_pf *pf) +{ + struct ice_vsi *vsi = ice_get_ctrl_vsi(pf); + + if (!vsi) + return; + + ice_vsi_manage_fdir(vsi, false); + ice_vsi_release(vsi); + if (pf->ctrl_vsi_idx != ICE_NO_VSI) { + pf->vsi[pf->ctrl_vsi_idx] = NULL; + pf->ctrl_vsi_idx = ICE_NO_VSI; + } +} + /** * ice_get_opt_fw_name - return optional firmware file name or NULL * @pf: pointer to the PF instance @@ -4663,133 +4561,198 @@ static void ice_print_wake_reason(struct ice_pf *pf) /** * ice_register_netdev - register netdev - * @pf: pointer to the PF struct + * @vsi: pointer to the VSI struct */ -static int ice_register_netdev(struct ice_pf *pf) +static int ice_register_netdev(struct ice_vsi *vsi) { - struct ice_vsi *vsi; - int err = 0; + int err; - vsi = ice_get_main_vsi(pf); if (!vsi || !vsi->netdev) return -EIO; err = register_netdev(vsi->netdev); if (err) - goto err_register_netdev; + return err; set_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state); netif_carrier_off(vsi->netdev); netif_tx_stop_all_queues(vsi->netdev); return 0; -err_register_netdev: - free_netdev(vsi->netdev); - vsi->netdev = NULL; - clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); - return err; +} + +static void ice_unregister_netdev(struct ice_vsi *vsi) +{ + if (!vsi || !vsi->netdev) + return; + + unregister_netdev(vsi->netdev); + clear_bit(ICE_VSI_NETDEV_REGISTERED, vsi->state); } /** - * ice_probe - Device initialization routine - * @pdev: PCI device information struct - * @ent: entry in ice_pci_tbl + * ice_cfg_netdev - Allocate, configure and register a netdev + * @vsi: the VSI associated with the new netdev * - * Returns 0 on success, negative on failure + * Returns 0 on success, negative value on failure */ -static int -ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) 
+static int ice_cfg_netdev(struct ice_vsi *vsi) { - struct device *dev = &pdev->dev; - struct ice_vsi *vsi; - struct ice_pf *pf; - struct ice_hw *hw; - int i, err; + struct ice_netdev_priv *np; + struct net_device *netdev; + u8 mac_addr[ETH_ALEN]; - if (pdev->is_virtfn) { - dev_err(dev, "can't probe a virtual function\n"); - return -EINVAL; - } + netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq, + vsi->alloc_rxq); + if (!netdev) + return -ENOMEM; - /* this driver uses devres, see - * Documentation/driver-api/driver-model/devres.rst - */ - err = pcim_enable_device(pdev); - if (err) - return err; + set_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); + vsi->netdev = netdev; + np = netdev_priv(netdev); + np->vsi = vsi; - err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev)); - if (err) { - dev_err(dev, "BAR0 I/O map error %d\n", err); - return err; + ice_set_netdev_features(netdev); + ice_set_ops(netdev); + + if (vsi->type == ICE_VSI_PF) { + SET_NETDEV_DEV(netdev, ice_pf_to_dev(vsi->back)); + ether_addr_copy(mac_addr, vsi->port_info->mac.perm_addr); + eth_hw_addr_set(netdev, mac_addr); } - pf = ice_allocate_pf(dev); - if (!pf) - return -ENOMEM; + netdev->priv_flags |= IFF_UNICAST_FLT; - /* initialize Auxiliary index to invalid value */ - pf->aux_idx = -1; + /* Setup netdev TC information */ + ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc); - /* set up for high or low DMA */ - err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); - if (err) { - dev_err(dev, "DMA configuration failed: 0x%x\n", err); + netdev->max_mtu = ICE_MAX_MTU; + + return 0; +} + +static void ice_decfg_netdev(struct ice_vsi *vsi) +{ + clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); + free_netdev(vsi->netdev); + vsi->netdev = NULL; +} + +static int ice_start_eth(struct ice_vsi *vsi) +{ + int err; + + err = ice_init_mac_fltr(vsi->back); + if (err) return err; - } - pci_enable_pcie_error_reporting(pdev); - pci_set_master(pdev); + rtnl_lock(); + err = ice_vsi_open(vsi); + rtnl_unlock(); - pf->pdev = pdev; - pci_set_drvdata(pdev, pf); - set_bit(ICE_DOWN, pf->state); - /* Disable service task until DOWN bit is cleared */ - set_bit(ICE_SERVICE_DIS, pf->state); + return err; +} - hw = &pf->hw; - hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0]; - pci_save_state(pdev); +static int ice_init_eth(struct ice_pf *pf) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + struct device *dev = ice_pf_to_dev(pf); + int err; - hw->back = pf; - hw->vendor_id = pdev->vendor; - hw->device_id = pdev->device; - pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id); - hw->subsystem_vendor_id = pdev->subsystem_vendor; - hw->subsystem_device_id = pdev->subsystem_device; - hw->bus.device = PCI_SLOT(pdev->devfn); - hw->bus.func = PCI_FUNC(pdev->devfn); - ice_set_ctrlq_len(hw); + if (!vsi) + return -EINVAL; - pf->msg_enable = netif_msg_init(debug, ICE_DFLT_NETIF_M); + /* init channel list */ + INIT_LIST_HEAD(&vsi->ch_list); -#ifndef CONFIG_DYNAMIC_DEBUG - if (debug < -1) - hw->debug_mask = debug; -#endif + err = ice_cfg_netdev(vsi); + if (err) + return err; + /* Setup DCB netlink interface */ + ice_dcbnl_setup(vsi); - err = ice_init_hw(hw); + err = ice_set_cpu_rx_rmap(vsi); if (err) { - dev_err(dev, "ice_init_hw failed: %d\n", err); - err = -EIO; - goto err_exit_unroll; + dev_err(dev, "Failed to set CPU Rx map VSI %d error %d\n", + vsi->vsi_num, err); + goto err_set_cpu_rx_rmap; } + err = ice_init_mac_fltr(pf); + if (err) + goto err_init_mac_fltr; - ice_init_feature_support(pf); + err = ice_devlink_create_pf_port(pf); + if (err) + goto 
err_devlink_create_pf_port; - err = ice_init_ddp_config(hw, pf); + SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port); - /* during topology change ice_init_hw may fail */ - if (err) { - err = -EIO; - goto err_exit_unroll; - } + err = ice_register_netdev(vsi); + if (err) + goto err_register_netdev; - /* if ice_init_ddp_config fails, ICE_FLAG_ADV_FEATURES bit won't be - * set in pf->state, which will cause ice_is_safe_mode to return - * true - */ - if (ice_is_safe_mode(pf)) { + err = ice_tc_indir_block_register(vsi); + if (err) + goto err_tc_indir_block_register; + + ice_napi_add(vsi); + + return 0; + +err_tc_indir_block_register: + ice_unregister_netdev(vsi); +err_register_netdev: + ice_devlink_destroy_pf_port(pf); +err_devlink_create_pf_port: +err_init_mac_fltr: + ice_free_cpu_rx_rmap(vsi); +err_set_cpu_rx_rmap: + ice_decfg_netdev(vsi); + return err; +} + +static void ice_deinit_eth(struct ice_pf *pf) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + + if (!vsi) + return; + + ice_vsi_close(vsi); + ice_unregister_netdev(vsi); + ice_devlink_destroy_pf_port(pf); + ice_free_cpu_rx_rmap(vsi); + ice_tc_indir_block_unregister(vsi); + ice_decfg_netdev(vsi); +} + +static int ice_init_dev(struct ice_pf *pf) +{ + struct device *dev = ice_pf_to_dev(pf); + struct ice_hw *hw = &pf->hw; + int err; + + err = ice_init_hw(hw); + if (err) { + dev_err(dev, "ice_init_hw failed: %d\n", err); + return err; + } + + ice_init_feature_support(pf); + + err = ice_init_ddp_config(hw, pf); + + /* during topology change ice_init_hw may fail */ + if (err) { + err = -EIO; + goto err_init_pf; + } + + /* if ice_init_ddp_config fails, ICE_FLAG_ADV_FEATURES bit won't be + * set in pf->state, which will cause ice_is_safe_mode to return + * true + */ + if (ice_is_safe_mode(pf)) { /* we already got function/device capabilities but these don't * reflect what the driver needs to do in safe mode. 
Instead of * adding conditional logic everywhere to ignore these @@ -4801,62 +4764,38 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) err = ice_init_pf(pf); if (err) { dev_err(dev, "ice_init_pf failed: %d\n", err); - goto err_init_pf_unroll; + goto err_init_pf; } - ice_devlink_init_regions(pf); - pf->hw.udp_tunnel_nic.set_port = ice_udp_tunnel_set_port; pf->hw.udp_tunnel_nic.unset_port = ice_udp_tunnel_unset_port; pf->hw.udp_tunnel_nic.flags = UDP_TUNNEL_NIC_INFO_MAY_SLEEP; pf->hw.udp_tunnel_nic.shared = &pf->hw.udp_tunnel_shared; - i = 0; if (pf->hw.tnl.valid_count[TNL_VXLAN]) { - pf->hw.udp_tunnel_nic.tables[i].n_entries = + pf->hw.udp_tunnel_nic.tables[0].n_entries = pf->hw.tnl.valid_count[TNL_VXLAN]; - pf->hw.udp_tunnel_nic.tables[i].tunnel_types = + pf->hw.udp_tunnel_nic.tables[0].tunnel_types = UDP_TUNNEL_TYPE_VXLAN; - i++; } if (pf->hw.tnl.valid_count[TNL_GENEVE]) { - pf->hw.udp_tunnel_nic.tables[i].n_entries = + pf->hw.udp_tunnel_nic.tables[1].n_entries = pf->hw.tnl.valid_count[TNL_GENEVE]; - pf->hw.udp_tunnel_nic.tables[i].tunnel_types = + pf->hw.udp_tunnel_nic.tables[1].tunnel_types = UDP_TUNNEL_TYPE_GENEVE; - i++; - } - - pf->num_alloc_vsi = hw->func_caps.guar_num_vsi; - if (!pf->num_alloc_vsi) { - err = -EIO; - goto err_init_pf_unroll; - } - if (pf->num_alloc_vsi > UDP_TUNNEL_NIC_MAX_SHARING_DEVICES) { - dev_warn(&pf->pdev->dev, - "limiting the VSI count due to UDP tunnel limitation %d > %d\n", - pf->num_alloc_vsi, UDP_TUNNEL_NIC_MAX_SHARING_DEVICES); - pf->num_alloc_vsi = UDP_TUNNEL_NIC_MAX_SHARING_DEVICES; - } - - pf->vsi = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi), - GFP_KERNEL); - if (!pf->vsi) { - err = -ENOMEM; - goto err_init_pf_unroll; } pf->vsi_stats = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi_stats), GFP_KERNEL); if (!pf->vsi_stats) { err = -ENOMEM; - goto err_init_vsi_unroll; + goto err_alloc_stats; } err = ice_init_interrupt_scheme(pf); if (err) { dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err); err = -EIO; - goto err_init_vsi_stats_unroll; + goto err_init_interrupt_scheme; } /* In case of MSIX we are going to setup the misc vector right here @@ -4867,49 +4806,96 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) err = ice_req_irq_msix_misc(pf); if (err) { dev_err(dev, "setup of misc vector failed: %d\n", err); - goto err_init_interrupt_unroll; + goto err_req_irq_msix_misc; } - /* create switch struct for the switch element created by FW on boot */ - pf->first_sw = devm_kzalloc(dev, sizeof(*pf->first_sw), GFP_KERNEL); - if (!pf->first_sw) { - err = -ENOMEM; - goto err_msix_misc_unroll; - } + return 0; - if (hw->evb_veb) - pf->first_sw->bridge_mode = BRIDGE_MODE_VEB; - else - pf->first_sw->bridge_mode = BRIDGE_MODE_VEPA; +err_req_irq_msix_misc: + ice_clear_interrupt_scheme(pf); +err_init_interrupt_scheme: + devm_kfree(dev, pf->vsi_stats); +err_alloc_stats: + ice_deinit_pf(pf); +err_init_pf: + ice_deinit_hw(hw); + return err; +} - pf->first_sw->pf = pf; +static void ice_deinit_dev(struct ice_pf *pf) +{ + ice_free_irq_msix_misc(pf); + ice_clear_interrupt_scheme(pf); + ice_deinit_pf(pf); + ice_deinit_hw(&pf->hw); +} - /* record the sw_id available for later use */ - pf->first_sw->sw_id = hw->port_info->sw_id; +static void ice_init_features(struct ice_pf *pf) +{ + struct device *dev = ice_pf_to_dev(pf); - err = ice_setup_pf_sw(pf); - if (err) { - dev_err(dev, "probe failed due to setup PF switch: %d\n", err); - goto err_alloc_sw_unroll; - } + if (ice_is_safe_mode(pf)) 
+ return; - clear_bit(ICE_SERVICE_DIS, pf->state); + /* initialize DDP driven features */ + if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) + ice_ptp_init(pf); - /* tell the firmware we are up */ - err = ice_send_version(pf); - if (err) { - dev_err(dev, "probe failed sending driver version %s. error: %d\n", - UTS_RELEASE, err); - goto err_send_version_unroll; + if (ice_is_feature_supported(pf, ICE_F_GNSS)) + ice_gnss_init(pf); + + /* Note: Flow director init failure is non-fatal to load */ + if (ice_init_fdir(pf)) + dev_err(dev, "could not initialize flow director\n"); + + /* Note: DCB init failure is non-fatal to load */ + if (ice_init_pf_dcb(pf, false)) { + clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags); + clear_bit(ICE_FLAG_DCB_ENA, pf->flags); + } else { + ice_cfg_lldp_mib_change(&pf->hw, true); } - /* since everything is good, start the service timer */ - mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period)); + if (ice_init_lag(pf)) + dev_warn(dev, "Failed to init link aggregation support\n"); +} + +static void ice_deinit_features(struct ice_pf *pf) +{ + ice_deinit_lag(pf); + if (test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags)) + ice_cfg_lldp_mib_change(&pf->hw, false); + ice_deinit_fdir(pf); + if (ice_is_feature_supported(pf, ICE_F_GNSS)) + ice_gnss_exit(pf); + if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) + ice_ptp_release(pf); +} + +static void ice_init_wakeup(struct ice_pf *pf) +{ + /* Save wakeup reason register for later use */ + pf->wakeup_reason = rd32(&pf->hw, PFPM_WUS); + + /* check for a power management event */ + ice_print_wake_reason(pf); + + /* clear wake status, all bits */ + wr32(&pf->hw, PFPM_WUS, U32_MAX); + + /* Disable WoL at init, wait for user to enable */ + device_set_wakeup_enable(ice_pf_to_dev(pf), false); +} + +static int ice_init_link(struct ice_pf *pf) +{ + struct device *dev = ice_pf_to_dev(pf); + int err; err = ice_init_link_events(pf->hw.port_info); if (err) { dev_err(dev, "ice_init_link_events failed: %d\n", err); - goto err_send_version_unroll; + return err; } /* not a fatal error if this fails */ @@ -4945,106 +4931,336 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) set_bit(ICE_FLAG_NO_MEDIA, pf->flags); } - ice_verify_cacheline_size(pf); + return err; +} - /* Save wakeup reason register for later use */ - pf->wakeup_reason = rd32(hw, PFPM_WUS); +static int ice_init_pf_sw(struct ice_pf *pf) +{ + bool dvm = ice_is_dvm_ena(&pf->hw); + struct ice_vsi *vsi; + int err; - /* check for a power management event */ - ice_print_wake_reason(pf); + /* create switch struct for the switch element created by FW on boot */ + pf->first_sw = kzalloc(sizeof(*pf->first_sw), GFP_KERNEL); + if (!pf->first_sw) + return -ENOMEM; - /* clear wake status, all bits */ - wr32(hw, PFPM_WUS, U32_MAX); + if (pf->hw.evb_veb) + pf->first_sw->bridge_mode = BRIDGE_MODE_VEB; + else + pf->first_sw->bridge_mode = BRIDGE_MODE_VEPA; - /* Disable WoL at init, wait for user to enable */ - device_set_wakeup_enable(dev, false); + pf->first_sw->pf = pf; - if (ice_is_safe_mode(pf)) { - ice_set_safe_mode_vlan_cfg(pf); - goto probe_done; + /* record the sw_id available for later use */ + pf->first_sw->sw_id = pf->hw.port_info->sw_id; + + err = ice_aq_set_port_params(pf->hw.port_info, dvm, NULL); + if (err) + goto err_aq_set_port_params; + + vsi = ice_pf_vsi_setup(pf, pf->hw.port_info); + if (!vsi) { + err = -ENOMEM; + goto err_pf_vsi_setup; } - /* initialize DDP driven features */ - if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) - ice_ptp_init(pf); + 
return 0; - if (ice_is_feature_supported(pf, ICE_F_GNSS)) - ice_gnss_init(pf); +err_pf_vsi_setup: +err_aq_set_port_params: + kfree(pf->first_sw); + return err; +} - /* Note: Flow director init failure is non-fatal to load */ - if (ice_init_fdir(pf)) - dev_err(dev, "could not initialize flow director\n"); +static void ice_deinit_pf_sw(struct ice_pf *pf) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); - /* Note: DCB init failure is non-fatal to load */ - if (ice_init_pf_dcb(pf, false)) { - clear_bit(ICE_FLAG_DCB_CAPABLE, pf->flags); - clear_bit(ICE_FLAG_DCB_ENA, pf->flags); - } else { - ice_cfg_lldp_mib_change(&pf->hw, true); + if (!vsi) + return; + + ice_vsi_release(vsi); + kfree(pf->first_sw); +} + +static int ice_alloc_vsis(struct ice_pf *pf) +{ + struct device *dev = ice_pf_to_dev(pf); + + pf->num_alloc_vsi = pf->hw.func_caps.guar_num_vsi; + if (!pf->num_alloc_vsi) + return -EIO; + + if (pf->num_alloc_vsi > UDP_TUNNEL_NIC_MAX_SHARING_DEVICES) { + dev_warn(dev, + "limiting the VSI count due to UDP tunnel limitation %d > %d\n", + pf->num_alloc_vsi, UDP_TUNNEL_NIC_MAX_SHARING_DEVICES); + pf->num_alloc_vsi = UDP_TUNNEL_NIC_MAX_SHARING_DEVICES; } - if (ice_init_lag(pf)) - dev_warn(dev, "Failed to init link aggregation support\n"); + pf->vsi = devm_kcalloc(dev, pf->num_alloc_vsi, sizeof(*pf->vsi), + GFP_KERNEL); + if (!pf->vsi) + return -ENOMEM; - /* print PCI link speed and width */ - pcie_print_link_status(pf->pdev); + return 0; +} -probe_done: - err = ice_devlink_create_pf_port(pf); +static void ice_dealloc_vsis(struct ice_pf *pf) +{ + pf->num_alloc_vsi = 0; + devm_kfree(ice_pf_to_dev(pf), pf->vsi); + pf->vsi = NULL; +} + +static int ice_init_devlink(struct ice_pf *pf) +{ + int err; + + err = ice_devlink_register_params(pf); if (err) - goto err_create_pf_port; + return err; - vsi = ice_get_main_vsi(pf); - if (!vsi || !vsi->netdev) - goto err_netdev_reg; + ice_devlink_init_regions(pf); + ice_devlink_register(pf); - SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port); + return 0; +} - err = ice_register_netdev(pf); +static void ice_deinit_devlink(struct ice_pf *pf) +{ + ice_devlink_unregister(pf); + ice_devlink_destroy_regions(pf); + ice_devlink_unregister_params(pf); +} + +static int ice_init(struct ice_pf *pf) +{ + int err; + + err = ice_init_dev(pf); if (err) - goto err_netdev_reg; + return err; - err = ice_devlink_register_params(pf); + err = ice_alloc_vsis(pf); if (err) - goto err_netdev_reg; + goto err_alloc_vsis; + + err = ice_init_pf_sw(pf); + if (err) + goto err_init_pf_sw; + + ice_init_wakeup(pf); + + err = ice_init_link(pf); + if (err) + goto err_init_link; + + err = ice_send_version(pf); + if (err) + goto err_init_link; + + ice_verify_cacheline_size(pf); + + if (ice_is_safe_mode(pf)) + ice_set_safe_mode_vlan_cfg(pf); + else + /* print PCI link speed and width */ + pcie_print_link_status(pf->pdev); /* ready to go, so clear down state bit */ clear_bit(ICE_DOWN, pf->state); + clear_bit(ICE_SERVICE_DIS, pf->state); + + /* since everything is good, start the service timer */ + mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period)); + + return 0; + +err_init_link: + ice_deinit_pf_sw(pf); +err_init_pf_sw: + ice_dealloc_vsis(pf); +err_alloc_vsis: + ice_deinit_dev(pf); + return err; +} + +static void ice_deinit(struct ice_pf *pf) +{ + set_bit(ICE_SERVICE_DIS, pf->state); + set_bit(ICE_DOWN, pf->state); + + ice_deinit_dev(pf); + ice_dealloc_vsis(pf); + ice_deinit_pf_sw(pf); +} + +/** + * ice_load - load pf by init hw and starting VSI + * @pf: pointer to the pf instance 
+ */ +int ice_load(struct ice_pf *pf) +{ + struct ice_vsi *vsi; + int err; + + err = ice_reset(&pf->hw, ICE_RESET_PFR); + if (err) + return err; + + err = ice_init_dev(pf); + if (err) + return err; + + vsi = ice_get_main_vsi(pf); + err = ice_vsi_cfg(vsi, NULL, NULL); + if (err) + goto err_vsi_cfg; + + err = ice_start_eth(ice_get_main_vsi(pf)); + if (err) + goto err_start_eth; + err = ice_init_rdma(pf); + if (err) + goto err_init_rdma; + + ice_init_features(pf); + ice_service_task_restart(pf); + + clear_bit(ICE_DOWN, pf->state); + + return 0; + +err_init_rdma: + ice_vsi_close(ice_get_main_vsi(pf)); +err_start_eth: + ice_vsi_decfg(ice_get_main_vsi(pf)); +err_vsi_cfg: + ice_deinit_dev(pf); + return err; +} + +/** + * ice_unload - unload pf by stopping VSI and deinit hw + * @pf: pointer to the pf instance + */ +void ice_unload(struct ice_pf *pf) +{ + ice_deinit_features(pf); + ice_deinit_rdma(pf); + ice_vsi_close(ice_get_main_vsi(pf)); + ice_vsi_decfg(ice_get_main_vsi(pf)); + ice_deinit_dev(pf); +} + +/** + * ice_probe - Device initialization routine + * @pdev: PCI device information struct + * @ent: entry in ice_pci_tbl + * + * Returns 0 on success, negative on failure + */ +static int +ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) +{ + struct device *dev = &pdev->dev; + struct ice_pf *pf; + struct ice_hw *hw; + int err; + + if (pdev->is_virtfn) { + dev_err(dev, "can't probe a virtual function\n"); + return -EINVAL; + } + + /* this driver uses devres, see + * Documentation/driver-api/driver-model/devres.rst + */ + err = pcim_enable_device(pdev); + if (err) + return err; + + err = pcim_iomap_regions(pdev, BIT(ICE_BAR0), dev_driver_string(dev)); if (err) { - dev_err(dev, "Failed to initialize RDMA: %d\n", err); - err = -EIO; - goto err_devlink_reg_param; + dev_err(dev, "BAR0 I/O map error %d\n", err); + return err; } - ice_devlink_register(pf); - return 0; + pf = ice_allocate_pf(dev); + if (!pf) + return -ENOMEM; -err_devlink_reg_param: - ice_devlink_unregister_params(pf); -err_netdev_reg: - ice_devlink_destroy_pf_port(pf); -err_create_pf_port: -err_send_version_unroll: - ice_vsi_release_all(pf); -err_alloc_sw_unroll: - set_bit(ICE_SERVICE_DIS, pf->state); + /* initialize Auxiliary index to invalid value */ + pf->aux_idx = -1; + + /* set up for high or low DMA */ + err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); + if (err) { + dev_err(dev, "DMA configuration failed: 0x%x\n", err); + return err; + } + + pci_enable_pcie_error_reporting(pdev); + pci_set_master(pdev); + + pf->pdev = pdev; + pci_set_drvdata(pdev, pf); set_bit(ICE_DOWN, pf->state); - devm_kfree(dev, pf->first_sw); -err_msix_misc_unroll: - ice_free_irq_msix_misc(pf); -err_init_interrupt_unroll: - ice_clear_interrupt_scheme(pf); -err_init_vsi_stats_unroll: - devm_kfree(dev, pf->vsi_stats); - pf->vsi_stats = NULL; -err_init_vsi_unroll: - devm_kfree(dev, pf->vsi); -err_init_pf_unroll: - ice_deinit_pf(pf); - ice_devlink_destroy_regions(pf); - ice_deinit_hw(hw); -err_exit_unroll: + /* Disable service task until DOWN bit is cleared */ + set_bit(ICE_SERVICE_DIS, pf->state); + + hw = &pf->hw; + hw->hw_addr = pcim_iomap_table(pdev)[ICE_BAR0]; + pci_save_state(pdev); + + hw->back = pf; + hw->port_info = NULL; + hw->vendor_id = pdev->vendor; + hw->device_id = pdev->device; + pci_read_config_byte(pdev, PCI_REVISION_ID, &hw->revision_id); + hw->subsystem_vendor_id = pdev->subsystem_vendor; + hw->subsystem_device_id = pdev->subsystem_device; + hw->bus.device = PCI_SLOT(pdev->devfn); + hw->bus.func = 
PCI_FUNC(pdev->devfn); + ice_set_ctrlq_len(hw); + + pf->msg_enable = netif_msg_init(debug, ICE_DFLT_NETIF_M); + +#ifndef CONFIG_DYNAMIC_DEBUG + if (debug < -1) + hw->debug_mask = debug; +#endif + + err = ice_init(pf); + if (err) + goto err_init; + + err = ice_init_eth(pf); + if (err) + goto err_init_eth; + + err = ice_init_rdma(pf); + if (err) + goto err_init_rdma; + + err = ice_init_devlink(pf); + if (err) + goto err_init_devlink; + + ice_init_features(pf); + + return 0; + +err_init_devlink: + ice_deinit_rdma(pf); +err_init_rdma: + ice_deinit_eth(pf); +err_init_eth: + ice_deinit(pf); +err_init: pci_disable_pcie_error_reporting(pdev); pci_disable_device(pdev); return err; @@ -5120,7 +5336,7 @@ static void ice_remove(struct pci_dev *pdev) struct ice_pf *pf = pci_get_drvdata(pdev); int i; - ice_devlink_unregister(pf); + ice_deinit_devlink(pf); for (i = 0; i < ICE_MAX_RESET_WAIT; i++) { if (!ice_is_reset_in_progress(pf->state)) break; @@ -5137,23 +5353,17 @@ static void ice_remove(struct pci_dev *pdev) ice_service_task_stop(pf); ice_aq_cancel_waiting_tasks(pf); - ice_deinit_rdma(pf); - ice_devlink_unregister_params(pf); set_bit(ICE_DOWN, pf->state); - ice_deinit_lag(pf); - if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) - ice_ptp_release(pf); - if (ice_is_feature_supported(pf, ICE_F_GNSS)) - ice_gnss_exit(pf); + ice_deinit_features(pf); + ice_deinit_rdma(pf); if (!ice_is_safe_mode(pf)) ice_remove_arfs(pf); ice_setup_mc_magic_wake(pf); ice_vsi_release_all(pf); mutex_destroy(&(&pf->hw)->fdir_fltr_lock); - ice_devlink_destroy_pf_port(pf); ice_set_wake(pf); - ice_free_irq_msix_misc(pf); + ice_deinit_dev(pf); ice_for_each_vsi(pf, i) { if (!pf->vsi[i]) continue; @@ -5171,7 +5381,6 @@ static void ice_remove(struct pci_dev *pdev) */ ice_reset(&pf->hw, ICE_RESET_PFR); pci_wait_for_pending_transaction(pdev); - ice_clear_interrupt_scheme(pf); pci_disable_pcie_error_reporting(pdev); pci_disable_device(pdev); } From patchwork Mon Dec 12 11:16:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071020 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5D70C4332F for ; Mon, 12 Dec 2022 11:36:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231705AbiLLLgq (ORCPT ); Mon, 12 Dec 2022 06:36:46 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232169AbiLLLgF (ORCPT ); Mon, 12 Dec 2022 06:36:05 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB517FD01 for ; Mon, 12 Dec 2022 03:33:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844796; x=1702380796; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=unS16nVWy3isgLs85nEntpNArvPSZg+8ZBDHAbDTZtc=; b=XhpOirjxd5tOdkazSqCCvgh4ZH0zMnyRNMrXQy7Ootgzahux4OB2YGYW Q4iwoA01ZR0AeS8MP2c7Kvtmfh7ql1mIcANkha+GMDRQ3s0YUyAw4ZRTU 4ycVTR/pZn3UpQ2Y7J8vJom90hbwcNnWnK2iVj4qBdo0oQEtHP7y1wNsl qD2Xe7+HXxysqoLPcB3yjV067cORuIQKwOKy+OntVf5zMhKz+TxmApSQV rTZPP7I80QOcfK8057dKsRG0TpSkMNb9aDguxa5G16U/8vnjCWTcuM9Ae 
rCC3+hdx29oYGwHfyDHcvkzkGckZajgkbbZnG6zsKw/SEu7ownwR4cY2j A==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861515" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861515" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:16 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459813" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459813" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:33:12 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 07/10] ice: sync netdev filters after clearing VSI Date: Mon, 12 Dec 2022 12:16:42 +0100 Message-Id: <20221212111645.1198680-8-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org In driver reload path the netdev isn't removed, but VSI is. Remove filters on netdev right after removing them on VSI. Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_fltr.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c index 40e678cfb507..aff7a141c30d 100644 --- a/drivers/net/ethernet/intel/ice/ice_fltr.c +++ b/drivers/net/ethernet/intel/ice/ice_fltr.c @@ -208,6 +208,11 @@ static int ice_fltr_remove_eth_list(struct ice_vsi *vsi, struct list_head *list) void ice_fltr_remove_all(struct ice_vsi *vsi) { ice_remove_vsi_fltr(&vsi->back->hw, vsi->idx); + /* sync netdev filters if exist */ + if (vsi->netdev) { + __dev_uc_unsync(vsi->netdev, NULL); + __dev_mc_unsync(vsi->netdev, NULL); + } } /** From patchwork Mon Dec 12 11:16:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071022 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE1FDC4332F for ; Mon, 12 Dec 2022 11:38:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231926AbiLLLiP (ORCPT ); Mon, 12 Dec 2022 06:38:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50032 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232245AbiLLLgk (ORCPT ); Mon, 12 Dec 2022 06:36:40 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AD17A10072 for ; Mon, 12 Dec 2022 03:33:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844800; x=1702380800; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=De8/hoqshrRcFoO6dpkoJyws8syDggPVjTXTixkRy+E=; b=FxBAzD842elZaRWQHOrAb0X9XBRqNKdakOcQE5FR6v/TFqdnLAw+ad9M WWzzT7QEsXlBnGLskdVLPTa1al+2HgXITFqGgB84ppQ9ZHaoxRsJFwg3X FY9b7AIFQDQ2syCbmu9gruWuRVHYYMQgDV2ANEMOPFDAsBvpEHD8Ows5J pZf+ErQUg2WbVA35iqj5Hzf4Ep8e27q2OMYgmLcnA94h1c6w0LJnGoUiT QbGddlm6PbdOxpOBU8T1qC5Zq7S8cWGx1cj4FeE6lbiaDkk8rVxjhY2y0 CQKcOskn1rhSEIb2zSXF9OwW+chg+gL6gPlsR7Ugku/T1SfjasMML6tw8 A==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861533" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861533" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:20 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459833" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459833" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:33:16 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 08/10] ice: move VSI delete outside deconfig Date: Mon, 12 Dec 2022 12:16:43 +0100 Message-Id: <20221212111645.1198680-9-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org In deconfig VSI shouldn't be deleted from hw. Rewrite VSI delete function to reflect that sometimes it is only needed to remove VSI from hw without freeing the memory: ice_vsi_delete() -> delete from HW and free memory ice_vsi_delete_from_hw() -> delete only from HW Value returned from ice_vsi_free() is never used. Change return type to void. 
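The split can be pictured with the small self-contained C sketch below (demo_* names are illustrative, not the driver implementation): one helper drops only the hardware-side state, one frees only the software object, and the combined delete simply calls both.

#include <stdlib.h>

struct demo_vsi {
        int in_hw; /* mirrors whether the VSI still exists in HW */
};

/* remove the VSI from HW only; the memory stays valid, e.g. for reload */
static void demo_vsi_delete_from_hw(struct demo_vsi *vsi)
{
        vsi->in_hw = 0;
}

/* free the software object only */
static void demo_vsi_free(struct demo_vsi *vsi)
{
        free(vsi);
}

/* previous behaviour: remove from HW and free the memory in one call */
static void demo_vsi_delete(struct demo_vsi *vsi)
{
        demo_vsi_delete_from_hw(vsi);
        demo_vsi_free(vsi);
}

int main(void)
{
        struct demo_vsi *vsi = calloc(1, sizeof(*vsi));

        if (!vsi)
                return 1;
        vsi->in_hw = 1;

        /* reload-style path: HW state goes away, the object is kept */
        demo_vsi_delete_from_hw(vsi);
        vsi->in_hw = 1; /* later re-created in HW without reallocating */

        /* full removal path */
        demo_vsi_delete(vsi);
        return 0;
}

Keeping the two steps separate is what lets a reload-style path drop the HW state while reusing the already allocated VSI, matching the ice_vsi_delete_from_hw()/ice_vsi_free()/ice_vsi_delete() naming in the diff below.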
Signed-off-by: Michal Swiatkowski --- drivers/net/ethernet/intel/ice/ice_lib.c | 28 +++++++++++------------ drivers/net/ethernet/intel/ice/ice_lib.h | 1 - drivers/net/ethernet/intel/ice/ice_main.c | 5 +--- 3 files changed, 14 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index eba990120a06..ae6ce6a74d03 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -282,10 +282,10 @@ static int ice_get_free_slot(void *array, int size, int curr) } /** - * ice_vsi_delete - delete a VSI from the switch + * ice_vsi_delete_from_hw - delete a VSI from the switch * @vsi: pointer to VSI being removed */ -void ice_vsi_delete(struct ice_vsi *vsi) +static void ice_vsi_delete_from_hw(struct ice_vsi *vsi) { struct ice_pf *pf = vsi->back; struct ice_vsi_ctx *ctxt; @@ -453,26 +453,21 @@ static int ice_vsi_alloc_ring_stats(struct ice_vsi *vsi) * * This deallocates the VSI's queue resources, removes it from the PF's * VSI array if necessary, and deallocates the VSI - * - * Returns 0 on success, negative on failure */ -int ice_vsi_free(struct ice_vsi *vsi) +static void ice_vsi_free(struct ice_vsi *vsi) { struct ice_pf *pf = NULL; struct device *dev; - if (!vsi) - return 0; - - if (!vsi->back) - return -EINVAL; + if (!vsi || !vsi->back) + return; pf = vsi->back; dev = ice_pf_to_dev(pf); if (!pf->vsi[vsi->idx] || pf->vsi[vsi->idx] != vsi) { dev_dbg(dev, "vsi does not exist at pf->vsi[%d]\n", vsi->idx); - return -EINVAL; + return; } mutex_lock(&pf->sw_mutex); @@ -485,8 +480,12 @@ int ice_vsi_free(struct ice_vsi *vsi) ice_vsi_free_arrays(vsi); mutex_unlock(&pf->sw_mutex); devm_kfree(dev, vsi); +} - return 0; +void ice_vsi_delete(struct ice_vsi *vsi) +{ + ice_vsi_delete_from_hw(vsi); + ice_vsi_free(vsi); } /** @@ -2843,7 +2842,7 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) unroll_alloc_q_vector: ice_vsi_free_q_vectors(vsi); unroll_vsi_init: - ice_vsi_delete(vsi); + ice_vsi_delete_from_hw(vsi); unroll_get_qs: ice_vsi_put_qs(vsi); unroll_vsi_alloc: @@ -2904,7 +2903,6 @@ void ice_vsi_decfg(struct ice_vsi *vsi) ice_vsi_clear_rings(vsi); ice_vsi_free_q_vectors(vsi); - ice_vsi_delete(vsi); ice_vsi_put_qs(vsi); ice_vsi_free_arrays(vsi); @@ -3308,7 +3306,7 @@ int ice_vsi_release(struct ice_vsi *vsi) * for ex: during rmmod. 
*/ if (!ice_is_reset_in_progress(pf->state)) - ice_vsi_free(vsi); + ice_vsi_delete(vsi); return 0; } diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index ad4d5314ca76..8905f8721a76 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -42,7 +42,6 @@ void ice_cfg_sw_lldp(struct ice_vsi *vsi, bool tx, bool create); int ice_set_link(struct ice_vsi *vsi, bool ena); void ice_vsi_delete(struct ice_vsi *vsi); -int ice_vsi_free(struct ice_vsi *vsi); int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index d8f51aee78ff..fbeac890a606 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -8689,12 +8689,9 @@ static void ice_remove_q_channels(struct ice_vsi *vsi, bool rem_fltr) /* clear the VSI from scheduler tree */ ice_rm_vsi_lan_cfg(ch->ch_vsi->port_info, ch->ch_vsi->idx); - /* Delete VSI from FW */ + /* Delete VSI from FW, PF and HW VSI arrays */ ice_vsi_delete(ch->ch_vsi); - /* Delete VSI from PF and HW VSI arrays */ - ice_vsi_free(ch->ch_vsi); - /* free the channel */ kfree(ch); } From patchwork Mon Dec 12 11:16:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071021 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5E368C4167B for ; Mon, 12 Dec 2022 11:38:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231343AbiLLLiN (ORCPT ); Mon, 12 Dec 2022 06:38:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232367AbiLLLhH (ORCPT ); Mon, 12 Dec 2022 06:37:07 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C097211C33 for ; Mon, 12 Dec 2022 03:33:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844811; x=1702380811; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YywNEnk5AF8Sgvr1H+zIO8qQ/lr1Hm6wdnB4cuX72sE=; b=bfgMk2MI7hzXio7p9wWdW7yJX1eDX9JXGYkYGj7Y87ZAQrsBehxikS0B Hq0O0qSMs98mYxislw7CkhjfYf43dStFrJD590SGB+eisCBXT/Yh4RWg9 jQyLypM2uGiQsvFbbVrHsz9s2k652y0HNWWN32nNXGyXnzeFTbXY2vdCe ibpm3CGWFHYD9yYJnNux6cBOOf0EXB1sOfM+Sx62WCR1ekBxXjLC2Kj3V jieYcUl6VzXU3ie1zpeBKKUpavTX7vyk7YtdlrQSx4DAgN3yBP6SUr6ld DZoSFL5Aa3QOkAFevADRqueQFZEAW2bID+KiPSAXKhnP92QDpSOjGmdfM g==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861544" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861544" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:24 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459864" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459864" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:33:20 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: 
alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 09/10] ice: update VSI instead of init in some cases Date: Mon, 12 Dec 2022 12:16:44 +0100 Message-Id: <20221212111645.1198680-10-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org ice_vsi_cfg() is called from different contexts: 1) the VSI already exists in HW but is being reconfigured, for example because the number of queues changed -> an update command should be sent instead of init 2) the VSI doesn't exist in HW because a reset has happened -> an init command should be sent To support both cases, pass a flag that tells the function which type of command has to be sent to HW. Signed-off-by: Michal Swiatkowski ---
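As a quick illustration (not part of the posted diff), a caller that knows whether the VSI still exists in HW could pick between the two flags introduced by this patch as sketched below; the helper name ice_vsi_cfg_example() is made up for the sketch, while ice_vsi_cfg(), ICE_VSI_FLAG_INIT and ICE_VSI_FLAG_NO_INIT follow the signatures added in the diff that follows.

/* Illustrative sketch only: choose init vs. update depending on whether
 * the VSI still exists in HW (plain reconfiguration) or is gone after a
 * reset and has to be created again.
 */
static int ice_vsi_cfg_example(struct ice_vsi *vsi, bool after_reset)
{
	int init_vsi = after_reset ? ICE_VSI_FLAG_INIT : ICE_VSI_FLAG_NO_INIT;

	return ice_vsi_cfg(vsi, vsi->vf, vsi->ch, init_vsi);
}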
drivers/net/ethernet/intel/ice/ice_lib.c | 16 ++++++++++------ drivers/net/ethernet/intel/ice/ice_lib.h | 4 ++-- drivers/net/ethernet/intel/ice/ice_main.c | 2 +- 3 files changed, 13 insertions(+), 9 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index ae6ce6a74d03..cd9a345acdfa 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -2701,9 +2701,11 @@ static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi) * @vf: pointer to VF to which this VSI connects. This field is used primarily * for the ICE_VSI_VF type. Other VSI types should pass NULL. * @ch: ptr to channel + * @init_vsi: is this an initialization or a reconfigure of the VSI */ static int -ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) +ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch, + int init_vsi) { struct device *dev = ice_pf_to_dev(vsi->back); struct ice_pf *pf = vsi->back; @@ -2730,7 +2732,7 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) ice_vsi_set_tc_cfg(vsi); /* create the VSI */ - ret = ice_vsi_init(vsi, true); + ret = ice_vsi_init(vsi, init_vsi); if (ret) goto unroll_get_qs; @@ -2856,12 +2858,14 @@ ice_vsi_cfg_def(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) * @vf: pointer to VF to which this VSI connects. This field is used primarily * for the ICE_VSI_VF type. Other VSI types should pass NULL. * @ch: ptr to channel + * @init_vsi: is this an initialization or a reconfigure of the VSI */ -int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch) +int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vf *vf, struct ice_channel *ch, + int init_vsi) { int ret; - ret = ice_vsi_cfg_def(vsi, vf, ch); + ret = ice_vsi_cfg_def(vsi, vf, ch, init_vsi); if (ret) return ret; @@ -2962,7 +2966,7 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi, if (ice_vsi_alloc_stat_arrays(vsi)) goto err_alloc; - ret = ice_vsi_cfg(vsi, vf, ch); + ret = ice_vsi_cfg(vsi, vf, ch, ICE_VSI_FLAG_INIT); if (ret) goto err_vsi_cfg; @@ -3498,7 +3502,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, int init_vsi) prev_rxq = vsi->num_rxq; ice_vsi_decfg(vsi); - ret = ice_vsi_cfg_def(vsi, vsi->vf, vsi->ch); + ret = ice_vsi_cfg_def(vsi, vsi->vf, vsi->ch, init_vsi); if (ret) goto err_vsi_cfg; diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h index 8905f8721a76..b76f05e1f8a3 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_lib.h @@ -60,8 +60,6 @@ int ice_vsi_release(struct ice_vsi *vsi); void ice_vsi_close(struct ice_vsi *vsi); -int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vf *vf, - struct ice_channel *ch); int ice_ena_vsi(struct ice_vsi *vsi, bool locked); void ice_vsi_decfg(struct ice_vsi *vsi); @@ -75,6 +73,8 @@ ice_get_res(struct ice_pf *pf, struct ice_res_tracker *res, u16 needed, u16 id); #define ICE_VSI_FLAG_INIT BIT(0) #define ICE_VSI_FLAG_NO_INIT 0 int ice_vsi_rebuild(struct ice_vsi *vsi, int init_vsi); +int ice_vsi_cfg(struct ice_vsi *vsi, struct ice_vf *vf, + struct ice_channel *ch, int init_vsi); bool ice_is_reset_in_progress(unsigned long *state); int ice_wait_for_reset(struct ice_pf *pf, unsigned long timeout); diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index fbeac890a606..49c1e9782bf0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -5115,7 +5115,7 @@ int ice_load(struct ice_pf *pf) return err; vsi = ice_get_main_vsi(pf); - err = ice_vsi_cfg(vsi, NULL, NULL); + err = ice_vsi_cfg(vsi, NULL, NULL, ICE_VSI_FLAG_INIT); if (err) goto err_vsi_cfg; From patchwork Mon Dec 12 11:16:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Swiatkowski X-Patchwork-Id: 13071023 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C711CC4167B for ; Mon, 12 Dec 2022 11:38:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232034AbiLLLiR (ORCPT ); Mon, 12 Dec 2022 06:38:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232440AbiLLLhR (ORCPT ); Mon, 12 Dec 2022 06:37:17 -0500 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A99601263F for ; Mon, 12 Dec 2022 03:33:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1670844818; x=1702380818; h=from:to:cc:subject:date:message-id:in-reply-to:
references:mime-version:content-transfer-encoding; bh=k7F0RsBShYShjyyust5V+qOrYvPPyL755O2kv+7WwHo=; b=b1rCkGh+UW18+JUuronaN/vaQ9h2HJwaY08iGJAUEzL0UXQdw68IUIT8 YbXqqyvSTqXIe205PyNWUilZtpt6tjOXuI9+6yQ4gQgrSrfN8oNi7AeZq Zh3KpiJQu7grheDpH4UCoyPJxMNqk7mC4dNrRonOpwZyZ3M/p0zczajnN V0A/kp2eanfRHekKjRdjqXY3Hjdq8EWMaXVKHYLW7gPLo+dvVP9iSPoEl lMk9aCIKFFcVeI6kj2nbkUfMPIpE4aG9XQ4vqQF5qrbbScqdAzQ75c3jX sHYuPDYbrCfuXYl14Lq0K9YuK1EB5KulLKsC+XuE0el8486BsJChNcRQP Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="317861570" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="317861570" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Dec 2022 03:33:28 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10558"; a="893459888" X-IronPort-AV: E=Sophos;i="5.96,238,1665471600"; d="scan'208";a="893459888" Received: from wasp.igk.intel.com ([10.102.20.192]) by fmsmga006.fm.intel.com with ESMTP; 12 Dec 2022 03:33:24 -0800 From: Michal Swiatkowski To: intel-wired-lan@lists.osuosl.org Cc: alexandr.lobakin@intel.com, sridhar.samudrala@intel.com, wojciech.drewek@intel.com, lukasz.czapnik@intel.com, shiraz.saleem@intel.com, jesse.brandeburg@intel.com, mustafa.ismail@intel.com, przemyslaw.kitszel@intel.com, piotr.raczynski@intel.com, jacob.e.keller@intel.com, david.m.ertman@intel.com, leszek.kaliszczuk@intel.com, benjamin.mikailenko@intel.com, paul.m.stillwell.jr@intel.com, netdev@vger.kernel.org, kuba@kernel.org, leon@kernel.org, Michal Swiatkowski Subject: [PATCH net-next v1 10/10] ice: implement devlink reinit action Date: Mon, 12 Dec 2022 12:16:45 +0100 Message-Id: <20221212111645.1198680-11-michal.swiatkowski@linux.intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> References: <20221212111645.1198680-1-michal.swiatkowski@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Call ice_unload() and ice_load() in the driver reinit flow. Block reinit when switchdev, ADQ or SR-IOV is active. In the reload path we don't want to rebuild all features. Ask the user to remove them instead of quietly removing them in the reload path. Signed-off-by: Michal Swiatkowski ---
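For context (an illustrative sketch, not part of the posted diff), the sequence below is roughly what the devlink core drives through the two callbacks added in this patch once DEVLINK_RELOAD_ACTION_DRIVER_REINIT is requested; the wrapper name ice_devlink_driver_reinit_sketch() is made up, while ice_devlink_reload_down()/ice_devlink_reload_up() and the ice_unload()/ice_load() pair they call are the ones introduced or used by the patch.

/* Illustrative sketch only: driver reinit is reload_down() (which calls
 * ice_unload()) followed by reload_up() (which calls ice_load()).
 */
static int ice_devlink_driver_reinit_sketch(struct devlink *devlink,
					    struct netlink_ext_ack *extack)
{
	u32 actions_performed = 0;
	int err;

	err = ice_devlink_reload_down(devlink, false,
				      DEVLINK_RELOAD_ACTION_DRIVER_REINIT,
				      DEVLINK_RELOAD_LIMIT_UNSPEC, extack);
	if (err)
		return err;

	return ice_devlink_reload_up(devlink, DEVLINK_RELOAD_ACTION_DRIVER_REINIT,
				     DEVLINK_RELOAD_LIMIT_UNSPEC,
				     &actions_performed, extack);
}

From userspace this corresponds to "devlink dev reload <pci/...> action driver_reinit", which the checks in ice_devlink_reload_down() reject while switchdev, ADQ or SR-IOV is still configured.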
drivers/net/ethernet/intel/ice/ice_devlink.c | 103 +++++++++++++++---- 1 file changed, 81 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c index 3d109193b7ea..77ae1e0ed734 100644 --- a/drivers/net/ethernet/intel/ice/ice_devlink.c +++ b/drivers/net/ethernet/intel/ice/ice_devlink.c @@ -525,10 +525,7 @@ static int ice_devlink_txbalance_validate(struct devlink *devlink, u32 id, /** * ice_devlink_reload_empr_start - Start EMP reset to activate new firmware - * @devlink: pointer to the devlink instance to reload - * @netns_change: if true, the network namespace is changing - * @action: the action to perform. Must be DEVLINK_RELOAD_ACTION_FW_ACTIVATE - * @limit: limits on what reload should do, such as not resetting + * @pf: pointer to the pf instance * @extack: netlink extended ACK structure * * Allow user to activate new Embedded Management Processor firmware by @@ -541,12 +538,9 @@ static int ice_devlink_txbalance_validate(struct devlink *devlink, u32 id, * any source. */ static int -ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change, - enum devlink_reload_action action, - enum devlink_reload_limit limit, +ice_devlink_reload_empr_start(struct ice_pf *pf, struct netlink_ext_ack *extack) { - struct ice_pf *pf = devlink_priv(devlink); struct device *dev = ice_pf_to_dev(pf); struct ice_hw *hw = &pf->hw; u8 pending; @@ -584,12 +578,52 @@ ice_devlink_reload_empr_start(struct devlink *devlink, bool netns_change, return 0; } +/** + * ice_devlink_reload_down - prepare for reload + * @devlink: pointer to the devlink instance to reload + * @netns_change: if true, the network namespace is changing + * @action: the action to perform + * @limit: limits on what reload should do, such as not resetting + * @extack: netlink extended ACK structure + */ +static int +ice_devlink_reload_down(struct devlink *devlink, bool netns_change, + enum devlink_reload_action action, + enum devlink_reload_limit limit, + struct netlink_ext_ack *extack) +{ + struct ice_pf *pf = devlink_priv(devlink); + + switch (action) { + case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: + if (ice_is_eswitch_mode_switchdev(pf)) { + NL_SET_ERR_MSG_MOD(extack, + "Go to legacy mode before doing reinit\n"); + return -EOPNOTSUPP; + } + if (ice_is_adq_active(pf)) { + NL_SET_ERR_MSG_MOD(extack, + "Turn off ADQ before doing reinit\n"); + return -EOPNOTSUPP; + } + if (ice_has_vfs(pf)) { + NL_SET_ERR_MSG_MOD(extack, + "Remove all VFs before doing reinit\n"); + return -EOPNOTSUPP; + } + ice_unload(pf); + return 0; + case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: + return ice_devlink_reload_empr_start(pf, extack); + default: + WARN_ON(1); + return -EOPNOTSUPP; + } +} + /** * ice_devlink_reload_empr_finish - Wait for EMP reset to finish - * @devlink: pointer to the devlink instance reloading - * @action: the action requested - * @limit: limits imposed by userspace, such as not resetting - * @actions_performed: on return, indicate what actions actually performed + * @pf: pointer to the pf instance * @extack: netlink extended ACK structure * * Wait for driver to finish rebuilding after EMP reset is completed. This * for the driver's rebuild to complete.
*/ static int -ice_devlink_reload_empr_finish(struct devlink *devlink, - enum devlink_reload_action action, - enum devlink_reload_limit limit, - u32 *actions_performed, +ice_devlink_reload_empr_finish(struct ice_pf *pf, struct netlink_ext_ack *extack) { - struct ice_pf *pf = devlink_priv(devlink); int err; - *actions_performed = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE); - err = ice_wait_for_reset(pf, 60 * HZ); if (err) { NL_SET_ERR_MSG_MOD(extack, "Device still resetting after 1 minute"); @@ -1346,12 +1374,43 @@ static int ice_devlink_set_parent(struct devlink_rate *devlink_rate, return status; } +/** + * ice_devlink_reload_up - do reload up after reinit + * @devlink: pointer to the devlink instance reloading + * @action: the action requested + * @limit: limits imposed by userspace, such as not resetting + * @actions_performed: on return, indicate what actions actually performed + * @extack: netlink extended ACK structure + */ +static int +ice_devlink_reload_up(struct devlink *devlink, + enum devlink_reload_action action, + enum devlink_reload_limit limit, + u32 *actions_performed, + struct netlink_ext_ack *extack) +{ + struct ice_pf *pf = devlink_priv(devlink); + + switch (action) { + case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: + *actions_performed = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT); + return ice_load(pf); + case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: + *actions_performed = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE); + return ice_devlink_reload_empr_finish(pf, extack); + default: + WARN_ON(1); + return -EOPNOTSUPP; + } +} + static const struct devlink_ops ice_devlink_ops = { .supported_flash_update_params = DEVLINK_SUPPORT_FLASH_UPDATE_OVERWRITE_MASK, - .reload_actions = BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE), + .reload_actions = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) | + BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE), /* The ice driver currently does not support driver reinit */ - .reload_down = ice_devlink_reload_empr_start, - .reload_up = ice_devlink_reload_empr_finish, + .reload_down = ice_devlink_reload_down, + .reload_up = ice_devlink_reload_up, .port_split = ice_devlink_port_split, .port_unsplit = ice_devlink_port_unsplit, .eswitch_mode_get = ice_eswitch_mode_get,