From patchwork Mon Oct 28 10:03:35 2024
X-Patchwork-Submitter: Michal Swiatkowski
X-Patchwork-Id: 13853229
X-Patchwork-Delegate: kuba@kernel.org
From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, pawel.chmielewski@intel.com,
    sridhar.samudrala@intel.com, jacob.e.keller@intel.com,
    pio.raczynski@gmail.com, konrad.knitter@intel.com,
    marcin.szycik@intel.com, wojciech.drewek@intel.com,
    nex.sw.ncis.nat.hpm.dev@intel.com, przemyslaw.kitszel@intel.com,
    jiri@resnulli.us, horms@kernel.org, David.Laight@ACULAB.COM
Subject: [iwl-next v6 3/9] ice: remove splitting MSI-X between features
Date: Mon, 28 Oct 2024 11:03:35 +0100
Message-ID: <20241028100341.16631-4-michal.swiatkowski@linux.intel.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20241028100341.16631-1-michal.swiatkowski@linux.intel.com>
References: <20241028100341.16631-1-michal.swiatkowski@linux.intel.com>
X-Mailing-List: netdev@vger.kernel.org

With the dynamic approach to MSI-X allocation there is no reason to
statically split the MSI-X vectors between PF features. The splitting code
was also where the total number of needed MSI-X vectors was calculated;
move that calculation to a separate function and use the result as the
default maximum value. Remove ICE_ESWITCH_MSIX, as switchdev does not need
an additional MSI-X vector.

Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h     |   2 -
 drivers/net/ethernet/intel/ice/ice_irq.c | 172 +++--------------------
 2 files changed, 16 insertions(+), 158 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index c6cd7e993ed8..3a771a25b97b 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -98,8 +98,6 @@
 #define ICE_MIN_MSIX	(ICE_MIN_LAN_TXRX_MSIX + ICE_MIN_LAN_OICR_MSIX)
 #define ICE_FDIR_MSIX	2
 #define ICE_RDMA_NUM_AEQ_MSIX	4
-#define ICE_MIN_RDMA_MSIX	2
-#define ICE_ESWITCH_MSIX	1
 #define ICE_NO_VSI	0xffff
 #define ICE_VSI_MAP_CONTIG	0
 #define ICE_VSI_MAP_SCATTER	1
diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
index 0659b96b9b8c..4a50a6dc817e 100644
--- a/drivers/net/ethernet/intel/ice/ice_irq.c
+++ b/drivers/net/ethernet/intel/ice/ice_irq.c
@@ -84,155 +84,11 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf, bool dyn_only)
 	return entry;
 }
 
-/**
- * ice_reduce_msix_usage - Reduce usage of MSI-X vectors
- * @pf: board private structure
- * @v_remain: number of remaining MSI-X vectors to be distributed
- *
- * Reduce the usage of MSI-X vectors when entire request cannot be fulfilled.
- * pf->num_lan_msix and pf->num_rdma_msix values are set based on number of
- * remaining vectors.
- */
-static void ice_reduce_msix_usage(struct ice_pf *pf, int v_remain)
+static int ice_get_default_msix_amount(struct ice_pf *pf)
 {
-	int v_rdma;
-
-	if (!ice_is_rdma_ena(pf)) {
-		pf->num_lan_msix = v_remain;
-		return;
-	}
-
-	/* RDMA needs at least 1 interrupt in addition to AEQ MSIX */
-	v_rdma = ICE_RDMA_NUM_AEQ_MSIX + 1;
-
-	if (v_remain < ICE_MIN_LAN_TXRX_MSIX + ICE_MIN_RDMA_MSIX) {
-		dev_warn(ice_pf_to_dev(pf), "Not enough MSI-X vectors to support RDMA.\n");
-		clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
-
-		pf->num_rdma_msix = 0;
-		pf->num_lan_msix = ICE_MIN_LAN_TXRX_MSIX;
-	} else if ((v_remain < ICE_MIN_LAN_TXRX_MSIX + v_rdma) ||
-		   (v_remain - v_rdma < v_rdma)) {
-		/* Support minimum RDMA and give remaining vectors to LAN MSIX
-		 */
-		pf->num_rdma_msix = ICE_MIN_RDMA_MSIX;
-		pf->num_lan_msix = v_remain - ICE_MIN_RDMA_MSIX;
-	} else {
-		/* Split remaining MSIX with RDMA after accounting for AEQ MSIX
-		 */
-		pf->num_rdma_msix = (v_remain - ICE_RDMA_NUM_AEQ_MSIX) / 2 +
-				    ICE_RDMA_NUM_AEQ_MSIX;
-		pf->num_lan_msix = v_remain - pf->num_rdma_msix;
-	}
-}
-
-/**
- * ice_ena_msix_range - Request a range of MSIX vectors from the OS
- * @pf: board private structure
- *
- * Compute the number of MSIX vectors wanted and request from the OS. Adjust
- * device usage if there are not enough vectors. Return the number of vectors
- * reserved or negative on failure.
- */
-static int ice_ena_msix_range(struct ice_pf *pf)
-{
-	int num_cpus, hw_num_msix, v_other, v_wanted, v_actual;
-	struct device *dev = ice_pf_to_dev(pf);
-	int err;
-
-	hw_num_msix = pf->hw.func_caps.common_cap.num_msix_vectors;
-	num_cpus = num_online_cpus();
-
-	/* LAN miscellaneous handler */
-	v_other = ICE_MIN_LAN_OICR_MSIX;
-
-	/* Flow Director */
-	if (test_bit(ICE_FLAG_FD_ENA, pf->flags))
-		v_other += ICE_FDIR_MSIX;
-
-	/* switchdev */
-	v_other += ICE_ESWITCH_MSIX;
-
-	v_wanted = v_other;
-
-	/* LAN traffic */
-	pf->num_lan_msix = num_cpus;
-	v_wanted += pf->num_lan_msix;
-
-	/* RDMA auxiliary driver */
-	if (ice_is_rdma_ena(pf)) {
-		pf->num_rdma_msix = num_cpus + ICE_RDMA_NUM_AEQ_MSIX;
-		v_wanted += pf->num_rdma_msix;
-	}
-
-	if (v_wanted > hw_num_msix) {
-		int v_remain;
-
-		dev_warn(dev, "not enough device MSI-X vectors. wanted = %d, available = %d\n",
-			 v_wanted, hw_num_msix);
-
-		if (hw_num_msix < ICE_MIN_MSIX) {
-			err = -ERANGE;
-			goto exit_err;
-		}
-
-		v_remain = hw_num_msix - v_other;
-		if (v_remain < ICE_MIN_LAN_TXRX_MSIX) {
-			v_other = ICE_MIN_MSIX - ICE_MIN_LAN_TXRX_MSIX;
-			v_remain = ICE_MIN_LAN_TXRX_MSIX;
-		}
-
-		ice_reduce_msix_usage(pf, v_remain);
-		v_wanted = pf->num_lan_msix + pf->num_rdma_msix + v_other;
-
-		dev_notice(dev, "Reducing request to %d MSI-X vectors for LAN traffic.\n",
-			   pf->num_lan_msix);
-		if (ice_is_rdma_ena(pf))
-			dev_notice(dev, "Reducing request to %d MSI-X vectors for RDMA.\n",
-				   pf->num_rdma_msix);
-	}
-
-	/* actually reserve the vectors */
-	v_actual = pci_alloc_irq_vectors(pf->pdev, ICE_MIN_MSIX, v_wanted,
-					 PCI_IRQ_MSIX);
-	if (v_actual < 0) {
-		dev_err(dev, "unable to reserve MSI-X vectors\n");
-		err = v_actual;
-		goto exit_err;
-	}
-
-	if (v_actual < v_wanted) {
-		dev_warn(dev, "not enough OS MSI-X vectors. requested = %d, obtained = %d\n",
-			 v_wanted, v_actual);
-
-		if (v_actual < ICE_MIN_MSIX) {
-			/* error if we can't get minimum vectors */
-			pci_free_irq_vectors(pf->pdev);
-			err = -ERANGE;
-			goto exit_err;
-		} else {
-			int v_remain = v_actual - v_other;
-
-			if (v_remain < ICE_MIN_LAN_TXRX_MSIX)
-				v_remain = ICE_MIN_LAN_TXRX_MSIX;
-
-			ice_reduce_msix_usage(pf, v_remain);
-
-			dev_notice(dev, "Enabled %d MSI-X vectors for LAN traffic.\n",
-				   pf->num_lan_msix);
-
-			if (ice_is_rdma_ena(pf))
-				dev_notice(dev, "Enabled %d MSI-X vectors for RDMA.\n",
-					   pf->num_rdma_msix);
-		}
-	}
-
-	return v_actual;
-
-exit_err:
-	pf->num_rdma_msix = 0;
-	pf->num_lan_msix = 0;
-	return err;
+	return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
+	       (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
+	       (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_NUM_AEQ_MSIX : 0);
 }
 
 /**
@@ -259,17 +115,21 @@ int ice_init_interrupt_scheme(struct ice_pf *pf)
 	pf->msix.min = ICE_MIN_MSIX;
 
 	if (!pf->msix.max)
-		pf->msix.max = total_vectors / 2;
-
-	vectors = ice_ena_msix_range(pf);
+		pf->msix.max = min(total_vectors,
+				   ice_get_default_msix_amount(pf));
 
-	if (vectors < 0)
-		return -ENOMEM;
-
-	if (pci_msix_can_alloc_dyn(pf->pdev))
+	if (pci_msix_can_alloc_dyn(pf->pdev)) {
+		vectors = pf->msix.min;
 		max_vectors = total_vectors;
-	else
+	} else {
+		vectors = pf->msix.max;
 		max_vectors = vectors;
+	}
+
+	vectors = pci_alloc_irq_vectors(pf->pdev, pf->msix.min, vectors,
+					PCI_IRQ_MSIX);
+	if (vectors < pf->msix.min)
+		return -ENOMEM;
 
 	ice_init_irq_tracker(pf, max_vectors, vectors);
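
[Editorial note, not part of the patch] The effect of the ice_init_interrupt_scheme()
change is that, when pci_msix_can_alloc_dyn() reports support, only pf->msix.min
vectors are reserved up front with pci_alloc_irq_vectors(); anything beyond that can
be obtained on demand. A minimal sketch of such a dynamic request path, assuming only
the generic PCI/MSI core APIs (the helper name and error handling here are
hypothetical, not taken from the ice driver):

/*
 * Illustrative sketch only: grab one additional MSI-X vector at runtime
 * when the device/platform supports dynamic MSI-X allocation.
 */
#include <linux/msi.h>
#include <linux/pci.h>

static int example_alloc_extra_vector(struct pci_dev *pdev)
{
	struct msi_map map;

	if (!pci_msix_can_alloc_dyn(pdev))
		return -EOPNOTSUPP;

	/* Ask the PCI core for any free MSI-X table entry. */
	map = pci_msix_alloc_irq_at(pdev, MSI_ANY_INDEX, NULL);
	if (map.index < 0)
		return map.index;

	/* map.virq is the Linux IRQ number usable with request_irq(). */
	return map.virq;
}

In the non-dynamic fallback the full range still has to be reserved up front, which is
why the per-feature default computed by ice_get_default_msix_amount() is kept as the
pf->msix.max cap passed to pci_alloc_irq_vectors().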