From patchwork Wed Jul 24 23:38:54 2024
X-Patchwork-Submitter: "Nikolova, Tatyana E"
X-Patchwork-Id: 13741440
From: Tatyana Nikolova
To: jgg@nvidia.com, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, mustafa.ismail@intel.com, Joshua Hay,
	Tatyana Nikolova
Subject: [RFC PATCH 02/25] idpf: use reserved rdma vectors from control plane
Date: Wed, 24 Jul 2024 18:38:54 -0500
Message-Id: <20240724233917.704-3-tatyana.e.nikolova@intel.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20240724233917.704-1-tatyana.e.nikolova@intel.com>
References: <20240724233917.704-1-tatyana.e.nikolova@intel.com>

From: Joshua Hay

Fetch the number of reserved RDMA vectors from the control plane and
adjust the number of reserved LAN vectors if necessary. Adjust the
minimum number of vectors the OS should reserve to include RDMA, and
fail if the OS cannot reserve enough vectors for the minimum number of
LAN and RDMA vectors required. Create a separate MSIX table for the
reserved RDMA vectors, which is handed off to the RDMA core device to
manage.

Signed-off-by: Joshua Hay
Signed-off-by: Tatyana Nikolova
---
 drivers/net/ethernet/intel/idpf/idpf.h      | 24 +++++++++-
 drivers/net/ethernet/intel/idpf/idpf_lib.c  | 70 +++++++++++++++++++++++------
 drivers/net/ethernet/intel/idpf/idpf_txrx.h |  1 +
 drivers/net/ethernet/intel/idpf/virtchnl2.h |  5 ++-
 4 files changed, 83 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
index e7a0365..d25e783 100644
--- a/drivers/net/ethernet/intel/idpf/idpf.h
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -493,10 +493,11 @@ struct idpf_vport_config {
  * @flags: See enum idpf_flags
  * @reset_reg: See struct idpf_reset_reg
  * @hw: Device access data
- * @num_req_msix: Requested number of MSIX vectors
  * @num_avail_msix: Available number of MSIX vectors
  * @num_msix_entries: Number of entries in MSIX table
  * @msix_entries: MSIX table
+ * @num_rdma_msix_entries: Available number of MSIX vectors for RDMA
+ * @rdma_msix_entries: RDMA MSIX table
  * @req_vec_chunks: Requested vector chunk data
  * @mb_vector: Mailbox vector data
  * @vector_stack: Stack to store the msix vector indexes
@@ -546,10 +547,11 @@ struct idpf_adapter {
 	DECLARE_BITMAP(flags, IDPF_FLAGS_NBITS);
 	struct idpf_reset_reg reset_reg;
 	struct idpf_hw hw;
-	u16 num_req_msix;
 	u16 num_avail_msix;
 	u16 num_msix_entries;
 	struct msix_entry *msix_entries;
+	u16 num_rdma_msix_entries;
+	struct msix_entry *rdma_msix_entries;
 	struct virtchnl2_alloc_vectors *req_vec_chunks;
 	struct idpf_q_vector mb_vector;
 	struct idpf_vector_lifo vector_stack;
@@ -612,6 +614,15 @@ static inline int idpf_is_queue_model_split(u16 q_model)
 bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
 			    enum idpf_cap_field field, u64 flag);
 
+/**
+ * idpf_is_rdma_cap_ena - Determine if RDMA is supported
+ * @adapter: private data struct
+ */
+static inline bool idpf_is_rdma_cap_ena(struct idpf_adapter *adapter)
+{
+	return idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_RDMA);
+}
+
 #define IDPF_CAP_RSS (\
 	VIRTCHNL2_CAP_RSS_IPV4_TCP	|\
 	VIRTCHNL2_CAP_RSS_IPV4_TCP	|\
@@ -667,6 +678,15 @@ static inline u16 idpf_get_reserved_vecs(struct idpf_adapter *adapter)
 }
 
 /**
+ * idpf_get_reserved_rdma_vecs - Get reserved RDMA vectors
+ * @adapter: private data struct
+ */
+static inline u16 idpf_get_reserved_rdma_vecs(struct idpf_adapter *adapter)
+{
+	return le16_to_cpu(adapter->caps.num_rdma_allocated_vectors);
+}
+
+/**
  * idpf_get_default_vports - Get default number of vports
  * @adapter: private data struct
  */
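As a sanity check for reviewers, the LAN/RDMA budgeting rule that the
idpf_lib.c changes below implement can be reproduced with a small
standalone userspace sketch. The totals here are made-up illustrative
values, not anything the control plane actually reports; only the
IDPF_MIN_RDMA_VEC floor mirrors the driver constant added in
idpf_txrx.h:

/* Standalone sketch of the LAN/RDMA vector split; illustrative only. */
#include <stdio.h>

#define IDPF_MIN_RDMA_VEC 4	/* floor introduced by this patch */

int main(void)
{
	unsigned int total_vecs = 16;   /* hypothetical reserved total */
	unsigned int num_rdma_vecs = 2; /* hypothetical CP answer, below the floor */
	unsigned int num_lan_vecs = total_vecs;

	if (num_rdma_vecs < IDPF_MIN_RDMA_VEC) {
		/* the shortfall comes out of the LAN budget */
		num_lan_vecs -= IDPF_MIN_RDMA_VEC - num_rdma_vecs;
		num_rdma_vecs = IDPF_MIN_RDMA_VEC;
	}

	/* prints: lan=14 rdma=4 */
	printf("lan=%u rdma=%u\n", num_lan_vecs, num_rdma_vecs);
	return 0;
}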
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 52ceda6..0b96518 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -88,6 +88,8 @@ void idpf_intr_rel(struct idpf_adapter *adapter)
 	idpf_deinit_vector_stack(adapter);
 	kfree(adapter->msix_entries);
 	adapter->msix_entries = NULL;
+	kfree(adapter->rdma_msix_entries);
+	adapter->rdma_msix_entries = NULL;
 }
 
 /**
@@ -316,13 +318,29 @@ int idpf_req_rel_vector_indexes(struct idpf_adapter *adapter,
  */
 int idpf_intr_req(struct idpf_adapter *adapter)
 {
+	u16 num_lan_vecs, min_lan_vecs, num_rdma_vecs = 0, min_rdma_vecs = 0;
 	u16 default_vports = idpf_get_default_vports(adapter);
 	int num_q_vecs, total_vecs, num_vec_ids;
 	int min_vectors, v_actual, err;
 	unsigned int vector;
 	u16 *vecids;
+	int i;
 
 	total_vecs = idpf_get_reserved_vecs(adapter);
+	num_lan_vecs = total_vecs;
+	if (idpf_is_rdma_cap_ena(adapter)) {
+		num_rdma_vecs = idpf_get_reserved_rdma_vecs(adapter);
+		min_rdma_vecs = IDPF_MIN_RDMA_VEC;
+
+		if (num_rdma_vecs < min_rdma_vecs) {
+			/* If idpf_get_reserved_rdma_vecs is 0 or less than the
+			 * minimum, the difference is taken from the LAN vecs
+			 */
+			num_lan_vecs -= (min_rdma_vecs - num_rdma_vecs);
+			num_rdma_vecs = min_rdma_vecs;
+		}
+	}
+
 	num_q_vecs = total_vecs - IDPF_MBX_Q_VEC;
 
 	err = idpf_send_alloc_vectors_msg(adapter, num_q_vecs);
@@ -333,27 +351,44 @@ int idpf_intr_req(struct idpf_adapter *adapter)
 		return -EAGAIN;
 	}
 
-	min_vectors = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports;
+	min_lan_vecs = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports;
+	min_vectors = min_lan_vecs + min_rdma_vecs;
 	v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors,
 					 total_vecs, PCI_IRQ_MSIX);
 	if (v_actual < min_vectors) {
-		dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n",
+		dev_err(&adapter->pdev->dev, "Failed to allocate minimum MSIX vectors required: %d\n",
 			v_actual);
 		err = -EAGAIN;
 		goto send_dealloc_vecs;
 	}
 
-	adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry),
-					GFP_KERNEL);
+	if (idpf_is_rdma_cap_ena(adapter)) {
+		if (v_actual < total_vecs) {
+			dev_warn(&adapter->pdev->dev,
+				 "Warning: not enough vectors available. Defaulting to minimum for RDMA and remaining for LAN.\n");
+			num_rdma_vecs = IDPF_MIN_RDMA_VEC;
+		}
+
+		adapter->rdma_msix_entries =
+			kcalloc(num_rdma_vecs,
+				sizeof(struct msix_entry), GFP_KERNEL);
+		if (!adapter->rdma_msix_entries) {
+			err = -ENOMEM;
+			goto free_irq;
+		}
+	}
+
+	num_lan_vecs = v_actual - num_rdma_vecs;
+	adapter->msix_entries = kcalloc(num_lan_vecs, sizeof(struct msix_entry),
+					GFP_KERNEL);
 	if (!adapter->msix_entries) {
 		err = -ENOMEM;
-		goto free_irq;
+		goto free_rdma_msix;
 	}
 
 	idpf_set_mb_vec_id(adapter);
 
-	vecids = kcalloc(total_vecs, sizeof(u16), GFP_KERNEL);
+	vecids = kcalloc(v_actual, sizeof(u16), GFP_KERNEL);
 	if (!vecids) {
 		err = -ENOMEM;
 		goto free_msix;
@@ -373,25 +408,29 @@ int idpf_intr_req(struct idpf_adapter *adapter)
 			goto free_vecids;
 		}
 	} else {
-		int i;
-
 		for (i = 0; i < v_actual; i++)
 			vecids[i] = i;
 	}
 
-	for (vector = 0; vector < v_actual; vector++) {
-		adapter->msix_entries[vector].entry = vecids[vector];
-		adapter->msix_entries[vector].vector =
+	for (i = 0, vector = 0; vector < num_lan_vecs; vector++, i++) {
+		adapter->msix_entries[i].entry = vecids[vector];
+		adapter->msix_entries[i].vector =
+			pci_irq_vector(adapter->pdev, vector);
+	}
+	for (i = 0; i < num_rdma_vecs; vector++, i++) {
+		adapter->rdma_msix_entries[i].entry = vecids[vector];
+		adapter->rdma_msix_entries[i].vector =
 			pci_irq_vector(adapter->pdev, vector);
 	}
 
-	adapter->num_req_msix = total_vecs;
-	adapter->num_msix_entries = v_actual;
 	/* 'num_avail_msix' is used to distribute excess vectors to the vports
 	 * after considering the minimum vectors required per each default
 	 * vport
 	 */
-	adapter->num_avail_msix = v_actual - min_vectors;
+	adapter->num_avail_msix = num_lan_vecs - min_lan_vecs;
+	adapter->num_msix_entries = num_lan_vecs;
+	if (idpf_is_rdma_cap_ena(adapter))
+		adapter->num_rdma_msix_entries = num_rdma_vecs;
 
 	/* Fill MSIX vector lifo stack with vector indexes */
 	err = idpf_init_vector_stack(adapter);
@@ -413,6 +452,9 @@ int idpf_intr_req(struct idpf_adapter *adapter)
 free_msix:
 	kfree(adapter->msix_entries);
 	adapter->msix_entries = NULL;
+free_rdma_msix:
+	kfree(adapter->rdma_msix_entries);
+	adapter->rdma_msix_entries = NULL;
 free_irq:
 	pci_free_irq_vectors(adapter->pdev);
 send_dealloc_vecs:
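Per the commit message, the new rdma_msix_entries table is handed off
wholesale to the RDMA core device. A hypothetical consumer could walk
it as sketched below; rdma_request_irqs() and rdma_ceq_handler() are
invented for illustration and are not part of this series. Only the
idpf_adapter fields match this patch:

/* Hypothetical consumer-side sketch; not part of this series. */
#include <linux/interrupt.h>

#include "idpf.h"

static irqreturn_t rdma_ceq_handler(int irq, void *data)
{
	/* completion event queue processing would go here */
	return IRQ_HANDLED;
}

static int rdma_request_irqs(struct idpf_adapter *adapter)
{
	int i, err;

	/* each entry's .vector was filled from pci_irq_vector() above */
	for (i = 0; i < adapter->num_rdma_msix_entries; i++) {
		err = request_irq(adapter->rdma_msix_entries[i].vector,
				  rdma_ceq_handler, 0, "rdma-ceq", adapter);
		if (err)
			goto unwind;
	}
	return 0;

unwind:
	while (--i >= 0)
		free_irq(adapter->rdma_msix_entries[i].vector, adapter);
	return err;
}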
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 3d046b8..9cb3e8a 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -54,6 +54,7 @@
 /* Default vector sharing */
 #define IDPF_MBX_Q_VEC		1
 #define IDPF_MIN_Q_VEC		1
+#define IDPF_MIN_RDMA_VEC	4
 
 #define IDPF_DFLT_TX_Q_DESC_COUNT		512
 #define IDPF_DFLT_TX_COMPLQ_DESC_COUNT		512
diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h
index 63deb12..80c17e4 100644
--- a/drivers/net/ethernet/intel/idpf/virtchnl2.h
+++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h
@@ -473,6 +473,8 @@ struct virtchnl2_version_info {
  * segment offload.
  * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
  * an LSO.
+ * @num_rdma_allocated_vectors: Maximum number of allocated RDMA vectors for
+ * the device.
  * @pad1: Padding for future extensions.
  *
  * Dataplane driver sends this message to CP to negotiate capabilities and
@@ -520,7 +522,8 @@ struct virtchnl2_get_capabilities {
 	__le32 device_type;
 	u8 min_sso_packet_len;
 	u8 max_hdr_buf_per_lso;
-	u8 pad1[10];
+	__le16 num_rdma_allocated_vectors;
+	u8 pad1[8];
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
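A note on the virtchnl2.h change: the new __le16 is carved out of
existing padding, so the 80-byte wire format negotiated with the CP
does not change size. A simplified host-side model (stand-in types,
not the actual driver structs) makes the invariant easy to verify:

/* Simplified models of the virtchnl2_get_capabilities tail, before and
 * after this patch; uint8_t/uint16_t stand in for the driver's
 * u8/__le16.  Requires C11 for static_assert.
 */
#include <assert.h>
#include <stdint.h>

struct caps_tail_old {
	uint8_t min_sso_packet_len;
	uint8_t max_hdr_buf_per_lso;
	uint8_t pad1[10];
};

struct caps_tail_new {
	uint8_t min_sso_packet_len;
	uint8_t max_hdr_buf_per_lso;
	uint16_t num_rdma_allocated_vectors;	/* __le16 on the wire */
	uint8_t pad1[8];
};

/* 1 + 1 + 10 == 1 + 1 + 2 + 8: the structure size cannot drift */
static_assert(sizeof(struct caps_tail_old) == sizeof(struct caps_tail_new),
	      "wire format size must not change");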