From patchwork Sat Aug 24 03:19:00 2024 X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776192 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Dave Ertman , Mustafa Ismail , Shiraz Saleem , Tatyana Nikolova Subject: [RFC v2 01/25] iidc/ice/irdma: Update IDC to support multiple consumers Date: Fri, 23 Aug 2024 22:19:00 -0500 Message-Id: <20240824031924.421-2-tatyana.e.nikolova@intel.com> X-Mailer:
git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> X-Mailing-List: linux-rdma@vger.kernel.org From: Dave Ertman To support RDMA for the E2000 product, the idpf driver will need to use the IDC interface with the irdma auxiliary driver, thus becoming a second consumer of it. This requires the IDC be updated to support multiple consumers. The use of exported symbols no longer makes sense because it would require all core drivers (ice/idpf) that can interface with the irdma auxiliary driver to be loaded even if hardware is not present for those drivers. To address this, implement an ops struct that is a universal set of naked function pointers, populated by each core driver, for the irdma auxiliary driver to call. Also, previously the ice driver was just exporting its entire pf struct to the auxiliary driver, but since each core driver will have its own different pf struct, implement a universal struct that all core drivers can export to the auxiliary driver through the probe call. The iidc.h header file will be divided into two files. The first, idc_rdma.h, will host all of the generic header info that is needed for RDMA support in the auxiliary device. The second, iidc_rdma.h, will contain specific elements used by Intel drivers to support RDMA. This will primarily be the implementation of a new struct that will be assigned under the new generic opaque element of idc_priv in the idc_rdma_core_dev_info struct. Update ice and irdma to conform with the new IIDC interface definitions. Signed-off-by: Dave Ertman Co-developed-by: Mustafa Ismail Signed-off-by: Mustafa Ismail Co-developed-by: Shiraz Saleem Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/main.c | 110 ++++---- drivers/infiniband/hw/irdma/main.h | 3 +- drivers/infiniband/hw/irdma/osdep.h | 4 +- .../net/ethernet/intel/ice/devlink/devlink.c | 41 ++- drivers/net/ethernet/intel/ice/ice.h | 6 +- drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 46 +++- drivers/net/ethernet/intel/ice/ice_dcb_lib.h | 4 + drivers/net/ethernet/intel/ice/ice_ethtool.c | 8 +- drivers/net/ethernet/intel/ice/ice_idc.c | 245 +++++++++++------- drivers/net/ethernet/intel/ice/ice_idc_int.h | 5 +- drivers/net/ethernet/intel/ice/ice_main.c | 18 +- include/linux/net/intel/idc_rdma.h | 138 ++++++++++ include/linux/net/intel/iidc.h | 107 -------- include/linux/net/intel/iidc_rdma.h | 61 +++++ 14 files changed, 512 insertions(+), 284 deletions(-) create mode 100644 include/linux/net/intel/idc_rdma.h delete mode 100644 include/linux/net/intel/iidc.h create mode 100644 include/linux/net/intel/iidc_rdma.h diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c index 3f13200ff71b..9b6f1d8bf06a 100644 --- a/drivers/infiniband/hw/irdma/main.c +++ b/drivers/infiniband/hw/irdma/main.c @@ -1,7 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB /* Copyright (c) 2015 - 2021 Intel Corporation */ #include "main.h" -#include "../../../net/ethernet/intel/ice/ice.h" MODULE_ALIAS("i40iw"); MODULE_AUTHOR("Intel Corporation, "); @@ -61,7 +60,7 @@ static void irdma_log_invalid_mtu(u16 mtu, struct irdma_sc_dev *dev) } static void irdma_fill_qos_info(struct irdma_l2params *l2params, - struct iidc_qos_params *qos_info) + struct iidc_rdma_qos_params *qos_info) { int i; @@ -85,12 +84,13 @@ static void irdma_fill_qos_info(struct irdma_l2params
*l2params, } } -static void irdma_iidc_event_handler(struct ice_pf *pf, struct iidc_event *event) +static void irdma_idc_event_handler(struct idc_rdma_core_dev_info *cdev_info, + struct idc_rdma_event *event) { - struct irdma_device *iwdev = dev_get_drvdata(&pf->adev->dev); + struct irdma_device *iwdev = dev_get_drvdata(&cdev_info->adev->dev); struct irdma_l2params l2params = {}; - if (*event->type & BIT(IIDC_EVENT_AFTER_MTU_CHANGE)) { + if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_MTU_CHANGE)) { ibdev_dbg(&iwdev->ibdev, "CLNT: new MTU = %d\n", iwdev->netdev->mtu); if (iwdev->vsi.mtu != iwdev->netdev->mtu) { l2params.mtu = iwdev->netdev->mtu; @@ -98,25 +98,26 @@ static void irdma_iidc_event_handler(struct ice_pf *pf, struct iidc_event *event irdma_log_invalid_mtu(l2params.mtu, &iwdev->rf->sc_dev); irdma_change_l2params(&iwdev->vsi, &l2params); } - } else if (*event->type & BIT(IIDC_EVENT_BEFORE_TC_CHANGE)) { + } else if (*event->type & BIT(IDC_RDMA_EVENT_BEFORE_TC_CHANGE)) { if (iwdev->vsi.tc_change_pending) return; irdma_prep_tc_change(iwdev); - } else if (*event->type & BIT(IIDC_EVENT_AFTER_TC_CHANGE)) { - struct iidc_qos_params qos_info = {}; + } else if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_TC_CHANGE)) { + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; if (!iwdev->vsi.tc_change_pending) return; l2params.tc_changed = true; ibdev_dbg(&iwdev->ibdev, "CLNT: TC Change\n"); - ice_get_qos_params(pf, &qos_info); - irdma_fill_qos_info(&l2params, &qos_info); + + irdma_fill_qos_info(&l2params, &idc_priv->qos_info); if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) - iwdev->dcb_vlan_mode = qos_info.num_tc > 1 && !l2params.dscp_mode; + iwdev->dcb_vlan_mode = + l2params.num_tc > 1 && !l2params.dscp_mode; irdma_change_l2params(&iwdev->vsi, &l2params); - } else if (*event->type & BIT(IIDC_EVENT_CRIT_ERR)) { + } else if (*event->type & BIT(IDC_RDMA_EVENT_CRIT_ERR)) { ibdev_warn(&iwdev->ibdev, "ICE OICR event notification: oicr = 0x%08x\n", event->reg); if (event->reg & IRDMAPFINT_OICR_PE_CRITERR_M) { @@ -151,10 +152,10 @@ static void irdma_iidc_event_handler(struct ice_pf *pf, struct iidc_event *event */ static void irdma_request_reset(struct irdma_pci_f *rf) { - struct ice_pf *pf = rf->cdev; + struct idc_rdma_core_dev_info *cdev_info = rf->cdev; ibdev_warn(&rf->iwdev->ibdev, "Requesting a reset\n"); - ice_rdma_request_reset(pf, IIDC_PFR); + cdev_info->ops->request_reset(rf->cdev, IDC_FUNC_RESET); } /** @@ -166,14 +167,15 @@ static int irdma_lan_register_qset(struct irdma_sc_vsi *vsi, struct irdma_ws_node *tc_node) { struct irdma_device *iwdev = vsi->back_vsi; - struct ice_pf *pf = iwdev->rf->cdev; + struct idc_rdma_core_dev_info *cdev_info = iwdev->rf->cdev; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; struct iidc_rdma_qset_params qset = {}; int ret; qset.qs_handle = tc_node->qs_handle; qset.tc = tc_node->traffic_class; qset.vport_id = vsi->vsi_idx; - ret = ice_add_rdma_qset(pf, &qset); + ret = idc_priv->priv_ops->alloc_res(cdev_info, &qset); if (ret) { ibdev_dbg(&iwdev->ibdev, "WS: LAN alloc_res for rdma qset failed.\n"); return ret; @@ -194,7 +196,8 @@ static void irdma_lan_unregister_qset(struct irdma_sc_vsi *vsi, struct irdma_ws_node *tc_node) { struct irdma_device *iwdev = vsi->back_vsi; - struct ice_pf *pf = iwdev->rf->cdev; + struct idc_rdma_core_dev_info *cdev_info = iwdev->rf->cdev; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; struct iidc_rdma_qset_params qset = {}; qset.qs_handle = tc_node->qs_handle; @@ -202,40 +205,48 @@ 
static void irdma_lan_unregister_qset(struct irdma_sc_vsi *vsi, qset.vport_id = vsi->vsi_idx; qset.teid = tc_node->l2_sched_node_id; - if (ice_del_rdma_qset(pf, &qset)) + if (idc_priv->priv_ops->free_res(cdev_info, &qset)) ibdev_dbg(&iwdev->ibdev, "WS: LAN free_res for rdma qset failed.\n"); } static void irdma_remove(struct auxiliary_device *aux_dev) { - struct iidc_auxiliary_dev *iidc_adev = container_of(aux_dev, - struct iidc_auxiliary_dev, - adev); - struct ice_pf *pf = iidc_adev->pf; + struct idc_rdma_core_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); + struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; struct irdma_device *iwdev = auxiliary_get_drvdata(aux_dev); + idc_priv->priv_ops->update_vport_filter(cdev_info, + iwdev->vsi_num, false); irdma_ib_unregister_device(iwdev); - ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, false); - pr_debug("INIT: Gen2 PF[%d] device remove success\n", PCI_FUNC(pf->pdev->devfn)); + pr_debug("INIT: Gen2 PF[%d] device remove success\n", PCI_FUNC(cdev_info->pdev->devfn)); } -static void irdma_fill_device_info(struct irdma_device *iwdev, struct ice_pf *pf, - struct ice_vsi *vsi) +static void irdma_fill_device_info(struct irdma_device *iwdev, + struct idc_rdma_core_dev_info *cdev_info) { + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; struct irdma_pci_f *rf = iwdev->rf; - rf->cdev = pf; + rf->sc_dev.hw = &rf->hw; + rf->iwdev = iwdev; + rf->cdev = cdev_info; + rf->hw.hw_addr = idc_priv->hw_addr; + rf->pcidev = cdev_info->pdev; + rf->hw.device = &rf->pcidev->dev; + rf->msix_count = cdev_info->msix_count; + rf->pf_id = idc_priv->pf_id; + rf->msix_entries = cdev_info->msix_entries; + rf->gen_ops.register_qset = irdma_lan_register_qset; rf->gen_ops.unregister_qset = irdma_lan_unregister_qset; - rf->hw.hw_addr = pf->hw.hw_addr; - rf->pcidev = pf->pdev; - rf->msix_count = pf->num_rdma_msix; - rf->pf_id = pf->hw.pf_id; - rf->msix_entries = &pf->msix_entries[pf->rdma_base_vector]; - rf->default_vsi.vsi_idx = vsi->vsi_num; - rf->protocol_used = pf->rdma_mode & IIDC_RDMA_PROTOCOL_ROCEV2 ? - IRDMA_ROCE_PROTOCOL_ONLY : IRDMA_IWARP_PROTOCOL_ONLY; + + rf->default_vsi.vsi_idx = idc_priv->vport_id; + rf->protocol_used = + cdev_info->rdma_protocol == IDC_RDMA_PROTOCOL_ROCEV2 ? 
+ IRDMA_ROCE_PROTOCOL_ONLY : IRDMA_IWARP_PROTOCOL_ONLY; rf->rdma_ver = IRDMA_GEN_2; rf->rsrc_profile = IRDMA_HMC_PROFILE_DEFAULT; rf->rst_to = IRDMA_RST_TIMEOUT_HZ; @@ -243,8 +254,9 @@ static void irdma_fill_device_info(struct irdma_device *iwdev, struct ice_pf *pf rf->limits_sel = 7; rf->iwdev = iwdev; mutex_init(&iwdev->ah_tbl_lock); - iwdev->netdev = vsi->netdev; - iwdev->vsi_num = vsi->vsi_num; + + iwdev->netdev = idc_priv->netdev; + iwdev->vsi_num = idc_priv->vport_id; iwdev->init_state = INITIAL_STATE; iwdev->roce_cwnd = IRDMA_ROCE_CWND_DEFAULT; iwdev->roce_ackcreds = IRDMA_ROCE_ACKCREDS_DEFAULT; @@ -256,19 +268,15 @@ static void irdma_fill_device_info(struct irdma_device *iwdev, struct ice_pf *pf static int irdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *id) { - struct iidc_auxiliary_dev *iidc_adev = container_of(aux_dev, - struct iidc_auxiliary_dev, - adev); - struct ice_pf *pf = iidc_adev->pf; - struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct iidc_qos_params qos_info = {}; + struct idc_rdma_core_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); + struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; struct irdma_device *iwdev; struct irdma_pci_f *rf; struct irdma_l2params l2params = {}; int err; - if (!vsi) - return -EIO; iwdev = ib_alloc_device(irdma_device, ibdev); if (!iwdev) return -ENOMEM; @@ -278,7 +286,7 @@ static int irdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_ return -ENOMEM; } - irdma_fill_device_info(iwdev, pf, vsi); + irdma_fill_device_info(iwdev, cdev_info); rf = iwdev->rf; err = irdma_ctrl_init_hw(rf); @@ -286,8 +294,7 @@ static int irdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_ goto err_ctrl_init; l2params.mtu = iwdev->netdev->mtu; - ice_get_qos_params(pf, &qos_info); - irdma_fill_qos_info(&l2params, &qos_info); + irdma_fill_qos_info(&l2params, &idc_priv->qos_info); if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) iwdev->dcb_vlan_mode = l2params.num_tc > 1 && !l2params.dscp_mode; @@ -299,7 +306,8 @@ static int irdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_ if (err) goto err_ibreg; - ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, true); + idc_priv->priv_ops->update_vport_filter(cdev_info, iwdev->vsi_num, + true); ibdev_dbg(&iwdev->ibdev, "INIT: Gen2 PF[%d] device probe success\n", PCI_FUNC(rf->pcidev->devfn)); auxiliary_set_drvdata(aux_dev, iwdev); @@ -325,13 +333,13 @@ static const struct auxiliary_device_id irdma_auxiliary_id_table[] = { MODULE_DEVICE_TABLE(auxiliary, irdma_auxiliary_id_table); -static struct iidc_auxiliary_drv irdma_auxiliary_drv = { +static struct idc_rdma_core_auxiliary_drv irdma_auxiliary_drv = { .adrv = { .id_table = irdma_auxiliary_id_table, .probe = irdma_probe, .remove = irdma_remove, }, - .event_handler = irdma_iidc_event_handler, + .event_handler = irdma_idc_event_handler, }; static int __init irdma_init_module(void) diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index 9f0ed6e84471..e81f37583138 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -29,7 +29,8 @@ #include #endif #include -#include +#include +#include #include #include #include diff --git a/drivers/infiniband/hw/irdma/osdep.h b/drivers/infiniband/hw/irdma/osdep.h index e1e3d3ae72b7..b41134b7ea5f 100644 --- a/drivers/infiniband/hw/irdma/osdep.h +++ 
b/drivers/infiniband/hw/irdma/osdep.h @@ -5,7 +5,9 @@ #include #include -#include +#include +#include + #include #include diff --git a/drivers/net/ethernet/intel/ice/devlink/devlink.c b/drivers/net/ethernet/intel/ice/devlink/devlink.c index 810a901d7afd..57faed00cd0f 100644 --- a/drivers/net/ethernet/intel/ice/devlink/devlink.c +++ b/drivers/net/ethernet/intel/ice/devlink/devlink.c @@ -1284,8 +1284,14 @@ ice_devlink_enable_roce_get(struct devlink *devlink, u32 id, struct devlink_param_gset_ctx *ctx) { struct ice_pf *pf = devlink_priv(devlink); + struct idc_rdma_core_dev_info *cdev; - ctx->val.vbool = pf->rdma_mode & IIDC_RDMA_PROTOCOL_ROCEV2 ? true : false; + cdev = pf->cdev_info; + if (!cdev) + return 0; + + ctx->val.vbool = cdev->rdma_protocol & IDC_RDMA_PROTOCOL_ROCEV2 ? + true : false; return 0; } @@ -1295,19 +1301,24 @@ static int ice_devlink_enable_roce_set(struct devlink *devlink, u32 id, struct netlink_ext_ack *extack) { struct ice_pf *pf = devlink_priv(devlink); + struct idc_rdma_core_dev_info *cdev; bool roce_ena = ctx->val.vbool; int ret; + cdev = pf->cdev_info; + if (!cdev) + return -EINVAL; + if (!roce_ena) { ice_unplug_aux_dev(pf); - pf->rdma_mode &= ~IIDC_RDMA_PROTOCOL_ROCEV2; + cdev->rdma_protocol &= ~IDC_RDMA_PROTOCOL_ROCEV2; return 0; } - pf->rdma_mode |= IIDC_RDMA_PROTOCOL_ROCEV2; + cdev->rdma_protocol |= IDC_RDMA_PROTOCOL_ROCEV2; ret = ice_plug_aux_dev(pf); if (ret) - pf->rdma_mode &= ~IIDC_RDMA_PROTOCOL_ROCEV2; + cdev->rdma_protocol &= ~IDC_RDMA_PROTOCOL_ROCEV2; return ret; } @@ -1318,11 +1329,16 @@ ice_devlink_enable_roce_validate(struct devlink *devlink, u32 id, struct netlink_ext_ack *extack) { struct ice_pf *pf = devlink_priv(devlink); + struct idc_rdma_core_dev_info *cdev; + + cdev = pf->cdev_info; + if (!cdev) + return -EINVAL; if (!test_bit(ICE_FLAG_RDMA_ENA, pf->flags)) return -EOPNOTSUPP; - if (pf->rdma_mode & IIDC_RDMA_PROTOCOL_IWARP) { + if (cdev->rdma_protocol & IDC_RDMA_PROTOCOL_IWARP) { NL_SET_ERR_MSG_MOD(extack, "iWARP is currently enabled. This device cannot enable iWARP and RoCEv2 simultaneously"); return -EOPNOTSUPP; } @@ -1336,7 +1352,8 @@ ice_devlink_enable_iw_get(struct devlink *devlink, u32 id, { struct ice_pf *pf = devlink_priv(devlink); - ctx->val.vbool = pf->rdma_mode & IIDC_RDMA_PROTOCOL_IWARP; + ctx->val.vbool = pf->cdev_info->rdma_protocol & + IDC_RDMA_PROTOCOL_IWARP; return 0; } @@ -1346,19 +1363,23 @@ static int ice_devlink_enable_iw_set(struct devlink *devlink, u32 id, struct netlink_ext_ack *extack) { struct ice_pf *pf = devlink_priv(devlink); + struct idc_rdma_core_dev_info *cdev; bool iw_ena = ctx->val.vbool; int ret; + cdev = pf->cdev_info; + if (!cdev) + return -EINVAL; if (!iw_ena) { ice_unplug_aux_dev(pf); - pf->rdma_mode &= ~IIDC_RDMA_PROTOCOL_IWARP; + cdev->rdma_protocol &= ~IDC_RDMA_PROTOCOL_IWARP; return 0; } - pf->rdma_mode |= IIDC_RDMA_PROTOCOL_IWARP; + cdev->rdma_protocol |= IDC_RDMA_PROTOCOL_IWARP; ret = ice_plug_aux_dev(pf); if (ret) - pf->rdma_mode &= ~IIDC_RDMA_PROTOCOL_IWARP; + cdev->rdma_protocol &= ~IDC_RDMA_PROTOCOL_IWARP; return ret; } @@ -1373,7 +1394,7 @@ ice_devlink_enable_iw_validate(struct devlink *devlink, u32 id, if (!test_bit(ICE_FLAG_RDMA_ENA, pf->flags)) return -EOPNOTSUPP; - if (pf->rdma_mode & IIDC_RDMA_PROTOCOL_ROCEV2) { + if (pf->cdev_info->rdma_protocol & IDC_RDMA_PROTOCOL_ROCEV2) { NL_SET_ERR_MSG_MOD(extack, "RoCEv2 is currently enabled. 
This device cannot enable iWARP and RoCEv2 simultaneously"); return -EOPNOTSUPP; } diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h index caaa10157909..9070452b3b33 100644 --- a/drivers/net/ethernet/intel/ice/ice.h +++ b/drivers/net/ethernet/intel/ice/ice.h @@ -405,7 +405,6 @@ struct ice_vsi { u16 req_rxq; /* User requested Rx queues */ u16 num_rx_desc; u16 num_tx_desc; - u16 qset_handle[ICE_MAX_TRAFFIC_CLASS]; struct ice_tc_cfg tc_cfg; struct bpf_prog *xdp_prog; struct ice_tx_ring **xdp_rings; /* XDP ring array */ @@ -550,7 +549,6 @@ struct ice_pf { struct devlink_port devlink_port; /* OS reserved IRQ details */ - struct msix_entry *msix_entries; struct ice_irq_tracker irq_tracker; /* First MSIX vector used by SR-IOV VFs. Calculated by subtracting the * number of MSIX vectors needed for all SR-IOV VFs from the number of @@ -591,7 +589,6 @@ struct ice_pf { struct gnss_serial *gnss_serial; struct gnss_device *gnss_dev; u16 num_rdma_msix; /* Total MSIX vectors for RDMA driver */ - u16 rdma_base_vector; /* spinlock to protect the AdminQ wait list */ spinlock_t aq_wait_lock; @@ -624,14 +621,12 @@ struct ice_pf { struct ice_hw_port_stats stats_prev; struct ice_hw hw; u8 stat_prev_loaded:1; /* has previous stats been loaded */ - u8 rdma_mode; u16 dcbx_cap; u32 tx_timeout_count; unsigned long tx_timeout_last_recovery; u32 tx_timeout_recovery_level; char int_name[ICE_INT_NAME_STR_LEN]; char int_name_ll_ts[ICE_INT_NAME_STR_LEN]; - struct auxiliary_device *adev; int aux_idx; u32 sw_int_count; /* count of tc_flower filters specific to channel (aka where filter @@ -659,6 +654,7 @@ struct ice_pf { struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES]; struct ice_dplls dplls; struct device *hwmon_dev; + struct idc_rdma_core_dev_info *cdev_info; }; extern struct workqueue_struct *ice_lag_wq; diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c index a94e7072b570..c85d86c0c9c5 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c @@ -352,7 +352,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked) struct ice_dcbx_cfg *old_cfg, *curr_cfg; struct device *dev = ice_pf_to_dev(pf); int ret = ICE_DCB_NO_HW_CHG; - struct iidc_event *event; + struct idc_rdma_event *event; struct ice_vsi *pf_vsi; curr_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg; @@ -404,7 +404,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked) goto free_cfg; } - set_bit(IIDC_EVENT_BEFORE_TC_CHANGE, event->type); + set_bit(IDC_RDMA_EVENT_BEFORE_TC_CHANGE, event->type); ice_send_event_to_aux(pf, event); kfree(event); @@ -739,7 +739,9 @@ static int ice_dcb_noncontig_cfg(struct ice_pf *pf) void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked) { struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg; - struct iidc_event *event; + struct iidc_rdma_priv_dev_info *privd; + struct idc_rdma_core_dev_info *cdev; + struct idc_rdma_event *event; u8 tc_map = 0; int v, ret; @@ -782,13 +784,17 @@ void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked) if (vsi->type == ICE_VSI_PF) ice_dcbnl_set_all(vsi); } - if (!locked) { + + cdev = pf->cdev_info; + if (cdev && !locked) { + privd = cdev->idc_priv; + ice_setup_dcb_qos_info(pf, &privd->qos_info); /* Notify the AUX drivers that TC change is finished */ event = kzalloc(sizeof(*event), GFP_KERNEL); if (!event) return; - set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type); + 
set_bit(IDC_RDMA_EVENT_AFTER_TC_CHANGE, event->type); ice_send_event_to_aux(pf, event); kfree(event); } @@ -943,6 +949,36 @@ ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, } } +/** + * ice_setup_dcb_qos_info - Setup DCB QoS information + * @pf: ptr to ice_pf + * @qos_info: QoS param instance + */ +void ice_setup_dcb_qos_info(struct ice_pf *pf, struct iidc_rdma_qos_params *qos_info) +{ + struct ice_dcbx_cfg *dcbx_cfg; + unsigned int i; + u32 up2tc; + + if (!pf || !qos_info) + return; + + dcbx_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg; + up2tc = rd32(&pf->hw, PRTDCB_TUP2TC); + + qos_info->num_tc = ice_dcb_get_num_tc(dcbx_cfg); + + for (i = 0; i < IIDC_MAX_USER_PRIORITY; i++) + qos_info->up2tc[i] = (up2tc >> (i * 3)) & 0x7; + + for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) + qos_info->tc_info[i].rel_bw = dcbx_cfg->etscfg.tcbwtable[i]; + + qos_info->pfc_mode = dcbx_cfg->pfc_mode; + for (i = 0; i < ICE_DSCP_NUM_VAL; i++) + qos_info->dscp_map[i] = dcbx_cfg->dscp_map[i]; +} + /** * ice_dcb_is_mib_change_pending - Check if MIB change is pending * @state: MIB change state diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h index 800879a88c5e..80efbf00a474 100644 --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h @@ -31,6 +31,8 @@ void ice_tx_prepare_vlan_flags_dcb(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first); void +ice_setup_dcb_qos_info(struct ice_pf *pf, struct iidc_rdma_qos_params *qos_info); +void ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf, struct ice_rq_event_info *event); /** @@ -134,5 +136,7 @@ static inline void ice_update_dcb_stats(struct ice_pf *pf) { } static inline void ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf, struct ice_rq_event_info *event) { } static inline void ice_set_cgd_num(struct ice_tlan_ctx *tlan_ctx, u8 dcb_tc) { } +static inline void +ice_setup_dcb_qos_info(struct ice_pf *pf, struct iidc_rdma_qos_params *qos_info) { } #endif /* CONFIG_DCB */ #endif /* _ICE_DCB_LIB_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 8c990c976132..ee5e88b7cb89 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -3992,11 +3992,11 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch) return -EINVAL; } - if (pf->adev) { + if (pf->cdev_info->adev) { mutex_lock(&pf->adev_mutex); - device_lock(&pf->adev->dev); + device_lock(&pf->cdev_info->adev->dev); locked = true; - if (pf->adev->dev.driver) { + if (pf->cdev_info->adev->dev.driver) { netdev_err(dev, "Cannot change channels when RDMA is active\n"); ret = -EBUSY; goto adev_unlock; @@ -4015,7 +4015,7 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch) adev_unlock: if (locked) { - device_unlock(&pf->adev->dev); + device_unlock(&pf->cdev_info->adev->dev); mutex_unlock(&pf->adev_mutex); } return ret; diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c index 145b27f2a4ce..3454b91afcaa 100644 --- a/drivers/net/ethernet/intel/ice/ice_idc.c +++ b/drivers/net/ethernet/intel/ice/ice_idc.c @@ -9,21 +9,22 @@ static DEFINE_XARRAY_ALLOC1(ice_aux_id); /** - * ice_get_auxiliary_drv - retrieve iidc_auxiliary_drv struct - * @pf: pointer to PF struct + * ice_get_auxiliary_drv - retrieve iidc_rdma_core_auxiliary_drv struct + * @cdev: pointer to iidc_rdma_core_dev_info struct * * This 
function has to be called with a device_lock on the - * pf->adev.dev to avoid race conditions. + * cdev->adev.dev to avoid race conditions. */ -static struct iidc_auxiliary_drv *ice_get_auxiliary_drv(struct ice_pf *pf) +static struct idc_rdma_core_auxiliary_drv +*ice_get_auxiliary_drv(struct idc_rdma_core_dev_info *cdev) { struct auxiliary_device *adev; - adev = pf->adev; + adev = cdev->adev; if (!adev || !adev->dev.driver) return NULL; - return container_of(adev->dev.driver, struct iidc_auxiliary_drv, + return container_of(adev->dev.driver, struct idc_rdma_core_auxiliary_drv, adrv.driver); } @@ -32,44 +33,52 @@ static struct iidc_auxiliary_drv *ice_get_auxiliary_drv(struct ice_pf *pf) * @pf: pointer to PF struct * @event: event struct */ -void ice_send_event_to_aux(struct ice_pf *pf, struct iidc_event *event) +void ice_send_event_to_aux(struct ice_pf *pf, struct idc_rdma_event *event) { - struct iidc_auxiliary_drv *iadrv; + struct idc_rdma_core_auxiliary_drv *iadrv; + struct idc_rdma_core_dev_info *cdev; if (WARN_ON_ONCE(!in_task())) return; + cdev = pf->cdev_info; + if (!cdev) + return; + mutex_lock(&pf->adev_mutex); - if (!pf->adev) + if (!cdev->adev) goto finish; - device_lock(&pf->adev->dev); - iadrv = ice_get_auxiliary_drv(pf); + device_lock(&cdev->adev->dev); + iadrv = ice_get_auxiliary_drv(cdev); if (iadrv && iadrv->event_handler) - iadrv->event_handler(pf, event); - device_unlock(&pf->adev->dev); + iadrv->event_handler(cdev, event); + device_unlock(&cdev->adev->dev); finish: mutex_unlock(&pf->adev_mutex); } /** * ice_add_rdma_qset - Add Leaf Node for RDMA Qset - * @pf: PF struct + * @cdev: pointer to iidc_rdma_core_dev_info struct * @qset: Resource to be allocated */ -int ice_add_rdma_qset(struct ice_pf *pf, struct iidc_rdma_qset_params *qset) +static int ice_add_rdma_qset(struct idc_rdma_core_dev_info *cdev, + struct iidc_rdma_qset_params *qset) { u16 max_rdmaqs[ICE_MAX_TRAFFIC_CLASS]; struct ice_vsi *vsi; struct device *dev; + struct ice_pf *pf; u32 qset_teid; u16 qs_handle; int status; int i; - if (WARN_ON(!pf || !qset)) + if (WARN_ON(!cdev || !qset)) return -EINVAL; + pf = pci_get_drvdata(cdev->pdev); dev = ice_pf_to_dev(pf); if (!ice_is_rdma_ena(pf)) @@ -100,27 +109,28 @@ int ice_add_rdma_qset(struct ice_pf *pf, struct iidc_rdma_qset_params *qset) dev_err(dev, "Failed VSI RDMA Qset enable\n"); return status; } - vsi->qset_handle[qset->tc] = qset->qs_handle; qset->teid = qset_teid; return 0; } -EXPORT_SYMBOL_GPL(ice_add_rdma_qset); /** * ice_del_rdma_qset - Delete leaf node for RDMA Qset - * @pf: PF struct + * @cdev: pointer to iidc_rdma_core_dev_info struct * @qset: Resource to be freed */ -int ice_del_rdma_qset(struct ice_pf *pf, struct iidc_rdma_qset_params *qset) +static int ice_del_rdma_qset(struct idc_rdma_core_dev_info *cdev, + struct iidc_rdma_qset_params *qset) { struct ice_vsi *vsi; + struct ice_pf *pf; u32 teid; u16 q_id; - if (WARN_ON(!pf || !qset)) + if (WARN_ON(!cdev || !qset)) return -EINVAL; + pf = pci_get_drvdata(cdev->pdev); vsi = ice_find_vsi(pf, qset->vport_id); if (!vsi) { dev_err(ice_pf_to_dev(pf), "RDMA Invalid VSI\n"); @@ -130,57 +140,56 @@ int ice_del_rdma_qset(struct ice_pf *pf, struct iidc_rdma_qset_params *qset) q_id = qset->qs_handle; teid = qset->teid; - vsi->qset_handle[qset->tc] = 0; - return ice_dis_vsi_rdma_qset(vsi->port_info, 1, &teid, &q_id); } -EXPORT_SYMBOL_GPL(ice_del_rdma_qset); /** * ice_rdma_request_reset - accept request from RDMA to perform a reset - * @pf: struct for PF + * @cdev: pointer to iidc_rdma_core_dev_info struct * 
@reset_type: type of reset */ -int ice_rdma_request_reset(struct ice_pf *pf, enum iidc_reset_type reset_type) +static int ice_rdma_request_reset(struct idc_rdma_core_dev_info *cdev, + enum idc_rdma_reset_type reset_type) { enum ice_reset_req reset; + struct ice_pf *pf; - if (WARN_ON(!pf)) + if (WARN_ON(!cdev)) return -EINVAL; + pf = pci_get_drvdata(cdev->pdev); + switch (reset_type) { - case IIDC_PFR: + case IDC_FUNC_RESET: reset = ICE_RESET_PFR; break; - case IIDC_CORER: + case IDC_DEV_RESET: reset = ICE_RESET_CORER; break; - case IIDC_GLOBR: - reset = ICE_RESET_GLOBR; - break; default: - dev_err(ice_pf_to_dev(pf), "incorrect reset request\n"); return -EINVAL; } return ice_schedule_reset(pf, reset); } -EXPORT_SYMBOL_GPL(ice_rdma_request_reset); /** * ice_rdma_update_vsi_filter - update main VSI filters for RDMA - * @pf: pointer to struct for PF + * @cdev: pointer to iidc_rdma_core_dev_info struct * @vsi_id: VSI HW idx to update filter on * @enable: bool whether to enable or disable filters */ -int ice_rdma_update_vsi_filter(struct ice_pf *pf, u16 vsi_id, bool enable) +static int ice_rdma_update_vsi_filter(struct idc_rdma_core_dev_info *cdev, + u16 vsi_id, bool enable) { struct ice_vsi *vsi; + struct ice_pf *pf; int status; - if (WARN_ON(!pf)) + if (WARN_ON(!cdev)) return -EINVAL; + pf = pci_get_drvdata(cdev->pdev); vsi = ice_find_vsi(pf, vsi_id); if (!vsi) return -EINVAL; @@ -198,35 +207,6 @@ int ice_rdma_update_vsi_filter(struct ice_pf *pf, u16 vsi_id, bool enable) return status; } -EXPORT_SYMBOL_GPL(ice_rdma_update_vsi_filter); - -/** - * ice_get_qos_params - parse QoS params for RDMA consumption - * @pf: pointer to PF struct - * @qos: set of QoS values - */ -void ice_get_qos_params(struct ice_pf *pf, struct iidc_qos_params *qos) -{ - struct ice_dcbx_cfg *dcbx_cfg; - unsigned int i; - u32 up2tc; - - dcbx_cfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg; - up2tc = rd32(&pf->hw, PRTDCB_TUP2TC); - - qos->num_tc = ice_dcb_get_num_tc(dcbx_cfg); - for (i = 0; i < IIDC_MAX_USER_PRIORITY; i++) - qos->up2tc[i] = (up2tc >> (i * 3)) & 0x7; - - for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) - qos->tc_info[i].rel_bw = dcbx_cfg->etscfg.tcbwtable[i]; - - qos->pfc_mode = dcbx_cfg->pfc_mode; - if (qos->pfc_mode == IIDC_DSCP_PFC_MODE) - for (i = 0; i < IIDC_MAX_DSCP_MAPPING; i++) - qos->dscp_map[i] = dcbx_cfg->dscp_map[i]; -} -EXPORT_SYMBOL_GPL(ice_get_qos_params); /** * ice_alloc_rdma_qvectors - Allocate vector resources for RDMA driver @@ -234,22 +214,26 @@ EXPORT_SYMBOL_GPL(ice_get_qos_params); */ static int ice_alloc_rdma_qvectors(struct ice_pf *pf) { + struct idc_rdma_core_dev_info *cdev; + + cdev = pf->cdev_info; + if (!cdev) + return -EINVAL; + if (ice_is_rdma_ena(pf)) { int i; - pf->msix_entries = kcalloc(pf->num_rdma_msix, - sizeof(*pf->msix_entries), - GFP_KERNEL); - if (!pf->msix_entries) + cdev->msix_entries = kcalloc(pf->num_rdma_msix, + sizeof(*cdev->msix_entries), + GFP_KERNEL); + if (!cdev->msix_entries) return -ENOMEM; - /* RDMA is the only user of pf->msix_entries array */ - pf->rdma_base_vector = 0; - for (i = 0; i < pf->num_rdma_msix; i++) { - struct msix_entry *entry = &pf->msix_entries[i]; + struct msix_entry *entry; struct msi_map map; + entry = &cdev->msix_entries[i]; map = ice_alloc_irq(pf, false); if (map.index < 0) break; @@ -267,32 +251,49 @@ static int ice_alloc_rdma_qvectors(struct ice_pf *pf) */ static void ice_free_rdma_qvector(struct ice_pf *pf) { + struct idc_rdma_core_dev_info *cdev; int i; - if (!pf->msix_entries) + cdev = pf->cdev_info; + if (!cdev) + return; + + if 
(!cdev->msix_entries) return; for (i = 0; i < pf->num_rdma_msix; i++) { struct msi_map map; - map.index = pf->msix_entries[i].entry; - map.virq = pf->msix_entries[i].vector; + map.index = cdev->msix_entries[i].entry; + map.virq = cdev->msix_entries[i].vector; ice_free_irq(pf, map); } - kfree(pf->msix_entries); - pf->msix_entries = NULL; + kfree(cdev->msix_entries); + cdev->msix_entries = NULL; } +/* Initialize the ice_ops struct, which is used in 'ice_init_rdma' */ +static const struct idc_rdma_core_ops idc_c_ops = { + .vc_send_sync = NULL, + .request_reset = ice_rdma_request_reset, +}; + +static const struct iidc_rdma_priv_ops iidc_p_ops = { + .alloc_res = ice_add_rdma_qset, + .free_res = ice_del_rdma_qset, + .update_vport_filter = ice_rdma_update_vsi_filter, +}; + /** * ice_adev_release - function to be mapped to AUX dev's release op * @dev: pointer to device to free */ static void ice_adev_release(struct device *dev) { - struct iidc_auxiliary_dev *iadev; + struct idc_rdma_core_auxiliary_dev *iadev; - iadev = container_of(dev, struct iidc_auxiliary_dev, adev.dev); + iadev = container_of(dev, struct idc_rdma_core_auxiliary_dev, adev.dev); kfree(iadev); } @@ -302,7 +303,8 @@ static void ice_adev_release(struct device *dev) */ int ice_plug_aux_dev(struct ice_pf *pf) { - struct iidc_auxiliary_dev *iadev; + struct idc_rdma_core_auxiliary_dev *iadev; + struct idc_rdma_core_dev_info *cdev; struct auxiliary_device *adev; int ret; @@ -312,17 +314,22 @@ int ice_plug_aux_dev(struct ice_pf *pf) if (!ice_is_rdma_ena(pf)) return 0; + cdev = pf->cdev_info; + if (!cdev) + return -EINVAL; + iadev = kzalloc(sizeof(*iadev), GFP_KERNEL); if (!iadev) return -ENOMEM; adev = &iadev->adev; - iadev->pf = pf; + iadev->cdev_info = cdev; adev->id = pf->aux_idx; adev->dev.release = ice_adev_release; adev->dev.parent = &pf->pdev->dev; - adev->name = pf->rdma_mode & IIDC_RDMA_PROTOCOL_ROCEV2 ? "roce" : "iwarp"; + adev->name = cdev->rdma_protocol & IDC_RDMA_PROTOCOL_ROCEV2 ? + "roce" : "iwarp"; ret = auxiliary_device_init(adev); if (ret) { @@ -337,7 +344,7 @@ int ice_plug_aux_dev(struct ice_pf *pf) } mutex_lock(&pf->adev_mutex); - pf->adev = adev; + cdev->adev = adev; mutex_unlock(&pf->adev_mutex); return 0; @@ -351,8 +358,8 @@ void ice_unplug_aux_dev(struct ice_pf *pf) struct auxiliary_device *adev; mutex_lock(&pf->adev_mutex); - adev = pf->adev; - pf->adev = NULL; + adev = pf->cdev_info->adev; + pf->cdev_info->adev = NULL; mutex_unlock(&pf->adev_mutex); if (adev) { @@ -361,6 +368,30 @@ void ice_unplug_aux_dev(struct ice_pf *pf) } } +/** + * ice_init_rdma_qos_info - initialize qos_info for RDMA aux + * @pf: pointer to ice_pf + * @qos_info: pointer to qos_info struct + */ +static void +ice_init_rdma_qos_info(struct ice_pf *pf, struct iidc_rdma_qos_params *qos_info) +{ + int j; + + /* setup qos_info fields with defaults */ + qos_info->num_tc = 1; + + for (j = 0; j < IIDC_MAX_USER_PRIORITY; j++) + qos_info->up2tc[j] = 0; + + qos_info->tc_info[0].rel_bw = 100; + for (j = 1; j < IEEE_8021QAZ_MAX_TCS; j++) + qos_info->tc_info[j].rel_bw = 0; + + /* for DCB, override the qos_info defaults. 
*/ + ice_setup_dcb_qos_info(pf, qos_info); +} + /** * ice_init_rdma - initializes PF for RDMA use * @pf: ptr to ice_pf @@ -368,6 +399,8 @@ void ice_unplug_aux_dev(struct ice_pf *pf) int ice_init_rdma(struct ice_pf *pf) { struct device *dev = &pf->pdev->dev; + struct iidc_rdma_priv_dev_info *privd; + struct idc_rdma_core_dev_info *cdev; int ret; if (!ice_is_rdma_ena(pf)) { @@ -375,20 +408,46 @@ int ice_init_rdma(struct ice_pf *pf) return 0; } + cdev = kzalloc(sizeof(*cdev), GFP_KERNEL); + if (!cdev) + return -ENOMEM; + + pf->cdev_info = cdev; + + privd = kzalloc(sizeof(*privd), GFP_KERNEL); + if (!privd) { + ret = -ENOMEM; + goto err_privd_alloc; + } + + privd->pf_id = pf->hw.pf_id; ret = xa_alloc(&ice_aux_id, &pf->aux_idx, NULL, XA_LIMIT(1, INT_MAX), GFP_KERNEL); if (ret) { dev_err(dev, "Failed to allocate device ID for AUX driver\n"); - return -ENOMEM; + ret = -ENOMEM; + goto err_alloc_xa; } + cdev->ops = &idc_c_ops; + cdev->idc_priv = privd; + privd->priv_ops = &iidc_p_ops; + privd->netdev = pf->vsi[0]->netdev; + + cdev->msix_count = pf->num_rdma_msix; + privd->hw_addr = (u8 __iomem *)pf->hw.hw_addr; + cdev->pdev = pf->pdev; + privd->vport_id = pf->vsi[0]->vsi_num; + /* Reserve vector resources */ ret = ice_alloc_rdma_qvectors(pf); if (ret < 0) { dev_err(dev, "failed to reserve vectors for RDMA\n"); goto err_reserve_rdma_qvector; } - pf->rdma_mode |= IIDC_RDMA_PROTOCOL_ROCEV2; + + pf->cdev_info->rdma_protocol |= IDC_RDMA_PROTOCOL_ROCEV2; + ice_init_rdma_qos_info(pf, &privd->qos_info); ret = ice_plug_aux_dev(pf); if (ret) goto err_plug_aux_dev; @@ -397,8 +456,14 @@ int ice_init_rdma(struct ice_pf *pf) err_plug_aux_dev: ice_free_rdma_qvector(pf); err_reserve_rdma_qvector: - pf->adev = NULL; + pf->cdev_info->adev = NULL; xa_erase(&ice_aux_id, pf->aux_idx); +err_alloc_xa: + kfree(privd); +err_privd_alloc: + kfree(cdev); + pf->cdev_info = NULL; + return ret; } diff --git a/drivers/net/ethernet/intel/ice/ice_idc_int.h b/drivers/net/ethernet/intel/ice/ice_idc_int.h index 4b0c86757df9..8a1e9ed9b103 100644 --- a/drivers/net/ethernet/intel/ice/ice_idc_int.h +++ b/drivers/net/ethernet/intel/ice/ice_idc_int.h @@ -4,10 +4,11 @@ #ifndef _ICE_IDC_INT_H_ #define _ICE_IDC_INT_H_ -#include +#include +#include struct ice_pf; -void ice_send_event_to_aux(struct ice_pf *pf, struct iidc_event *event); +void ice_send_event_to_aux(struct ice_pf *pf, struct idc_rdma_event *event); #endif /* !_ICE_IDC_INT_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 3de020020bc4..ed2b0a6be8d0 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -2380,11 +2380,11 @@ static void ice_service_task(struct work_struct *work) } if (test_and_clear_bit(ICE_AUX_ERR_PENDING, pf->state)) { - struct iidc_event *event; + struct idc_rdma_event *event; event = kzalloc(sizeof(*event), GFP_KERNEL); if (event) { - set_bit(IIDC_EVENT_CRIT_ERR, event->type); + set_bit(IDC_RDMA_EVENT_CRIT_ERR, event->type); /* report the entire OICR value to AUX driver */ swap(event->reg, pf->oicr_err_reg); ice_send_event_to_aux(pf, event); @@ -2403,11 +2403,11 @@ static void ice_service_task(struct work_struct *work) ice_plug_aux_dev(pf); if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) { - struct iidc_event *event; + struct idc_rdma_event *event; event = kzalloc(sizeof(*event), GFP_KERNEL); if (event) { - set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type); + set_bit(IDC_RDMA_EVENT_AFTER_MTU_CHANGE, event->type); ice_send_event_to_aux(pf, 
event); kfree(event); } @@ -9258,6 +9258,7 @@ ice_setup_tc(struct net_device *netdev, enum tc_setup_type type, { struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_pf *pf = np->vsi->back; + struct idc_rdma_core_dev_info *cdev; bool locked = false; int err; @@ -9273,11 +9274,12 @@ ice_setup_tc(struct net_device *netdev, enum tc_setup_type type, return -EOPNOTSUPP; } - if (pf->adev) { + cdev = pf->cdev_info; + if (cdev && cdev->adev) { mutex_lock(&pf->adev_mutex); - device_lock(&pf->adev->dev); + device_lock(&cdev->adev->dev); locked = true; - if (pf->adev->dev.driver) { + if (cdev->adev->dev.driver) { netdev_err(netdev, "Cannot change qdisc when RDMA is active\n"); err = -EBUSY; goto adev_unlock; @@ -9291,7 +9293,7 @@ ice_setup_tc(struct net_device *netdev, enum tc_setup_type type, adev_unlock: if (locked) { - device_unlock(&pf->adev->dev); + device_unlock(&cdev->adev->dev); mutex_unlock(&pf->adev_mutex); } return err; diff --git a/include/linux/net/intel/idc_rdma.h b/include/linux/net/intel/idc_rdma.h new file mode 100644 index 000000000000..5c31c6d1cc8a --- /dev/null +++ b/include/linux/net/intel/idc_rdma.h @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021, Intel Corporation. */ + +#ifndef _IDC_RDMA_H_ +#define _IDC_RDMA_H_ + +#include +#include +#include +#include +#include + +#define IDC_RDMA_ROCE_NAME "roce" +#define IDC_RDMA_IWARP_NAME "iwarp" + +enum idc_rdma_reset_type { + IDC_FUNC_RESET, + IDC_DEV_RESET, +}; + +enum idc_rdma_event_type { + IDC_RDMA_EVENT_BEFORE_MTU_CHANGE, + IDC_RDMA_EVENT_AFTER_MTU_CHANGE, + IDC_RDMA_EVENT_BEFORE_TC_CHANGE, + IDC_RDMA_EVENT_AFTER_TC_CHANGE, + IDC_RDMA_EVENT_WARN_RESET, + IDC_RDMA_EVENT_CRIT_ERR, + IDC_RDMA_EVENT_NBITS, /* must be last */ +}; + +struct idc_rdma_event { + DECLARE_BITMAP(type, IDC_RDMA_EVENT_NBITS); + u32 reg; +}; + +enum idc_rdma_protocol { + IDC_RDMA_PROTOCOL_IWARP = BIT(0), + IDC_RDMA_PROTOCOL_ROCEV2 = BIT(1), +}; + +struct idc_rdma_qv_info { + u32 v_idx; + u16 ceq_idx; + u16 aeq_idx; + u8 itr_idx; +}; + +struct idc_rdma_qvlist_info { + u32 num_vectors; + struct idc_rdma_qv_info qv_info[]; +}; + +struct idc_rdma_core_dev_info; + +/* Following APIs are implemented by core PCI driver */ +struct idc_rdma_core_ops { + int (*vc_send_sync)(struct idc_rdma_core_dev_info *cdev_info, u8 *msg, + u16 len, u8 *recv_msg, u16 *recv_len); + int (*vc_queue_vec_map_unmap)(struct idc_rdma_core_dev_info *cdev_info, + struct idc_rdma_qvlist_info *qvl_info, + bool map); + /* vport_dev_ctrl is for RDMA CORE driver to indicate it is either ready + * for individual vport aux devices, or it is leaving the state where it + * can support vports and they need to be downed + */ + int (*vport_dev_ctrl)(struct idc_rdma_core_dev_info *cdev_info, + bool up); + int (*request_reset)(struct idc_rdma_core_dev_info *cdev_info, + enum idc_rdma_reset_type reset_type); +}; + +enum idc_function_type { + IDC_FUNCTION_TYPE_PF, + IDC_FUNCTION_TYPE_VF, +}; + +struct idc_rdma_lan_mapped_mem_region { + u8 __iomem *region_addr; + __le64 size; + __le64 start_offset; +}; + +/* struct to be populated by core LAN PCI driver */ +struct idc_rdma_core_dev_info { + struct pci_dev *pdev; /* PCI device of corresponding to main function */ + struct auxiliary_device *adev; + struct idc_rdma_lan_mapped_mem_region *mapped_mem_regions; + __le16 num_memory_regions; + /* Current active RDMA protocol */ + enum idc_rdma_protocol rdma_protocol; + enum idc_function_type ftype; + struct msix_entry *msix_entries; + u16 msix_count; /* How many vectors are 
reserved for this device */ + /* Following struct contains function pointers to be initialized + * by core PCI driver and called by auxiliary driver + */ + const struct idc_rdma_core_ops *ops; + void *idc_priv; +}; + +struct idc_rdma_core_auxiliary_dev { + struct auxiliary_device adev; + struct idc_rdma_core_dev_info *cdev_info; +}; + +/* struct to be populated by core LAN PCI driver */ +struct idc_rdma_vport_dev_info { + struct auxiliary_device *adev; + struct auxiliary_device *core_adev; + struct net_device *netdev; + u16 vport_id; +}; + +struct idc_rdma_vport_auxiliary_dev { + struct auxiliary_device adev; + struct idc_rdma_vport_dev_info *vdev_info; +}; + +/* structures representing the auxiliary drivers. These structs are to be + * allocated and populated by the auxiliary drivers' owner. The core PCI + * driver will access these ops by performing a container_of on the + * auxiliary_device->dev.driver. + */ +struct idc_rdma_core_auxiliary_drv { + struct auxiliary_driver adrv; + void (*event_handler)(struct idc_rdma_core_dev_info *cdev, + struct idc_rdma_event *event); + int (*vc_receive)(struct idc_rdma_core_dev_info *cdev_info, u8 *msg, + u16 len); +}; + +struct idc_rdma_vport_auxiliary_drv { + struct auxiliary_driver adrv; + void (*event_handler)(struct idc_rdma_vport_dev_info *vdev, + struct idc_rdma_event *event); +}; + +#endif /* _IDC_RDMA_H_*/ diff --git a/include/linux/net/intel/iidc.h b/include/linux/net/intel/iidc.h deleted file mode 100644 index 1c1332e4df26..000000000000 --- a/include/linux/net/intel/iidc.h +++ /dev/null @@ -1,107 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* Copyright (C) 2021, Intel Corporation. */ - -#ifndef _IIDC_H_ -#define _IIDC_H_ - -#include -#include -#include -#include -#include -#include - -enum iidc_event_type { - IIDC_EVENT_BEFORE_MTU_CHANGE, - IIDC_EVENT_AFTER_MTU_CHANGE, - IIDC_EVENT_BEFORE_TC_CHANGE, - IIDC_EVENT_AFTER_TC_CHANGE, - IIDC_EVENT_CRIT_ERR, - IIDC_EVENT_NBITS /* must be last */ -}; - -enum iidc_reset_type { - IIDC_PFR, - IIDC_CORER, - IIDC_GLOBR, -}; - -enum iidc_rdma_protocol { - IIDC_RDMA_PROTOCOL_IWARP = BIT(0), - IIDC_RDMA_PROTOCOL_ROCEV2 = BIT(1), -}; - -#define IIDC_MAX_USER_PRIORITY 8 -#define IIDC_MAX_DSCP_MAPPING 64 -#define IIDC_DSCP_PFC_MODE 0x1 - -/* Struct to hold per RDMA Qset info */ -struct iidc_rdma_qset_params { - /* Qset TEID returned to the RDMA driver in - * ice_add_rdma_qset and used by RDMA driver - * for calls to ice_del_rdma_qset - */ - u32 teid; /* Qset TEID */ - u16 qs_handle; /* RDMA driver provides this */ - u16 vport_id; /* VSI index */ - u8 tc; /* TC branch the Qset should belong to */ -}; - -struct iidc_qos_info { - u64 tc_ctx; - u8 rel_bw; - u8 prio_type; - u8 egress_virt_up; - u8 ingress_virt_up; -}; - -/* Struct to pass QoS info */ -struct iidc_qos_params { - struct iidc_qos_info tc_info[IEEE_8021QAZ_MAX_TCS]; - u8 up2tc[IIDC_MAX_USER_PRIORITY]; - u8 vport_relative_bw; - u8 vport_priority_type; - u8 num_tc; - u8 pfc_mode; - u8 dscp_map[IIDC_MAX_DSCP_MAPPING]; -}; - -struct iidc_event { - DECLARE_BITMAP(type, IIDC_EVENT_NBITS); - u32 reg; -}; - -struct ice_pf; - -int ice_add_rdma_qset(struct ice_pf *pf, struct iidc_rdma_qset_params *qset); -int ice_del_rdma_qset(struct ice_pf *pf, struct iidc_rdma_qset_params *qset); -int ice_rdma_request_reset(struct ice_pf *pf, enum iidc_reset_type reset_type); -int ice_rdma_update_vsi_filter(struct ice_pf *pf, u16 vsi_id, bool enable); -void ice_get_qos_params(struct ice_pf *pf, struct iidc_qos_params *qos); - -/* Structure representing auxiliary driver 
tailored information about the core - * PCI dev, each auxiliary driver using the IIDC interface will have an - * instance of this struct dedicated to it. - */ - -struct iidc_auxiliary_dev { - struct auxiliary_device adev; - struct ice_pf *pf; -}; - -/* structure representing the auxiliary driver. This struct is to be - * allocated and populated by the auxiliary driver's owner. The core PCI - * driver will access these ops by performing a container_of on the - * auxiliary_device->dev.driver. - */ -struct iidc_auxiliary_drv { - struct auxiliary_driver adrv; - /* This event_handler is meant to be a blocking call. For instance, - * when a BEFORE_MTU_CHANGE event comes in, the event_handler will not - * return until the auxiliary driver is ready for the MTU change to - * happen. - */ - void (*event_handler)(struct ice_pf *pf, struct iidc_event *event); -}; - -#endif /* _IIDC_H_*/ diff --git a/include/linux/net/intel/iidc_rdma.h b/include/linux/net/intel/iidc_rdma.h new file mode 100644 index 000000000000..2e30b04a8a75 --- /dev/null +++ b/include/linux/net/intel/iidc_rdma.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (C) 2021, Intel Corporation. */ + +#ifndef _IIDC_RDMA_H_ +#define _IIDC_RDMA_H_ + +#include + +#define IIDC_MAX_USER_PRIORITY 8 +#define IIDC_MAX_DSCP_MAPPING 64 +#define IIDC_DSCP_PFC_MODE 0x1 + +/* Struct to hold per RDMA Qset info */ +struct iidc_rdma_qset_params { + /* Qset TEID returned to the RDMA driver in + * ice_add_rdma_qset and used by RDMA driver + * for calls to ice_del_rdma_qset + */ + u32 teid; /* Qset TEID */ + u16 qs_handle; /* RDMA driver provides this */ + u16 vport_id; /* VSI index */ + u8 tc; /* TC branch the Qset should belong to */ +}; + +struct iidc_rdma_qos_info { + u64 tc_ctx; + u8 rel_bw; + u8 prio_type; + u8 egress_virt_up; + u8 ingress_virt_up; +}; + +/* Struct to pass QoS info */ +struct iidc_rdma_qos_params { + struct iidc_rdma_qos_info tc_info[IEEE_8021QAZ_MAX_TCS]; + u8 up2tc[IIDC_MAX_USER_PRIORITY]; + u8 vport_relative_bw; + u8 vport_priority_type; + u8 num_tc; + u8 pfc_mode; + u8 dscp_map[IIDC_MAX_DSCP_MAPPING]; +}; + +struct iidc_rdma_priv_ops { + int (*alloc_res)(struct idc_rdma_core_dev_info *cdev_info, + struct iidc_rdma_qset_params *qset); + int (*free_res)(struct idc_rdma_core_dev_info *cdev_info, + struct iidc_rdma_qset_params *qset); + int (*update_vport_filter)(struct idc_rdma_core_dev_info *cdev_info, + u16 vport_id, bool enable); +}; + +struct iidc_rdma_priv_dev_info { + u8 pf_id; + u16 vport_id; + struct net_device *netdev; + struct iidc_rdma_qos_params qos_info; + const struct iidc_rdma_priv_ops *priv_ops; + u8 __iomem *hw_addr; +}; +#endif /* _IDC_RDMA_H_*/ From patchwork Sat Aug 24 03:19:01 2024 X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776190 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 02/25] idpf: use reserved rdma vectors from control plane Date: Fri, 23 Aug 2024 22:19:01 -0500 Message-Id: <20240824031924.421-3-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> X-Mailing-List: linux-rdma@vger.kernel.org From: Joshua Hay Fetch the number of reserved rdma vectors from the control plane. Adjust the number of reserved lan vectors if necessary. Adjust the minimum number of vectors the OS should reserve to include rdma; and fail if the OS cannot reserve enough vectors for the minimum number of lan and rdma vectors required. Create a separate msix table for the reserved rdma vectors, which will just get handed off to the rdma core device to do with what it will. v2: * Use default minimum if control plane does not provide any reserved rdma vecs. Otherwise, if control plane does not provide enough reserved rdma vecs, fail.
* Use the actual number of vectors when getting vecids. Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf.h | 24 ++++++- drivers/net/ethernet/intel/idpf/idpf_lib.c | 76 +++++++++++++++++---- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 + drivers/net/ethernet/intel/idpf/virtchnl2.h | 5 +- 4 files changed, 88 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 2c31ad87587a..85503fb5dd69 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -489,10 +489,11 @@ struct idpf_vc_xn_manager; * @flags: See enum idpf_flags * @reset_reg: See struct idpf_reset_reg * @hw: Device access data - * @num_req_msix: Requested number of MSIX vectors * @num_avail_msix: Available number of MSIX vectors * @num_msix_entries: Number of entries in MSIX table * @msix_entries: MSIX table + * @num_rdma_msix_entries: Available number of MSIX vectors for RDMA + * @rdma_msix_entries: RDMA MSIX table * @req_vec_chunks: Requested vector chunk data * @mb_vector: Mailbox vector data * @vector_stack: Stack to store the msix vector indexes @@ -542,10 +543,11 @@ struct idpf_adapter { DECLARE_BITMAP(flags, IDPF_FLAGS_NBITS); struct idpf_reset_reg reset_reg; struct idpf_hw hw; - u16 num_req_msix; u16 num_avail_msix; u16 num_msix_entries; struct msix_entry *msix_entries; + u16 num_rdma_msix_entries; + struct msix_entry *rdma_msix_entries; struct virtchnl2_alloc_vectors *req_vec_chunks; struct idpf_q_vector mb_vector; struct idpf_vector_lifo vector_stack; @@ -609,6 +611,15 @@ static inline int idpf_is_queue_model_split(u16 q_model) bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all, enum idpf_cap_field field, u64 flag); +/** + * idpf_is_rdma_cap_ena - Determine if RDMA is supported + * @adapter: private data struct + */ +static inline bool idpf_is_rdma_cap_ena(struct idpf_adapter *adapter) +{ + return idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_RDMA); +} + #define IDPF_CAP_RSS (\ VIRTCHNL2_CAP_RSS_IPV4_TCP |\ VIRTCHNL2_CAP_RSS_IPV4_TCP |\ @@ -663,6 +674,15 @@ static inline u16 idpf_get_reserved_vecs(struct idpf_adapter *adapter) return le16_to_cpu(adapter->caps.num_allocated_vectors); } +/** + * idpf_get_reserved_rdma_vecs - Get reserved RDMA vectors + * @adapter: private data struct + */ +static inline u16 idpf_get_reserved_rdma_vecs(struct idpf_adapter *adapter) +{ + return le16_to_cpu(adapter->caps.num_rdma_allocated_vectors); +} + /** * idpf_get_default_vports - Get default number of vports * @adapter: private data struct diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 5dbf2b4ba1b0..e985f27051de 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -87,6 +87,8 @@ void idpf_intr_rel(struct idpf_adapter *adapter) idpf_deinit_vector_stack(adapter); kfree(adapter->msix_entries); adapter->msix_entries = NULL; + kfree(adapter->rdma_msix_entries); + adapter->rdma_msix_entries = NULL; } /** @@ -314,13 +316,33 @@ int idpf_req_rel_vector_indexes(struct idpf_adapter *adapter, */ int idpf_intr_req(struct idpf_adapter *adapter) { + u16 num_lan_vecs, min_lan_vecs, num_rdma_vecs = 0, min_rdma_vecs = 0; u16 default_vports = idpf_get_default_vports(adapter); int num_q_vecs, total_vecs, num_vec_ids; int min_vectors, v_actual, err; unsigned int vector; u16 *vecids; + int i; total_vecs = idpf_get_reserved_vecs(adapter); + num_lan_vecs = 
total_vecs; + if (idpf_is_rdma_cap_ena(adapter)) { + num_rdma_vecs = idpf_get_reserved_rdma_vecs(adapter); + min_rdma_vecs = IDPF_MIN_RDMA_VEC; + + if (!num_rdma_vecs) { + /* If idpf_get_reserved_rdma_vecs is 0, vectors are + * pulled from the LAN pool. + */ + num_rdma_vecs = min_rdma_vecs; + } else if (num_rdma_vecs < min_rdma_vecs) { + dev_err(&adapter->pdev->dev, + "Not enough vectors reserved for rdma (min: %u, current: %u)\n", + min_rdma_vecs, num_rdma_vecs); + return -EINVAL; + } + } + num_q_vecs = total_vecs - IDPF_MBX_Q_VEC; err = idpf_send_alloc_vectors_msg(adapter, num_q_vecs); @@ -331,27 +353,44 @@ int idpf_intr_req(struct idpf_adapter *adapter) return -EAGAIN; } - min_vectors = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports; + min_lan_vecs = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports; + min_vectors = min_lan_vecs + min_rdma_vecs; v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors, total_vecs, PCI_IRQ_MSIX); if (v_actual < min_vectors) { - dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n", + dev_err(&adapter->pdev->dev, "Failed to allocate minimum MSIX vectors required: %d\n", v_actual); err = -EAGAIN; goto send_dealloc_vecs; } - adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry), - GFP_KERNEL); + if (idpf_is_rdma_cap_ena(adapter)) { + if (v_actual < total_vecs) { + dev_warn(&adapter->pdev->dev, + "Warning: not enough vectors available. Defaulting to minimum for RDMA and remaining for LAN.\n"); + num_rdma_vecs = IDPF_MIN_RDMA_VEC; + } + adapter->rdma_msix_entries = + kcalloc(num_rdma_vecs, + sizeof(struct msix_entry), GFP_KERNEL); + if (!adapter->rdma_msix_entries) { + err = -ENOMEM; + goto free_irq; + } + } + + num_lan_vecs = v_actual - num_rdma_vecs; + adapter->msix_entries = kcalloc(num_lan_vecs, sizeof(struct msix_entry), + GFP_KERNEL); if (!adapter->msix_entries) { err = -ENOMEM; - goto free_irq; + goto free_rdma_msix; } idpf_set_mb_vec_id(adapter); - vecids = kcalloc(total_vecs, sizeof(u16), GFP_KERNEL); + vecids = kcalloc(v_actual, sizeof(u16), GFP_KERNEL); if (!vecids) { err = -ENOMEM; goto free_msix; @@ -364,32 +403,36 @@ int idpf_intr_req(struct idpf_adapter *adapter) ac = adapter->req_vec_chunks; vchunks = &ac->vchunks; - num_vec_ids = idpf_get_vec_ids(adapter, vecids, total_vecs, + num_vec_ids = idpf_get_vec_ids(adapter, vecids, v_actual, vchunks); if (num_vec_ids < v_actual) { err = -EINVAL; goto free_vecids; } } else { - int i; - for (i = 0; i < v_actual; i++) vecids[i] = i; } - for (vector = 0; vector < v_actual; vector++) { - adapter->msix_entries[vector].entry = vecids[vector]; - adapter->msix_entries[vector].vector = + for (i = 0, vector = 0; vector < num_lan_vecs; vector++, i++) { + adapter->msix_entries[i].entry = vecids[vector]; + adapter->msix_entries[i].vector = + pci_irq_vector(adapter->pdev, vector); + } + for (i = 0; i < num_rdma_vecs; vector++, i++) { + adapter->rdma_msix_entries[i].entry = vecids[vector]; + adapter->rdma_msix_entries[i].vector = pci_irq_vector(adapter->pdev, vector); } - adapter->num_req_msix = total_vecs; - adapter->num_msix_entries = v_actual; /* 'num_avail_msix' is used to distribute excess vectors to the vports * after considering the minimum vectors required per each default * vport */ - adapter->num_avail_msix = v_actual - min_vectors; + adapter->num_avail_msix = num_lan_vecs - min_lan_vecs; + adapter->num_msix_entries = num_lan_vecs; + if (idpf_is_rdma_cap_ena(adapter)) + adapter->num_rdma_msix_entries = num_rdma_vecs; /* Fill MSIX vector lifo stack with vector indexes */ err 
= idpf_init_vector_stack(adapter); @@ -411,6 +454,9 @@ int idpf_intr_req(struct idpf_adapter *adapter) free_msix: kfree(adapter->msix_entries); adapter->msix_entries = NULL; +free_rdma_msix: + kfree(adapter->rdma_msix_entries); + adapter->rdma_msix_entries = NULL; free_irq: pci_free_irq_vectors(adapter->pdev); send_dealloc_vecs: diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 6215dbee5546..63f3ba7d1ab3 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -57,6 +57,7 @@ /* Default vector sharing */ #define IDPF_MBX_Q_VEC 1 #define IDPF_MIN_Q_VEC 1 +#define IDPF_MIN_RDMA_VEC 4 #define IDPF_DFLT_TX_Q_DESC_COUNT 512 #define IDPF_DFLT_TX_COMPLQ_DESC_COUNT 512 diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h index 63deb120359c..80c17e4a394e 100644 --- a/drivers/net/ethernet/intel/idpf/virtchnl2.h +++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h @@ -473,6 +473,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info); * segment offload. * @max_hdr_buf_per_lso: Max number of header buffers that can be used for * an LSO. + * @num_rdma_allocated_vectors: Maximum number of allocated RDMA vectors for + * the device. * @pad1: Padding for future extensions. * * Dataplane driver sends this message to CP to negotiate capabilities and @@ -520,7 +522,8 @@ struct virtchnl2_get_capabilities { __le32 device_type; u8 min_sso_packet_len; u8 max_hdr_buf_per_lso; - u8 pad1[10]; + __le16 num_rdma_allocated_vectors; + u8 pad1[8]; }; VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities); From patchwork Sat Aug 24 03:19:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776191 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 255BD29CFB; Sat, 24 Aug 2024 03:20:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469643; cv=none; b=bSsE+vQQun5JOfF3LJJ3asSkDaYFhDAafdrnpdOUNxUZ1qtu2TMuFKeYPr2BtNKiVdqwgKYhJqhtMiJ271RatQOyyK00beGG6rzb+owZIDYWIQp14xkVyivz5S8HM12Xk/5UVlp/0CEMzf1lZ3LEcrTgqcC6PBfwqwZ1kmh0GgQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469643; c=relaxed/simple; bh=PP+U0LTwmrmgMb+yhWXSf0DiOEdXv4/bcRbPuADeh5M=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=D/iHIFhmEgDBruD7A3LJYd1cFGwYZXSwCPPDqAWiMiLJwIIavIlSQt7u7nBSfIanFASs5zjGhOlKrMplPtn5KuiRTf++UClbbgKvoSY8LLBwqW84K8MpJmXdSflQyq1Eyuv1J1ZvJxEmKf9uAuLvil0QUgTWrl4lTovjrGCdrL8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=eba3/D+S; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="eba3/D+S" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469641; x=1756005641; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=PP+U0LTwmrmgMb+yhWXSf0DiOEdXv4/bcRbPuADeh5M=; b=eba3/D+SFtP4lh5sobcDtU76nyjzWRdzDORkVZ8B5cs98+d9x1h7Yh4a cHbV6HjkhWhcuMmuMtGgPQt/xfCHJJ5A+Z5UekGa+uPwZeyEudRx3iXhz /flVOJDxyA9I0MY3J5KQOEhr1ghc2VG86I3/dN1yByGaVstTeePZmKhwO 0SqpcGEuQYlEDlu3arnkbthlDjDzSzmQaxVS9U+0C7hdnx8fE8TWQnhXk NBR0OauNgA0hGuoXqIv121R8RIKKS5VFa3iLZ67I/CER1f32GwW6xhEOR b5JRpP2Nh17R478NvLtyXZIk4VtNEE84Fc5vUpAPQRYXtTLrjPhqDmPgA Q==; X-CSE-ConnectionGUID: XTbLsz9HS7WdtMaYGGuHVA== X-CSE-MsgGUID: IXH6WpNuRcqK9FvpAPASOw== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187773" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187773" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:40 -0700 X-CSE-ConnectionGUID: y4K9M2zrTdKAP70EXOuj1g== X-CSE-MsgGUID: kBB4236CR8aCjDB82ryRWQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492077" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:39 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 03/25] idpf: implement core rdma auxiliary dev create, init, and destroy Date: Fri, 23 Aug 2024 22:19:02 -0500 Message-Id: <20240824031924.421-4-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay Add the initial idpf_idc.c file with the functions to kick off the idc initialization, create and initialize a core rdma auxiliary device, and destroy said device. The rdma core has a dependency on the vports being created by the control plane before it can be initialized. Therefore, once all the vports are up after a hard reset (either during driver load or a function level reset), the core rdma device info will be created. It is populated with the function type (as distinguished by the idc initialization function pointer), the core idc_ops function pointers (just stubs for now), the reserved rdma msix table, and various other info the core rdma auxiliary driver will need. It is then plugged onto the bus. During a function level reset or driver unload, the device will be unplugged from the bus and destroyed.
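
For context, a minimal, hypothetical sketch of the consumer side this plumbing targets: an RDMA core auxiliary driver binding to the "<vendor>.rdma.core" device plugged in above. It assumes the idc_rdma.h definitions introduced earlier in this series; the match string, the empty callback bodies, and the example_ names are illustrative and not part of this patch.

#include <linux/auxiliary_bus.h>
#include <linux/net/intel/idc_rdma.h>

static int example_core_probe(struct auxiliary_device *adev,
			      const struct auxiliary_device_id *id)
{
	struct idc_rdma_core_auxiliary_dev *iadev =
		container_of(adev, struct idc_rdma_core_auxiliary_dev, adev);
	struct idc_rdma_core_dev_info *cdev_info = iadev->cdev_info;

	/* cdev_info->pdev, ->msix_entries and ->ops are now available to
	 * the RDMA driver; real code would allocate its device state here.
	 */
	return 0;
}

static void example_core_remove(struct auxiliary_device *adev)
{
}

static void example_core_event_handler(struct idc_rdma_core_dev_info *cdev_info,
				       struct idc_rdma_event *event)
{
}

static const struct auxiliary_device_id example_core_id_table[] = {
	/* auxiliary bus match is "<module name>.<adev name>"; idpf names the
	 * core device "<PCI vendor>.rdma.core", so 8086 is assumed here.
	 */
	{ .name = "idpf.8086.rdma.core" },
	{ }
};

static struct idc_rdma_core_auxiliary_drv example_core_adrv = {
	.adrv = {
		.id_table = example_core_id_table,
		.probe = example_core_probe,
		.remove = example_core_remove,
	},
	.event_handler = example_core_event_handler,
};

/* registered from module init with auxiliary_driver_register(&example_core_adrv.adrv) */
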
Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/Makefile | 1 + drivers/net/ethernet/intel/idpf/idpf.h | 8 + drivers/net/ethernet/intel/idpf/idpf_dev.c | 11 + drivers/net/ethernet/intel/idpf/idpf_idc.c | 212 ++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_lib.c | 4 + drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 11 + .../net/ethernet/intel/idpf/idpf_virtchnl.c | 17 ++ .../net/ethernet/intel/idpf/idpf_virtchnl.h | 3 + 8 files changed, 267 insertions(+) create mode 100644 drivers/net/ethernet/intel/idpf/idpf_idc.c diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile index 2ce01a0b5898..bde9c893d8a1 100644 --- a/drivers/net/ethernet/intel/idpf/Makefile +++ b/drivers/net/ethernet/intel/idpf/Makefile @@ -10,6 +10,7 @@ idpf-y := \ idpf_controlq_setup.o \ idpf_dev.o \ idpf_ethtool.o \ + idpf_idc.o \ idpf_lib.o \ idpf_main.o \ idpf_txrx.o \ diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 85503fb5dd69..cb9344596bfb 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -17,6 +17,7 @@ struct idpf_vport_max_q; #include #include #include +#include #include "virtchnl2.h" #include "idpf_txrx.h" @@ -203,6 +204,8 @@ struct idpf_reg_ops { */ struct idpf_dev_ops { struct idpf_reg_ops reg_ops; + + int (*idc_init)(struct idpf_adapter *adapter); }; /** @@ -580,6 +583,7 @@ struct idpf_adapter { struct idpf_vc_xn_manager *vcxn_mngr; struct idpf_dev_ops dev_ops; + struct idc_rdma_core_dev_info *cdev_info; int num_vfs; bool crc_enable; bool req_tx_splitq; @@ -854,5 +858,9 @@ int idpf_sriov_configure(struct pci_dev *pdev, int num_vfs); u8 idpf_vport_get_hsplit(const struct idpf_vport *vport); bool idpf_vport_set_hsplit(const struct idpf_vport *vport, u8 val); +int idpf_idc_init(struct idpf_adapter *adapter); +int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, + enum idc_function_type ftype); +void idpf_idc_deinit_core_aux_device(struct idc_rdma_core_dev_info *cdev_info); #endif /* !_IDPF_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c index 3df9935685e9..f4c56915b934 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c @@ -143,6 +143,15 @@ static void idpf_trigger_reset(struct idpf_adapter *adapter, idpf_get_reg_addr(adapter, PFGEN_CTRL)); } +/** + * idpf_idc_register - register for IDC callbacks + * @adapter: Driver specific private structure + */ +static int idpf_idc_register(struct idpf_adapter *adapter) +{ + return idpf_idc_init_aux_core_dev(adapter, IDC_FUNCTION_TYPE_PF); +} + /** * idpf_reg_ops_init - Initialize register API function pointers * @adapter: Driver specific private structure @@ -163,4 +172,6 @@ static void idpf_reg_ops_init(struct idpf_adapter *adapter) void idpf_dev_ops_init(struct idpf_adapter *adapter) { idpf_reg_ops_init(adapter); + + adapter->dev_ops.idc_init = idpf_idc_register; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_idc.c b/drivers/net/ethernet/intel/idpf/idpf_idc.c new file mode 100644 index 000000000000..9eb0e0cfc61f --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_idc.c @@ -0,0 +1,212 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2022 Intel Corporation */ + +#include "idpf.h" +#include "idpf_virtchnl.h" + +static DEFINE_IDA(idpf_idc_ida); + +#define IDPF_IDC_MAX_ADEV_NAME_LEN 15 + +/** + * idpf_idc_init - Called to initialize IDC + * 
@adapter: driver private data structure + */ +int idpf_idc_init(struct idpf_adapter *adapter) +{ + int err; + + if (!idpf_is_rdma_cap_ena(adapter) || + !adapter->dev_ops.idc_init) + return 0; + + err = adapter->dev_ops.idc_init(adapter); + if (err) + dev_err(&adapter->pdev->dev, "failed to initialize idc: %d\n", + err); + + return err; +} + +/** + * idpf_core_adev_release - function to be mapped to aux dev's release op + * @dev: pointer to device to free + */ +static void idpf_core_adev_release(struct device *dev) +{ + struct idc_rdma_core_auxiliary_dev *iadev; + + iadev = container_of(dev, struct idc_rdma_core_auxiliary_dev, adev.dev); + kfree(iadev); + iadev = NULL; +} + +/* idpf_plug_core_aux_dev - allocate and register an Auxiliary device + * @cdev_info: idc core device info pointer + */ +static int idpf_plug_core_aux_dev(struct idc_rdma_core_dev_info *cdev_info) +{ + struct idc_rdma_core_auxiliary_dev *iadev; + char name[IDPF_IDC_MAX_ADEV_NAME_LEN]; + struct auxiliary_device *adev; + int err; + + iadev = (struct idc_rdma_core_auxiliary_dev *) + kzalloc(sizeof(*iadev), GFP_KERNEL); + if (!iadev) + return -ENOMEM; + + adev = &iadev->adev; + cdev_info->adev = adev; + iadev->cdev_info = cdev_info; + + adev->id = ida_alloc(&idpf_idc_ida, GFP_KERNEL); + if (adev->id < 0) { + pr_err("failed to allocate unique device ID for Auxiliary driver\n"); + err = -ENOMEM; + goto err_ida_alloc; + } + adev->dev.release = idpf_core_adev_release; + adev->dev.parent = &cdev_info->pdev->dev; + sprintf(name, "%04x.rdma.core", cdev_info->pdev->vendor); + adev->name = name; + + err = auxiliary_device_init(adev); + if (err) + goto err_aux_dev_init; + + err = auxiliary_device_add(adev); + if (err) + goto err_aux_dev_add; + + return 0; + +err_aux_dev_add: + cdev_info->adev = NULL; + auxiliary_device_uninit(adev); +err_aux_dev_init: + ida_free(&idpf_idc_ida, adev->id); +err_ida_alloc: + kfree(iadev); + + return err; +} + +/* idpf_unplug_aux_dev - unregister and free an Auxiliary device + * @adev: auxiliary device struct + */ +static void idpf_unplug_aux_dev(struct auxiliary_device *adev) +{ + auxiliary_device_delete(adev); + auxiliary_device_uninit(adev); + + ida_free(&idpf_idc_ida, adev->id); +} + +/** + * idpf_idc_vport_dev_ctrl - Called by an Auxiliary Driver + * @cdev_info: idc core device info pointer + * @up: RDMA core driver status + * + * This callback function is accessed by an Auxiliary Driver to indicate + * whether core driver is ready to support vport driver load or if vport + * drivers need to be taken down. 
+ */ +static int +idpf_idc_vport_dev_ctrl(struct idc_rdma_core_dev_info *cdev_info, + bool up) +{ + return -EOPNOTSUPP; +} + +/** + * idpf_idc_request_reset - Called by an Auxiliary Driver + * @cdev_info: idc core device info pointer + * @reset_type: function, core or other + * + * This callback function is accessed by an Auxiliary Driver to request a reset + * on the Auxiliary Device + */ +static int +idpf_idc_request_reset(struct idc_rdma_core_dev_info *cdev_info, + enum idc_rdma_reset_type __always_unused reset_type) +{ + return -EOPNOTSUPP; +} + +/* Implemented by the Auxiliary Device and called by the Auxiliary Driver */ +static const struct idc_rdma_core_ops idc_ops = { + .vport_dev_ctrl = idpf_idc_vport_dev_ctrl, + .request_reset = idpf_idc_request_reset, + .vc_send_sync = idpf_idc_rdma_vc_send_sync, +}; + +/** + * idpf_idc_init_msix_data - initialize MSIX data for the cdev_info structure + * @adapter: driver private data structure + */ +static void +idpf_idc_init_msix_data(struct idpf_adapter *adapter) +{ + struct idc_rdma_core_dev_info *cdev_info; + + if (!adapter->rdma_msix_entries) + return; + + cdev_info = adapter->cdev_info; + + cdev_info->msix_entries = adapter->rdma_msix_entries; + cdev_info->msix_count = adapter->num_rdma_msix_entries; +} + +/** + * idpf_idc_init_aux_core_dev - initialize Auxiliary Device(s) + * @adapter: driver private data structure + * @ftype: PF or VF + */ +int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, + enum idc_function_type ftype) +{ + struct idc_rdma_core_dev_info *cdev_info; + int err; + + adapter->cdev_info = (struct idc_rdma_core_dev_info *) + kzalloc(sizeof(struct idc_rdma_core_dev_info), GFP_KERNEL); + if (!adapter->cdev_info) + return -ENOMEM; + + cdev_info = adapter->cdev_info; + cdev_info->pdev = adapter->pdev; + cdev_info->ops = &idc_ops; + cdev_info->rdma_protocol = IDC_RDMA_PROTOCOL_ROCEV2; + cdev_info->ftype = ftype; + + idpf_idc_init_msix_data(adapter); + + err = idpf_plug_core_aux_dev(cdev_info); + if (err) + goto err_plug_aux_dev; + + return 0; + +err_plug_aux_dev: + kfree(cdev_info); + adapter->cdev_info = NULL; + + return err; +} + +/** + * idpf_idc_deinit_core_aux_device - de-initialize Auxiliary Device(s) + * @cdev_info: idc core device info pointer + */ +void idpf_idc_deinit_core_aux_device(struct idc_rdma_core_dev_info *cdev_info) +{ + if (!cdev_info) + return; + + idpf_unplug_aux_dev(cdev_info->adev); + + kfree(cdev_info->mapped_mem_regions); + kfree(cdev_info); +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index e985f27051de..ff7dcbced76c 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -1861,6 +1861,10 @@ static int idpf_init_hard_reset(struct idpf_adapter *adapter) unlock_mutex: mutex_unlock(&adapter->vport_ctrl_lock); + /* Wait until all vports are created to init RDMA CORE AUX */ + if (!err) + err = idpf_idc_init(adapter); + return err; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index 629cb5cb7c9f..db6a5951a594 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -141,6 +141,15 @@ static void idpf_vf_trigger_reset(struct idpf_adapter *adapter, idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL, 0); } +/** + * idpf_idc_vf_register - register for IDC callbacks + * @adapter: Driver specific private structure + */ +static int idpf_idc_vf_register(struct idpf_adapter 
*adapter) +{ + return idpf_idc_init_aux_core_dev(adapter, IDC_FUNCTION_TYPE_VF); +} + /** * idpf_vf_reg_ops_init - Initialize register API function pointers * @adapter: Driver specific private structure @@ -161,4 +170,6 @@ static void idpf_vf_reg_ops_init(struct idpf_adapter *adapter) void idpf_vf_dev_ops_init(struct idpf_adapter *adapter) { idpf_vf_reg_ops_init(adapter); + + adapter->dev_ops.idc_init = idpf_idc_vf_register; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 70986e12da28..d5067932de00 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -893,6 +893,7 @@ static int idpf_send_get_caps_msg(struct idpf_adapter *adapter) caps.other_caps = cpu_to_le64(VIRTCHNL2_CAP_SRIOV | + VIRTCHNL2_CAP_RDMA | VIRTCHNL2_CAP_MACFILTER | VIRTCHNL2_CAP_SPLITQ_QSCHED | VIRTCHNL2_CAP_PROMISC | @@ -3090,6 +3091,7 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter) idpf_vc_xn_shutdown(adapter->vcxn_mngr); idpf_deinit_task(adapter); + idpf_idc_deinit_core_aux_device(adapter->cdev_info); idpf_intr_rel(adapter); cancel_delayed_work_sync(&adapter->serv_task); @@ -3732,3 +3734,18 @@ int idpf_set_promiscuous(struct idpf_adapter *adapter, return reply_sz < 0 ? reply_sz : 0; } + +/** + * idpf_idc_rdma_vc_send_sync - virtchnl send callback for IDC registered drivers + * @cdev_info: idc core device info pointer + * @send_msg: message to send + * @msg_size: size of message to send + * @recv_msg: message to populate on reception of response + * @recv_len: length of message copied into recv_msg or 0 on error + */ +int idpf_idc_rdma_vc_send_sync(struct idc_rdma_core_dev_info *cdev_info, + u8 *send_msg, u16 msg_size, + u8 *recv_msg, u16 *recv_len) +{ + return -EOPNOTSUPP; +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h index 83da5d8da56b..6163cfaeccae 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.h @@ -66,5 +66,8 @@ int idpf_send_get_stats_msg(struct idpf_vport *vport); int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs); int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get); int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get); +int idpf_idc_rdma_vc_send_sync(struct idc_rdma_core_dev_info *cdev_info, + u8 *send_msg, u16 msg_size, + u8 *recv_msg, u16 *recv_len); #endif /* _IDPF_VIRTCHNL_H_ */ From patchwork Sat Aug 24 03:19:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776193 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CDBC52AF1D; Sat, 24 Aug 2024 03:20:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469644; cv=none; b=cVYSZhk/vLuHfoASBlfoBnB/SlrjJB1ijG3VFdy3s6cG8ZpoRjKMlgOyysQGDrEerhvdpHCIQtih9avIjmeOTyEcfimx7n/UEVjfqw92fOJbuX0h6WWWo+ffg6pF4zsXyMd5BmUwLHowv8rkGX6vvrfdSwbQeAhOsDxcyavtacQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469644; c=relaxed/simple; bh=3hF75XyVl8jE5Hr5jzPTLntB0NZg8s35xEv5pzkD8XM=; 
h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=OE8EISWrJfwieulDPuk5ZCLd+8ZKdkIEHNGM/TphR0/DJeh4c4yFrgzMbv0z8efDNcEk8D4uVyzotTyYEJ0CVHUulB9A4t03GTbZUiEC6vJatsos7AFHo3ac63DpTC/8+bSe97HyDjeimdYU6UUtOCMZfZ/cBtP8gzrhbiCBAkI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SU5xh5kU; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SU5xh5kU" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469643; x=1756005643; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3hF75XyVl8jE5Hr5jzPTLntB0NZg8s35xEv5pzkD8XM=; b=SU5xh5kUWJR1BZkYQjz0dMs3E7gxJdX6WDrchQDmCDle9ivWI8rojT/M CE4Z7YM7GzHjGFDnGQDAYxwYwBMLS0rbH/BWDgnhzw2xD2bpLYo1uNWse 2nYf75VhKx45HBVdcf8AD3LBxz9jcf5N7ZkIdmTpHQcv1AZ/gAe3a20/8 6fA9V8/FOwHc5qalTdHeTHWri620UZIDHlrecHuvldeSLESCWZuHUoC7s +Q+BEyMS68z1LQvJ4a17adTSONsjSP+DTDByusnJDkwTIcDgOY07yi5nW 8x3zxQTEUe9OmNWEHHBR7yVZt6oTKqwQ2efUZmcBMlQ0vC3jFz0Hw25yy A==; X-CSE-ConnectionGUID: JquNem+vRFOdq+n8uhDc9Q== X-CSE-MsgGUID: fE+L4moRQS+q25QR6t/V9Q== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187776" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187776" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:40 -0700 X-CSE-ConnectionGUID: aBa6mbmVSDaogGdFvwUiRA== X-CSE-MsgGUID: wzA4nWLAT82pMQ0eKyv4RQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492080" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:39 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 04/25] idpf: prevent deadlock with irdma get link settings Date: Fri, 23 Aug 2024 22:19:03 -0500 Message-Id: <20240824031924.421-5-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay When the rdma core deinitializes (reset or remove), it is calling get_link_ksettings. Add logic to get link settings to avoid even taking the lock during a reset or remove, since we will not care what the link settings are in that state anyways. 
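
For illustration only, a rough sketch (not from this series) of the kind of caller that hits this path: an RDMA driver sampling the netdev link speed through the standard ethtool entry point during its own init/deinit. The helper name is hypothetical; the point is that such a call can arrive while idpf is resetting or being removed, which is why the lock is now skipped in that state.

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

/* Hypothetical helper showing how a consumer typically reads link speed. */
static u32 example_query_link_speed(struct net_device *netdev)
{
	struct ethtool_link_ksettings ks = {};
	u32 speed = SPEED_UNKNOWN;

	rtnl_lock();
	/* Ends up in idpf_get_link_ksettings() for an idpf netdev. */
	if (!__ethtool_get_link_ksettings(netdev, &ks))
		speed = ks.base.speed;
	rtnl_unlock();

	return speed;
}
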
Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf_ethtool.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c index 3806ddd3ce4a..d287a5537167 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -1296,20 +1296,25 @@ static void idpf_set_msglevel(struct net_device *netdev, u32 data) static int idpf_get_link_ksettings(struct net_device *netdev, struct ethtool_link_ksettings *cmd) { + struct idpf_adapter *adapter = idpf_netdev_to_adapter(netdev); struct idpf_vport *vport; - idpf_vport_ctrl_lock(netdev); - vport = idpf_netdev_to_vport(netdev); - ethtool_link_ksettings_zero_link_mode(cmd, supported); cmd->base.autoneg = AUTONEG_DISABLE; cmd->base.port = PORT_NONE; + cmd->base.duplex = DUPLEX_UNKNOWN; + cmd->base.speed = SPEED_UNKNOWN; + + if (idpf_is_reset_in_prog(adapter) || + test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) + return 0; + + idpf_vport_ctrl_lock(netdev); + vport = idpf_netdev_to_vport(netdev); + if (vport->link_up) { cmd->base.duplex = DUPLEX_FULL; cmd->base.speed = vport->link_speed_mbps; - } else { - cmd->base.duplex = DUPLEX_UNKNOWN; - cmd->base.speed = SPEED_UNKNOWN; } idpf_vport_ctrl_unlock(netdev); From patchwork Sat Aug 24 03:19:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776194 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6A09F33997; Sat, 24 Aug 2024 03:20:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469645; cv=none; b=kLA5Pu+qZ5/mRNPgk8SkbF4qOnv3bOC11vUqgj/g9ihTvJ7qp1LakUp0zrSWggDpKd7CwkYICYZAnUwdK8TlrF3Pbu4xgtHtmyS/AC3Ous2rBuDnXiANnOi+F6mtSMA6+tJ04NYfhc/QnIg8pt/jJVv9bdFDcuUp5vts9IyigsQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469645; c=relaxed/simple; bh=3oE/r81zH5g6H80Ri2NlRsYmiUn98l3XjsZlE/m06lQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=NuvPbH2QDm2qzmfVfbNuE9pwRNNym0MeCvCP+WkRBsnal6WQTh47ZvpPhyW8sS5nVbcHb80h/BcKXXO+ecd5tZIqLeXuA+BcqZdmX0PKm7ocCIRMYiw3Yy/BgR99Vw3v0SVqDKxH/yD9YnaSbViZLiUJAs4bmEeN3jM/TLTxKLI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Ct6v1aCn; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Ct6v1aCn" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469643; x=1756005643; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3oE/r81zH5g6H80Ri2NlRsYmiUn98l3XjsZlE/m06lQ=; b=Ct6v1aCnO+rG5RAB9P8jod40ufryKdmT2r3mKXwrc9vqfQHybi6mPaRK 
W9wDMCQjpA7Jjxx1HRu1SUEaBD9cJVjYIoV5BPF/dIjdZ/ip9vDdrb7QO YehMqa6FPXC9lcQqut9Yp27H9J1F3niQxFeJIkik9GFrqrn9LRmzMbeuZ SOf64zABIqmd+7QEGPKHYE/BYfAqTgoevWTwb4fnYB1dT2zHtaczUpQ1O F1WfPeOwO0rMO9n43Pip6yS7VaMDAYfUm09WVqYFEvYWfYfD2dzy+Hc5Y /10WK6UXYbS6b5DAdStrbVr1P/79frJgMvVwLUrI6H/lABJpbz5ez//Hp A==; X-CSE-ConnectionGUID: hzqyEf2vRM+XQFUGqorS4A== X-CSE-MsgGUID: +Y+ZceVbQ+WOX0gNBbH2LQ== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187779" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187779" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:41 -0700 X-CSE-ConnectionGUID: 0bmpC/KPSBKAryUiK5s4Nw== X-CSE-MsgGUID: yyk/Yu8+R76LDR1++bKS2A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492084" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:40 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 05/25] idpf: implement rdma vport auxiliary dev create, init, and destroy Date: Fri, 23 Aug 2024 22:19:04 -0500 Message-Id: <20240824031924.421-6-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay Implement the functions to create, initialize, and destroy an rdma vport auxiliary device. The vport aux dev creation is dependent on the core aux device to call idpf_idc_vport_dev_ctrl to signal that it is ready for vport aux devices. Implement that core callback to either create and initialize the vport aux dev or deinitialize. Rdma vport aux dev creation is also dependent on the control plane to tell us the vport is rdma enabled. Add a flag in the create vport message to signal individual vport rdma capabilities. v2: Guard against unplugging vport aux dev twice. This is possible if irdma is unloaded and then idpf is unloaded. irdma calls idpf_idc_vport_dev_down during its unload which calls unplug. Set the adev to NULL in dev_down, so that the following call to deinit_vport_aux_device during idpf unload will return early from unplug. 
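
As a hedged sketch of the other side of this contract (not part of the patch): once the core RDMA auxiliary driver has finished its own bring-up, it invokes vport_dev_ctrl(cdev_info, true) so idpf plugs one vport auxiliary device per RDMA-enabled vport, and calls it again with false before tearing down. The example_ function names are illustrative.

/* Hypothetical calls made from the core RDMA auxiliary driver. */
static int example_core_ready_for_vports(struct idc_rdma_core_dev_info *cdev_info)
{
	/* reaches idpf_idc_vport_dev_up(): plug "<vendor>.rdma.vdev" devices */
	return cdev_info->ops->vport_dev_ctrl(cdev_info, true);
}

static void example_core_stopping_vports(struct idc_rdma_core_dev_info *cdev_info)
{
	/* reaches idpf_idc_vport_dev_down(): unplug the vport auxiliary devices */
	cdev_info->ops->vport_dev_ctrl(cdev_info, false);
}
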
Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf.h | 3 + drivers/net/ethernet/intel/idpf/idpf_idc.c | 174 +++++++++++++++++++- drivers/net/ethernet/intel/idpf/idpf_lib.c | 2 + drivers/net/ethernet/intel/idpf/virtchnl2.h | 13 +- 4 files changed, 189 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index cb9344596bfb..2299be4aee4b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -315,6 +315,8 @@ struct idpf_vport { u32 rxq_model; struct libeth_rx_pt *rx_ptype_lkup; + struct idc_rdma_vport_dev_info *vdev_info; + struct idpf_adapter *adapter; struct net_device *netdev; DECLARE_BITMAP(flags, IDPF_VPORT_FLAGS_NBITS); @@ -862,5 +864,6 @@ int idpf_idc_init(struct idpf_adapter *adapter); int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, enum idc_function_type ftype); void idpf_idc_deinit_core_aux_device(struct idc_rdma_core_dev_info *cdev_info); +void idpf_idc_deinit_vport_aux_device(struct idc_rdma_vport_dev_info *vdev_info); #endif /* !_IDPF_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_idc.c b/drivers/net/ethernet/intel/idpf/idpf_idc.c index 9eb0e0cfc61f..1f22868b8d75 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_idc.c +++ b/drivers/net/ethernet/intel/idpf/idpf_idc.c @@ -28,6 +28,111 @@ int idpf_idc_init(struct idpf_adapter *adapter) return err; } +/** + * idpf_vport_adev_release - function to be mapped to aux dev's release op + * @dev: pointer to device to free + */ +static void idpf_vport_adev_release(struct device *dev) +{ + struct idc_rdma_vport_auxiliary_dev *iadev; + + iadev = container_of(dev, struct idc_rdma_vport_auxiliary_dev, adev.dev); + kfree(iadev); + iadev = NULL; +} + +/* idpf_plug_vport_aux_dev - allocate and register a vport Auxiliary device + * @cdev_info: idc core device info pointer + * @vdev_info: idc vport device info pointer + */ +static int idpf_plug_vport_aux_dev(struct idc_rdma_core_dev_info *cdev_info, + struct idc_rdma_vport_dev_info *vdev_info) +{ + struct idc_rdma_vport_auxiliary_dev *iadev; + char name[IDPF_IDC_MAX_ADEV_NAME_LEN]; + struct auxiliary_device *adev; + int err; + + iadev = (struct idc_rdma_vport_auxiliary_dev *) + kzalloc(sizeof(*iadev), GFP_KERNEL); + if (!iadev) + return -ENOMEM; + + adev = &iadev->adev; + vdev_info->adev = &iadev->adev; + iadev->vdev_info = vdev_info; + + adev->id = ida_alloc(&idpf_idc_ida, GFP_KERNEL); + if (adev->id < 0) { + pr_err("failed to allocate unique device ID for Auxiliary driver\n"); + err = -ENOMEM; + goto err_ida_alloc; + } + adev->dev.release = idpf_vport_adev_release; + adev->dev.parent = &cdev_info->pdev->dev; + sprintf(name, "%04x.rdma.vdev", cdev_info->pdev->vendor); + adev->name = name; + + err = auxiliary_device_init(adev); + if (err) + goto err_aux_dev_init; + + err = auxiliary_device_add(adev); + if (err) + goto err_aux_dev_add; + + return 0; + +err_aux_dev_add: + vdev_info->adev = NULL; + auxiliary_device_uninit(adev); +err_aux_dev_init: + ida_free(&idpf_idc_ida, adev->id); +err_ida_alloc: + kfree(iadev); + + return err; +} + +/** + * idpf_idc_init_aux_vport_dev - initialize vport Auxiliary Device(s) + * @vport: virtual port data struct + */ +static int idpf_idc_init_aux_vport_dev(struct idpf_vport *vport) +{ + struct idpf_adapter *adapter = vport->adapter; + struct idc_rdma_vport_dev_info *vdev_info; + struct idc_rdma_core_dev_info *cdev_info; + struct virtchnl2_create_vport *vport_msg; + int err; + + vport_msg 
= (struct virtchnl2_create_vport *) + adapter->vport_params_recvd[vport->idx]; + + if (!(le16_to_cpu(vport_msg->vport_flags) & VIRTCHNL2_VPORT_ENABLE_RDMA)) + return 0; + + vport->vdev_info = (struct idc_rdma_vport_dev_info *) + kzalloc(sizeof(*vdev_info), GFP_KERNEL); + if (!vport->vdev_info) + return -ENOMEM; + + cdev_info = vport->adapter->cdev_info; + + vdev_info = vport->vdev_info; + vdev_info->vport_id = vport->vport_id; + vdev_info->netdev = vport->netdev; + vdev_info->core_adev = cdev_info->adev; + + err = idpf_plug_vport_aux_dev(cdev_info, vdev_info); + if (err) { + kfree(vdev_info); + return err; + } + + return 0; +} + /** * idpf_core_adev_release - function to be mapped to aux dev's release op * @dev: pointer to device to free @@ -97,12 +202,58 @@ static int idpf_plug_core_aux_dev(struct idc_rdma_core_dev_info *cdev_info) */ static void idpf_unplug_aux_dev(struct auxiliary_device *adev) { + if (!adev) + return; + auxiliary_device_delete(adev); auxiliary_device_uninit(adev); ida_free(&idpf_idc_ida, adev->id); } +/** + * idpf_idc_vport_dev_up - called when CORE is ready for vport aux devs + * @adapter: private data struct + */ +static int idpf_idc_vport_dev_up(struct idpf_adapter *adapter) +{ + int i, err = 0; + + for (i = 0; i < adapter->num_alloc_vports; i++) { + struct idpf_vport *vport = adapter->vports[i]; + + if (!vport) + continue; + + if (!vport->vdev_info) + err = idpf_idc_init_aux_vport_dev(vport); + else + err = idpf_plug_vport_aux_dev(vport->adapter->cdev_info, + vport->vdev_info); + } + + return err; +} + +/** + * idpf_idc_vport_dev_down - called CORE is leaving vport aux dev support state + * @adapter: private data struct + */ +static void idpf_idc_vport_dev_down(struct idpf_adapter *adapter) +{ + int i; + + for (i = 0; i < adapter->num_alloc_vports; i++) { + struct idpf_vport *vport = adapter->vports[i]; + + if (!vport) + continue; + + idpf_unplug_aux_dev(vport->vdev_info->adev); + vport->vdev_info->adev = NULL; + } +} + /** * idpf_idc_vport_dev_ctrl - Called by an Auxiliary Driver * @cdev_info: idc core device info pointer @@ -116,7 +267,14 @@ static int idpf_idc_vport_dev_ctrl(struct idc_rdma_core_dev_info *cdev_info, bool up) { - return -EOPNOTSUPP; + struct idpf_adapter *adapter = pci_get_drvdata(cdev_info->pdev); + + if (up) + return idpf_idc_vport_dev_up(adapter); + + idpf_idc_vport_dev_down(adapter); + + return 0; } /** @@ -210,3 +368,17 @@ void idpf_idc_deinit_core_aux_device(struct idc_rdma_core_dev_info *cdev_info) kfree(cdev_info->mapped_mem_regions); kfree(cdev_info); } + +/** + * idpf_idc_deinit_vport_aux_device - de-initialize Auxiliary Device(s) + * @vdev_info: idc vport device info pointer + */ +void idpf_idc_deinit_vport_aux_device(struct idc_rdma_vport_dev_info *vdev_info) +{ + if (!vdev_info) + return; + + idpf_unplug_aux_dev(vdev_info->adev); + + kfree(vdev_info); +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index ff7dcbced76c..d4fb8b1652ae 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -1058,6 +1058,8 @@ static void idpf_vport_dealloc(struct idpf_vport *vport) struct idpf_adapter *adapter = vport->adapter; unsigned int i = vport->idx; + idpf_idc_deinit_vport_aux_device(vport->vdev_info); + idpf_deinit_mac_addr(vport); idpf_vport_stop(vport); diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h index 80c17e4a394e..673a39e6698d 100644 --- 
a/drivers/net/ethernet/intel/idpf/virtchnl2.h +++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h @@ -562,6 +562,15 @@ struct virtchnl2_queue_reg_chunks { }; VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_reg_chunks); +/** + * enum virtchnl2_vport_flags - Vport flags + * @VIRTCHNL2_VPORT_ENABLE_RDMA: RDMA is enabled for this vport + */ +enum virtchnl2_vport_flags { + /* VIRTCHNL2_VPORT_* bits [0:3] rsvd */ + VIRTCHNL2_VPORT_ENABLE_RDMA = BIT(4), +}; + /** * struct virtchnl2_create_vport - Create vport config info. * @vport_type: See enum virtchnl2_vport_type. @@ -580,7 +589,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_reg_chunks); * @max_mtu: Max MTU. CP populates this field on response. * @vport_id: Vport id. CP populates this field on response. * @default_mac_addr: Default MAC address. - * @pad: Padding. + * @vport_flags: See enum virtchnl2_vport_flags * @rx_desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions. * @tx_desc_ids: See VIRTCHNL2_TX_DESC_IDS definitions. * @pad1: Padding. @@ -613,7 +622,7 @@ struct virtchnl2_create_vport { __le16 max_mtu; __le32 vport_id; u8 default_mac_addr[ETH_ALEN]; - __le16 pad; + __le16 vport_flags; __le64 rx_desc_ids; __le64 tx_desc_ids; u8 pad1[72]; From patchwork Sat Aug 24 03:19:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776195 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D1DFF38382; Sat, 24 Aug 2024 03:20:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469645; cv=none; b=asWlgXphRyp4UDGoqU/2+ro3BP5dTifqiBHLpvoXV5Kc4LSOmt0DXh+8hqdP5ENn+KoGFetwMLsADUVEVXO0AHRnwJ48jNIk5FaZbkLxu1ObuQwB1km5OF460keG2sD3dnvmy8qsa8T8xg6j/vpHNSjbthd8YDLUwVtxHi9U1gs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469645; c=relaxed/simple; bh=XHhTY7xzIB3NQjloeZyAACTpkUyYpjkaUPV1cWpH2JE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=sbcX6l02QkWYNRQBXkKPBLFz1J0c0kJkIRmXb79aNjZsjYe2UsgB/5frUQNmORdwslWm/gtoX04+Zn17+FBnPU3JSJpE6gcT579F0F7fPHm8dbGPpOhctKan+00gC1LuI2J2qLvL7wSMMsbF2OHC5E6T/qS/XODeHNr2rYaBJTo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=hQd/GIEv; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="hQd/GIEv" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469644; x=1756005644; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=XHhTY7xzIB3NQjloeZyAACTpkUyYpjkaUPV1cWpH2JE=; b=hQd/GIEveFfnCFrPD1owqN7DYb+PTy945ouDifA4mygz6OyrygXQ0jVx 9zZCrSQaicLV9/cmDYgIRfAd6UleqJU4iV5rylBRnEM6dGuYbXUpBRFMO KGrFLio4/ig9l857MnRIuofwg0vDwslkLGlpRijAE3C3K+eObUa9yzxAx yMV/MlZ2/R+g/l+kAR3Ed1cEcnjU8ECi1lQOaqjmc8HPkV2FTl497VK2m 
h+BW2IpGyPUT6bwHp33kIzFq2Qpv49Dra3WA1fYm31T+QbciTziul1VZb WiDw8SdECHLRJs/sZdWsFBhFMWD3Pq2rpKfmw3heDI+gmXdOngiXUx2Up g==; X-CSE-ConnectionGUID: MMGDHjQRQCyWTerkb2zpCw== X-CSE-MsgGUID: sVVPWrkYRti2/cM+IWpcPA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187782" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187782" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:41 -0700 X-CSE-ConnectionGUID: 3/DYd6+rTni0Fx4j1W8BYg== X-CSE-MsgGUID: UGgDgwUdSS2tuiPj4xg65w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492087" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:41 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 06/25] idpf: implement remaining idc rdma core callbacks and handlers Date: Fri, 23 Aug 2024 22:19:05 -0500 Message-Id: <20240824031924.421-7-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay Implement the idpf_idc_request_reset and idpf_idc_rdma_vc_send_sync callbacks for the rdma core auxiliary driver to issue reset events to the idpf and send (synchronous) virtchnl messages to the control plane respectively. Implement and plumb the reset handler for the opposite flow as well, i.e. when the idpf is resetting and needs to notify the rdma core auxiliary driver.
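
A short, assumed sketch of how the core RDMA auxiliary driver might consume the callbacks wired up here (not part of this patch): its event handler reacts to the IDC_RDMA_EVENT_WARN_RESET that idpf_idc_issue_reset_event() raises, and it can exchange an opaque RDMA virtchnl message with the control plane via vc_send_sync. Payload contents and the example_ names are illustrative; ops->request_reset() would be invoked in the same way when the RDMA driver itself wants idpf to schedule a reset.

static void example_rdma_event_handler(struct idc_rdma_core_dev_info *cdev_info,
				       struct idc_rdma_event *event)
{
	if (test_bit(IDC_RDMA_EVENT_WARN_RESET, event->type)) {
		/* stop posting to HW queues before idpf performs the
		 * function reset; resources are rebuilt after vport up
		 */
	}
}

static int example_send_rdma_msg(struct idc_rdma_core_dev_info *cdev_info,
				 u8 *req, u16 req_len,
				 u8 *resp, u16 *resp_len)
{
	/* Synchronous VIRTCHNL2_OP_RDMA exchange; on success *resp_len is
	 * the actual reply length copied into resp.
	 */
	return cdev_info->ops->vc_send_sync(cdev_info, req, req_len,
					    resp, resp_len);
}
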
Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf.h | 1 + drivers/net/ethernet/intel/idpf/idpf_idc.c | 43 ++++++++++++++++++- drivers/net/ethernet/intel/idpf/idpf_lib.c | 2 + .../net/ethernet/intel/idpf/idpf_virtchnl.c | 23 +++++++++- drivers/net/ethernet/intel/idpf/virtchnl2.h | 3 +- 5 files changed, 69 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 2299be4aee4b..5ff01927d471 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -865,5 +865,6 @@ int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, enum idc_function_type ftype); void idpf_idc_deinit_core_aux_device(struct idc_rdma_core_dev_info *cdev_info); void idpf_idc_deinit_vport_aux_device(struct idc_rdma_vport_dev_info *vdev_info); +void idpf_idc_issue_reset_event(struct idc_rdma_core_dev_info *cdev_info); #endif /* !_IDPF_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_idc.c b/drivers/net/ethernet/intel/idpf/idpf_idc.c index 1f22868b8d75..ac6746fd87f9 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_idc.c +++ b/drivers/net/ethernet/intel/idpf/idpf_idc.c @@ -211,6 +211,38 @@ static void idpf_unplug_aux_dev(struct auxiliary_device *adev) ida_free(&idpf_idc_ida, adev->id); } +/** + * idpf_idc_issue_reset_event - Function to handle reset IDC event + * @cdev_info: idc core device info pointer + */ +void idpf_idc_issue_reset_event(struct idc_rdma_core_dev_info *cdev_info) +{ + enum idc_rdma_event_type event_type = IDC_RDMA_EVENT_WARN_RESET; + struct idc_rdma_core_auxiliary_drv *iadrv; + struct idc_rdma_event event = { }; + struct auxiliary_device *adev; + + if (!cdev_info) + /* RDMA is not enabled */ + return; + + set_bit(event_type, event.type); + + device_lock(&cdev_info->adev->dev); + + adev = cdev_info->adev; + if (!adev || !adev->dev.driver) + goto unlock; + + iadrv = container_of(adev->dev.driver, + struct idc_rdma_core_auxiliary_drv, + adrv.driver); + if (iadrv && iadrv->event_handler) + iadrv->event_handler(cdev_info, &event); +unlock: + device_unlock(&cdev_info->adev->dev); +} + /** * idpf_idc_vport_dev_up - called when CORE is ready for vport aux devs * @adapter: private data struct @@ -289,7 +321,16 @@ static int idpf_idc_request_reset(struct idc_rdma_core_dev_info *cdev_info, enum idc_rdma_reset_type __always_unused reset_type) { - return -EOPNOTSUPP; + struct idpf_adapter *adapter = pci_get_drvdata(cdev_info->pdev); + + if (!idpf_is_reset_in_prog(adapter)) { + set_bit(IDPF_HR_FUNC_RESET, adapter->flags); + queue_delayed_work(adapter->vc_event_wq, + &adapter->vc_event_task, + msecs_to_jiffies(10)); + } + + return 0; } /* Implemented by the Auxiliary Device and called by the Auxiliary Driver */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index d4fb8b1652ae..db0b7ffd8df1 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -1817,6 +1817,8 @@ static int idpf_init_hard_reset(struct idpf_adapter *adapter) } else if (test_and_clear_bit(IDPF_HR_FUNC_RESET, adapter->flags)) { bool is_reset = idpf_is_reset_detected(adapter); + idpf_idc_issue_reset_event(adapter->cdev_info); + idpf_set_vport_state(adapter); idpf_vc_core_deinit(adapter); if (!is_reset) diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index d5067932de00..6b7ead2d4cf2 100644 --- 
a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -3747,5 +3747,26 @@ int idpf_idc_rdma_vc_send_sync(struct idc_rdma_core_dev_info *cdev_info, u8 *send_msg, u16 msg_size, u8 *recv_msg, u16 *recv_len) { - return -EOPNOTSUPP; + struct idpf_adapter *adapter = pci_get_drvdata(cdev_info->pdev); + struct idpf_vc_xn_params xn_params = { }; + ssize_t reply_sz; + u16 recv_size; + + if (!recv_msg || !recv_len || msg_size > IDPF_CTLQ_MAX_BUF_LEN) + return -EINVAL; + + recv_size = min_t(u16, *recv_len, IDPF_CTLQ_MAX_BUF_LEN); + *recv_len = 0; + xn_params.vc_op = VIRTCHNL2_OP_RDMA; + xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC; + xn_params.send_buf.iov_base = send_msg; + xn_params.send_buf.iov_len = msg_size; + xn_params.recv_buf.iov_base = recv_msg; + xn_params.recv_buf.iov_len = recv_size; + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); + if (reply_sz < 0) + return reply_sz; + *recv_len = reply_sz; + + return 0; } diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h index 673a39e6698d..e6541152ca58 100644 --- a/drivers/net/ethernet/intel/idpf/virtchnl2.h +++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h @@ -62,8 +62,9 @@ enum virtchnl2_op { VIRTCHNL2_OP_GET_PTYPE_INFO = 526, /* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW. - * Opcodes 529, 530, 531, 532 and 533 are reserved. */ + VIRTCHNL2_OP_RDMA = 529, + /* Opcodes 530 through 533 are reserved. */ VIRTCHNL2_OP_LOOPBACK = 534, VIRTCHNL2_OP_ADD_MAC_ADDR = 535, VIRTCHNL2_OP_DEL_MAC_ADDR = 536, From patchwork Sat Aug 24 03:19:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776196 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 838453A1C9; Sat, 24 Aug 2024 03:20:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469646; cv=none; b=DVUO4igHYsk8boVhsjDkmRRUMOIsxMXHpsE/8A9CZiMJkJ3pCwDydEbNVFRfl9VDElJbicwd6l+a2g/X9f0j0iJAFQOKxH2ZPIuOToumVBciUoExyoQ1N5UzYUB2T4T9+iuVS/gB6POW2EI5XWNJWFy9AFxR1OEA+pWUK5Zlf/Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469646; c=relaxed/simple; bh=0C/mXMYUaWsQaA5qG6O1xbnW+L6fYJXrXOmaRpz+/Co=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=BQKJwm4hLQdfuN+AcLKyHG9MU2P1NiVqSOShb8T7tt0DP86TSD4KX0/g2kNCYbFi0xwW7wqMtAv55j3zcEgOGbEi+OBWSTVvEVH8Nv69e7uEs7OcyBCVoBLpT7f69HTHsSIwFRAIkRiT+zSUELKnx66xUGFGLKBJVcRFkS8nY8k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Z7VCi5qs; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Z7VCi5qs" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; 
s=Intel; t=1724469644; x=1756005644; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0C/mXMYUaWsQaA5qG6O1xbnW+L6fYJXrXOmaRpz+/Co=; b=Z7VCi5qsYsh2jAPe3rpw8jEmdk4KjulXLjulrddvHxf088SAVp3S1EWE 5LrrXs4hthoKA8hBIJxinE0ogTbbvWBWWQ0dE/VvigQQd3XuwOyircY/g PnpW5omug26OTSBXq1gxVBRtSepwhZO65uvEMOlrmKY/LSQ1Vz58LXP58 TkuMow2/3ibFsRmlfsEecc2FGEiNz//Ps/NKffutqNmb7O/bedYr42wLo ebmDQVCV4h2eHkmk7kXgV/usOdZV2RqB4Ymn6F3XVNs5XY5hQgt8Jfvet jAxSOTbSIa7NDQcYENE0AYiOjdG3gV5PrLAeZIt6J3sAiU24Gyh3ztMTw A==; X-CSE-ConnectionGUID: x3C1U1ZiRBuqR2wgRgR0xw== X-CSE-MsgGUID: yGeBN9kHS3ucaZNzUxQV8A== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187785" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187785" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:42 -0700 X-CSE-ConnectionGUID: ri/fKwGnSuiX3uduyxa9Kw== X-CSE-MsgGUID: sj5SERKbSi2HQQdvulmGFQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492090" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:41 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 07/25] idpf: use actual mbx receive payload length Date: Fri, 23 Aug 2024 22:19:06 -0500 Message-Id: <20240824031924.421-8-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay When a mailbox message is received, the driver is checking for a non 0 datalen in the controlq descriptor. If it is valid, the payload is attached to the ctlq message to give to the upper layer. However, the payload response size given to the upper layer was taken from the buffer metadata which is _always_ the max buffer size. This meant the API was returning 4K as the payload size for all messages. This went unnoticed since the virtchnl exchange response logic was checking for a response size less than 0 (error), not less than exact size, or not greater than or equal to the max mailbox buffer size (4K). All of these checks will pass in the success case since the size provided is always 4K. Fetch the actual payload length from the value provided in the descriptor data_len field (instead of the buffer metadata). Unfortunately, this means we lose some extra error parsing for variable sized virtchnl responses such as create vport and get ptypes. However, the original checks weren't really helping anyways since the size was _always_ 4K. 
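
To make the effect concrete, a small hypothetical caller (not from this patch): with reply_sz now reflecting the descriptor's data_len rather than the fixed 4K buffer, an exact-size check on a fixed-size reply is meaningful again. The function name and the chosen struct are illustrative.

static int example_check_fixed_reply(struct idpf_adapter *adapter,
				     struct idpf_vc_xn_params *xn_params)
{
	ssize_t reply_sz = idpf_vc_xn_exec(adapter, xn_params);

	if (reply_sz < 0)
		return reply_sz;

	/* Before this change reply_sz was always 4K, so a check like this
	 * always passed; now a truncated reply is actually caught.
	 */
	if (reply_sz < sizeof(struct virtchnl2_get_capabilities))
		return -EIO;

	return 0;
}
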
Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 6b7ead2d4cf2..2a9669017b2e 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -666,7 +666,7 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter, if (ctlq_msg->data_len) { payload = ctlq_msg->ctx.indirect.payload->va; - payload_size = ctlq_msg->ctx.indirect.payload->size; + payload_size = ctlq_msg->data_len; } xn->reply_sz = payload_size; @@ -1296,10 +1296,6 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter, err = reply_sz; goto free_vport_params; } - if (reply_sz < IDPF_CTLQ_MAX_BUF_LEN) { - err = -EIO; - goto free_vport_params; - } return 0; @@ -2603,9 +2599,6 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) if (reply_sz < 0) return reply_sz; - if (reply_sz < IDPF_CTLQ_MAX_BUF_LEN) - return -EIO; - ptypes_recvd += le16_to_cpu(ptype_info->num_ptypes); if (ptypes_recvd > max_ptype) return -EINVAL; From patchwork Sat Aug 24 03:19:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776197 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6524C3D3B8; Sat, 24 Aug 2024 03:20:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469646; cv=none; b=MjwRmISnljqwH/aJTUbNa9GYHVyPa7gJPY09oWeezzREKUnwa2OetJTlorWsKAC4A1mCObKV2OMNIT0DPU2zUnS6QO6VvZ+XzF4SzlM2mSAZ6jvBLFSNE5mk7uk3jM8v6PGfdo3CG+Qx77wmPeAr1EzqceT+5UqDnJGZy6q+V6s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469646; c=relaxed/simple; bh=xgKS2rR96ToGxk5i/MjvnzjXscsYJABmaHVYM3gb3vI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Yph7644Q4vFoJw/rgo1JUT+PSvLaLJADY6tOR1iRX1YzHQxxIY0xfYte5PERNOU5kyPQoDQYZlx8sYkti/jWzMZlgSiSSlbUnphZdiKnPHUWRZHFg5tUcqgyuCDWMNHUYplJkv1ZpfSKBphc5EN3OIZEdrfk+bfCnG+hP9doaEs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=WvOVAi1I; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WvOVAi1I" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469645; x=1756005645; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xgKS2rR96ToGxk5i/MjvnzjXscsYJABmaHVYM3gb3vI=; b=WvOVAi1IkOYlV6+RQK09KrIUAvAiVZjEyBgHsZa9HeB2vtUlf1sDCFO0 Q/IoATiKG3M+HDzVRVN5IXbWhkOdHYqgIQTh/qxaiqv2cdZhglowXUb8i jxxoCzey3zI1qBFaF2WkRtrsbjnQgiYeoA77I9Zgw371pOENice4qUlW6 VV1tl5c9c5OQUdauSN4VVbQcm0DnT5hs9IApzf4Ye5HfUG6g9z3s6ee/t 
GZiUS5ZbmKloq778NvU0Q0jUOU2EYOig0K49MfP+PKVLWUJo4wBgaOhqa XISqBSI72k5h0t2RHbL8tusQfuKzVl9Rb+ieE71zFhwdRmbR48Qu4NUkf Q==; X-CSE-ConnectionGUID: 2AV9teILQM++DZSwzZ49cA== X-CSE-MsgGUID: ap5lDEaCROO4ahI1MDCCyA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187788" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187788" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:42 -0700 X-CSE-ConnectionGUID: ebKuCH0QQv+DrqPDtSZNnQ== X-CSE-MsgGUID: B2TX9a5pQReGo3v5kpU3OA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492093" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:42 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 08/25] idpf: implement idc vport aux driver mtu change handler Date: Fri, 23 Aug 2024 22:19:07 -0500 Message-Id: <20240824031924.421-9-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay The only event an rdma vport aux driver cares about right now is an MTU change on its underlying vport. Implement and plumb the handler to signal the pre MTU change event and post MTU change events to the rdma vport aux driver. Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf.h | 2 ++ drivers/net/ethernet/intel/idpf/idpf_idc.c | 31 ++++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_lib.c | 11 +++++--- 3 files changed, 41 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 5ff01927d471..53b39985eefd 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -866,5 +866,7 @@ int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, void idpf_idc_deinit_core_aux_device(struct idc_rdma_core_dev_info *cdev_info); void idpf_idc_deinit_vport_aux_device(struct idc_rdma_vport_dev_info *vdev_info); void idpf_idc_issue_reset_event(struct idc_rdma_core_dev_info *cdev_info); +void idpf_idc_vdev_mtu_event(struct idc_rdma_vport_dev_info *vdev_info, + enum idc_rdma_event_type event_type); #endif /* !_IDPF_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_idc.c b/drivers/net/ethernet/intel/idpf/idpf_idc.c index ac6746fd87f9..53c3b79d66f5 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_idc.c +++ b/drivers/net/ethernet/intel/idpf/idpf_idc.c @@ -133,6 +133,37 @@ static int idpf_idc_init_aux_vport_dev(struct idpf_vport *vport) return 0; } +/** + * idpf_idc_vdev_mtu_event - Function to handle IDC vport mtu change events + * @vdev_info: idc vport device info pointer + * @event_type: type of event to pass to handler + */ +void idpf_idc_vdev_mtu_event(struct idc_rdma_vport_dev_info *vdev_info, + enum idc_rdma_event_type event_type) +{ + struct idc_rdma_vport_auxiliary_drv *iadrv; + struct idc_rdma_event event = { }; + struct auxiliary_device *adev; + + if (!vdev_info) + /* RDMA is not enabled */ + return; + + set_bit(event_type, event.type); + + device_lock(&vdev_info->adev->dev); + adev = 
vdev_info->adev; + if (!adev || !adev->dev.driver) + goto unlock; + iadrv = container_of(adev->dev.driver, + struct idc_rdma_vport_auxiliary_drv, + adrv.driver); + if (iadrv && iadrv->event_handler) + iadrv->event_handler(vdev_info, &event); +unlock: + device_unlock(&vdev_info->adev->dev); +} + /** * idpf_core_adev_release - function to be mapped to aux dev's release op * @dev: pointer to device to free diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index db0b7ffd8df1..0cf4d419b45b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -1944,6 +1944,8 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, idpf_vport_calc_num_q_desc(new_vport); break; case IDPF_SR_MTU_CHANGE: + idpf_idc_vdev_mtu_event(vport->vdev_info, + IDC_RDMA_EVENT_BEFORE_MTU_CHANGE); case IDPF_SR_RSC_CHANGE: break; default: @@ -1955,6 +1957,7 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, err = idpf_vport_queues_alloc(new_vport); if (err) goto free_vport; + if (current_state <= __IDPF_VPORT_DOWN) { idpf_send_delete_queues_msg(vport); } else { @@ -1991,15 +1994,17 @@ int idpf_initiate_soft_reset(struct idpf_vport *vport, if (current_state == __IDPF_VPORT_UP) err = idpf_vport_open(vport, false); - kfree(new_vport); - - return err; + goto free_vport; err_reset: idpf_vport_queues_rel(new_vport); free_vport: kfree(new_vport); + if (reset_cause == IDPF_SR_MTU_CHANGE) + idpf_idc_vdev_mtu_event(vport->vdev_info, + IDC_RDMA_EVENT_AFTER_MTU_CHANGE); + return err; } From patchwork Sat Aug 24 03:19:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776199 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A35C03FBB3; Sat, 24 Aug 2024 03:20:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469649; cv=none; b=HUfDsv5fyRg6uX/BWqGi8XsXqYHgs9B2pJMsvCtNNJd6i9v9ep936kZf/2tM4IRSw0Y+K2tdzI26QF9wT2bsp3DJ35HUuSVdD/1GE3WmUtEtLVQqrrnlVYH4j6Ee8zyP4cfyaqu5K564qD6w/R7xdfTWfaeQ/6526hKJIrPYRs0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469649; c=relaxed/simple; bh=VpC5DxicHL7DlM6AX9c5XobPrsl/XmGW6F3TOJAIC70=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=VVlJGBdL14G0P7UT0vQZrmF1J3MgXiFsOs4s+iaotBE5GhN/S1ESQxkZ8NK1nIL1iwzeLIpe8wiEdIeEGZwLX3oB+nnwxWVi8CuRc2Yy0Op7JIlK4rAqamnFWIO2kga0av4z1He/nVOQgr7VaQDqPjs+eF4DcAGsMKUqjnZzc7k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=RluHinoh; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="RluHinoh" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469645; x=1756005645; 
h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=VpC5DxicHL7DlM6AX9c5XobPrsl/XmGW6F3TOJAIC70=; b=RluHinoh/cwqV00KmdyoYolVY7G0FukEKa5wUoOwB8Ba5eetigF55c3f I6vYoUDlkPLhTxs4zKIDeLGeHgyD82DP8ENcm5msQzrwEwVyqrlPOdppj 3yq/VOQ+omYZKFdl0C6SJKHVKZ8FOt9bKFVJ7MhJWVyyhEkZbQE7gVh7z SoCzz0VB0NoYu2X1EA20d5KE8cD7z47GISf/jAQty7MMJen3l9LMaHGrX PpqXowHKB6FuQqMnfNmHbSt2Vo70WlEVEI6doteQwUNAD2ptnIQlrCZr9 EJgBdJ/ugnvEKUqotZkCqYhm/qhjnPhLQP4dYMrPnQND+Anm7YMZhI27Q Q==; X-CSE-ConnectionGUID: mLrP0Mi0QtObu+6djUsszg== X-CSE-MsgGUID: MjKAD/+RQ9eySdn6Sd/8CQ== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187791" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187791" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:43 -0700 X-CSE-ConnectionGUID: acBaNre2SQGtzXHBC78MzA== X-CSE-MsgGUID: RB3mYt3xT8+2WOQieGNtkw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492096" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:42 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Joshua Hay , Tatyana Nikolova Subject: [RFC v2 09/25] idpf: implement get lan mmio memory regions Date: Fri, 23 Aug 2024 22:19:08 -0500 Message-Id: <20240824031924.421-10-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Joshua Hay The rdma driver needs to map its own mmio regions for the sake of performance, meaning the idpf needs to avoid mapping portions of the bar space. However, to be vendor agnostic, the idpf cannot assume where these are and must avoid mapping hard coded regions. Instead, the idpf will map the bare minimum to load and communicate with the control plane, i.e. the mailbox registers and the reset state registers. The idpf will then call a new virtchnl op to fetch a list of mmio regions that it should map. All other registers will calculate which region they should store their address from. 
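[Editor's illustration, not part of the patch: a self-contained sketch of the lookup that "all other registers" end up doing once only selected BAR0 sub-regions are mapped: scan the table of mapped regions for the one containing an absolute register offset and translate it to an address inside that mapping. The types below are simplified stand-ins for the driver's region bookkeeping.]

#include <stdint.h>
#include <stdio.h>

struct mmio_region_model {
	uint8_t *addr;          /* start of the mapped window for this region */
	uint64_t addr_start;    /* BAR0 offset where the region begins */
	uint64_t addr_len;      /* length of the region */
};

/* Return the mapped address for an absolute BAR0 offset, or NULL if unmapped. */
static void *reg_addr(struct mmio_region_model *regs, int nregs,
		      uint64_t reg_offset)
{
	for (int i = 0; i < nregs; i++) {
		struct mmio_region_model *r = &regs[i];

		if (reg_offset >= r->addr_start &&
		    reg_offset < r->addr_start + r->addr_len)
			return r->addr + (reg_offset - r->addr_start);
	}

	/* Offsets handed out by the control plane should always hit a region. */
	return NULL;
}

int main(void)
{
	static uint8_t mapping[0x1000];
	struct mmio_region_model regs[] = {
		{ .addr = mapping, .addr_start = 0x2000, .addr_len = sizeof(mapping) },
	};

	printf("0x2100 -> %p\n", reg_addr(regs, 1, 0x2100));   /* inside the region */
	printf("0x9000 -> %p\n", reg_addr(regs, 1, 0x9000));   /* not mapped: NULL */
	return 0;
}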
v2: * Remove unnecessary casts * s/wr32/idpf_mbx_wr32/ * Cache pci_resource_start return val * Use automatic freeing * Use devm versions of ioremap * Fix up kcalloc params * Other general cleanup Signed-off-by: Joshua Hay Signed-off-by: Tatyana Nikolova --- drivers/net/ethernet/intel/idpf/idpf.h | 65 +++++++- .../net/ethernet/intel/idpf/idpf_controlq.c | 14 +- .../net/ethernet/intel/idpf/idpf_controlq.h | 15 +- drivers/net/ethernet/intel/idpf/idpf_dev.c | 35 +++-- drivers/net/ethernet/intel/idpf/idpf_idc.c | 26 +++- drivers/net/ethernet/intel/idpf/idpf_main.c | 32 +++- drivers/net/ethernet/intel/idpf/idpf_mem.h | 8 +- drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 31 ++-- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 143 +++++++++++++++++- drivers/net/ethernet/intel/idpf/virtchnl2.h | 30 +++- 10 files changed, 348 insertions(+), 51 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 53b39985eefd..963174dfe76e 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -190,7 +190,8 @@ struct idpf_vport_max_q { * @trigger_reset: Trigger a reset to occur */ struct idpf_reg_ops { - void (*ctlq_reg_init)(struct idpf_ctlq_create_info *cq); + void (*ctlq_reg_init)(struct idpf_adapter *adapter, + struct idpf_ctlq_create_info *cq); int (*intr_reg_init)(struct idpf_vport *vport); void (*mb_intr_reg_init)(struct idpf_adapter *adapter); void (*reset_reg_init)(struct idpf_adapter *adapter); @@ -198,6 +199,11 @@ struct idpf_reg_ops { enum idpf_flags trig_cause); }; +#define IDPF_PF_MBX_REGION_SZ 4096 +#define IDPF_PF_RSTAT_REGION_SZ 2048 +#define IDPF_VF_MBX_REGION_SZ 10240 +#define IDPF_VF_RSTAT_REGION_SZ 2048 + /** * struct idpf_dev_ops - Device specific operations * @reg_ops: Register operations @@ -206,6 +212,11 @@ struct idpf_dev_ops { struct idpf_reg_ops reg_ops; int (*idc_init)(struct idpf_adapter *adapter); + + resource_size_t mbx_reg_start; + resource_size_t mbx_reg_sz; + resource_size_t rstat_reg_start; + resource_size_t rstat_reg_sz; }; /** @@ -727,6 +738,35 @@ static inline u8 idpf_get_min_tx_pkt_len(struct idpf_adapter *adapter) return pkt_len ? 
pkt_len : IDPF_TX_MIN_PKT_LEN; } +/** + * idpf_get_mbx_reg_addr - Get BAR0 mailbox register address + * @adapter: private data struct + * @reg_offset: register offset value + * + * Based on the register offset, return the actual BAR0 register address + */ +static inline void __iomem *idpf_get_mbx_reg_addr(struct idpf_adapter *adapter, + resource_size_t reg_offset) +{ + return adapter->hw.mbx.addr + reg_offset; +} + +/** + * idpf_get_rstat_reg_addr - Get BAR0 rstat register address + * @adapter: private data struct + * @reg_offset: register offset value + * + * Based on the register offset, return the actual BAR0 register address + */ +static inline +void __iomem *idpf_get_rstat_reg_addr(struct idpf_adapter *adapter, + resource_size_t reg_offset) +{ + reg_offset -= adapter->dev_ops.rstat_reg_start; + + return adapter->hw.rstat.addr + reg_offset; +} + /** * idpf_get_reg_addr - Get BAR0 register address * @adapter: private data struct @@ -737,7 +777,26 @@ static inline u8 idpf_get_min_tx_pkt_len(struct idpf_adapter *adapter) static inline void __iomem *idpf_get_reg_addr(struct idpf_adapter *adapter, resource_size_t reg_offset) { - return (void __iomem *)(adapter->hw.hw_addr + reg_offset); + struct idpf_hw *hw = &adapter->hw; + + for (int i = 0; i < hw->num_lan_regs; i++) { + struct idpf_mmio_reg *region = &hw->lan_regs[i]; + + if (reg_offset >= region->addr_start && + reg_offset < (region->addr_start + region->addr_len)) { + reg_offset -= region->addr_start; + + return region->addr + reg_offset; + } + } + + /* It's impossible to hit this case with offsets from the CP. But if we + * do for any other reason, the kernel will panic on that register + * access. Might as well do it here to make it clear what's happening. + */ + BUG(); + + return NULL; } /** @@ -751,7 +810,7 @@ static inline bool idpf_is_reset_detected(struct idpf_adapter *adapter) if (!adapter->hw.arq) return true; - return !(readl(idpf_get_reg_addr(adapter, adapter->hw.arq->reg.len)) & + return !(readl(idpf_get_mbx_reg_addr(adapter, adapter->hw.arq->reg.len)) & adapter->hw.arq->reg.len_mask); } diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c index 4849590a5591..8d875cbcd1c4 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c +++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c @@ -36,19 +36,19 @@ static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, { /* Update tail to post pre-allocated buffers for rx queues */ if (is_rxq) - wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1)); + idpf_mbx_wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1)); /* For non-Mailbox control queues only TAIL need to be set */ if (cq->q_id != -1) return; /* Clear Head for both send or receive */ - wr32(hw, cq->reg.head, 0); + idpf_mbx_wr32(hw, cq->reg.head, 0); /* set starting point */ - wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa)); - wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa)); - wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask)); + idpf_mbx_wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa)); + idpf_mbx_wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa)); + idpf_mbx_wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask)); } /** @@ -329,7 +329,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq, */ dma_wmb(); - wr32(hw, cq->reg.tail, cq->next_to_use); + idpf_mbx_wr32(hw, cq->reg.tail, cq->next_to_use); err_unlock: mutex_unlock(&cq->cq_lock); @@ -518,7 +518,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw 
*hw, struct idpf_ctlq_info *cq, dma_wmb(); - wr32(hw, cq->reg.tail, cq->next_to_post); + idpf_mbx_wr32(hw, cq->reg.tail, cq->next_to_post); } mutex_unlock(&cq->cq_lock); diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.h b/drivers/net/ethernet/intel/idpf/idpf_controlq.h index c1aba09e9856..439d98faf0aa 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_controlq.h +++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.h @@ -94,12 +94,23 @@ struct idpf_mbxq_desc { u32 pf_vf_id; /* used by CP when sending to PF */ }; +#define IDPF_MMIO_REGION_NUM_DFLT_OTHERS 3 + +struct idpf_mmio_reg { + void __iomem *addr; + resource_size_t addr_start; + resource_size_t addr_len; +}; + /* Define the driver hardware struct to replace other control structs as needed * Align to ctlq_hw_info */ struct idpf_hw { - void __iomem *hw_addr; - resource_size_t hw_addr_len; + struct idpf_mmio_reg mbx; + struct idpf_mmio_reg rstat; + /* Array of remaining LAN BAR regions */ + int num_lan_regs; + struct idpf_mmio_reg *lan_regs; struct idpf_adapter *back; diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c index f4c56915b934..c364beb13ae9 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c @@ -9,9 +9,11 @@ /** * idpf_ctlq_reg_init - initialize default mailbox registers + * @adapter: adapter structure * @cq: pointer to the array of create control queues */ -static void idpf_ctlq_reg_init(struct idpf_ctlq_create_info *cq) +static void idpf_ctlq_reg_init(struct idpf_adapter *adapter, + struct idpf_ctlq_create_info *cq) { int i; @@ -21,22 +23,22 @@ static void idpf_ctlq_reg_init(struct idpf_ctlq_create_info *cq) switch (ccq->type) { case IDPF_CTLQ_TYPE_MAILBOX_TX: /* set head and tail registers in our local struct */ - ccq->reg.head = PF_FW_ATQH; - ccq->reg.tail = PF_FW_ATQT; - ccq->reg.len = PF_FW_ATQLEN; - ccq->reg.bah = PF_FW_ATQBAH; - ccq->reg.bal = PF_FW_ATQBAL; + ccq->reg.head = PF_FW_ATQH - adapter->dev_ops.mbx_reg_start; + ccq->reg.tail = PF_FW_ATQT - adapter->dev_ops.mbx_reg_start; + ccq->reg.len = PF_FW_ATQLEN - adapter->dev_ops.mbx_reg_start; + ccq->reg.bah = PF_FW_ATQBAH - adapter->dev_ops.mbx_reg_start; + ccq->reg.bal = PF_FW_ATQBAL - adapter->dev_ops.mbx_reg_start; ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M; ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M; ccq->reg.head_mask = PF_FW_ATQH_ATQH_M; break; case IDPF_CTLQ_TYPE_MAILBOX_RX: /* set head and tail registers in our local struct */ - ccq->reg.head = PF_FW_ARQH; - ccq->reg.tail = PF_FW_ARQT; - ccq->reg.len = PF_FW_ARQLEN; - ccq->reg.bah = PF_FW_ARQBAH; - ccq->reg.bal = PF_FW_ARQBAL; + ccq->reg.head = PF_FW_ARQH - adapter->dev_ops.mbx_reg_start; + ccq->reg.tail = PF_FW_ARQT - adapter->dev_ops.mbx_reg_start; + ccq->reg.len = PF_FW_ARQLEN - adapter->dev_ops.mbx_reg_start; + ccq->reg.bah = PF_FW_ARQBAH - adapter->dev_ops.mbx_reg_start; + ccq->reg.bal = PF_FW_ARQBAL - adapter->dev_ops.mbx_reg_start; ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M; ccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M; ccq->reg.head_mask = PF_FW_ARQH_ARQH_M; @@ -124,7 +126,7 @@ static int idpf_intr_reg_init(struct idpf_vport *vport) */ static void idpf_reset_reg_init(struct idpf_adapter *adapter) { - adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, PFGEN_RSTAT); + adapter->reset_reg.rstat = idpf_get_rstat_reg_addr(adapter, PFGEN_RSTAT); adapter->reset_reg.rstat_m = PFGEN_RSTAT_PFR_STATE_M; } @@ -138,9 +140,9 @@ static void idpf_trigger_reset(struct idpf_adapter *adapter, { u32 
reset_reg; - reset_reg = readl(idpf_get_reg_addr(adapter, PFGEN_CTRL)); + reset_reg = readl(idpf_get_rstat_reg_addr(adapter, PFGEN_CTRL)); writel(reset_reg | PFGEN_CTRL_PFSWR, - idpf_get_reg_addr(adapter, PFGEN_CTRL)); + idpf_get_rstat_reg_addr(adapter, PFGEN_CTRL)); } /** @@ -174,4 +176,9 @@ void idpf_dev_ops_init(struct idpf_adapter *adapter) idpf_reg_ops_init(adapter); adapter->dev_ops.idc_init = idpf_idc_register; + + adapter->dev_ops.mbx_reg_start = PF_FW_BASE; + adapter->dev_ops.mbx_reg_sz = IDPF_PF_MBX_REGION_SZ; + adapter->dev_ops.rstat_reg_start = PFGEN_RTRIG; + adapter->dev_ops.rstat_reg_sz = IDPF_PF_RSTAT_REGION_SZ; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_idc.c b/drivers/net/ethernet/intel/idpf/idpf_idc.c index 53c3b79d66f5..e3c603f693c0 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_idc.c +++ b/drivers/net/ethernet/intel/idpf/idpf_idc.c @@ -398,7 +398,7 @@ int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, enum idc_function_type ftype) { struct idc_rdma_core_dev_info *cdev_info; - int err; + int err, i; adapter->cdev_info = (struct idc_rdma_core_dev_info *) kzalloc(sizeof(struct idc_rdma_core_dev_info), GFP_KERNEL); @@ -411,14 +411,36 @@ int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, cdev_info->rdma_protocol = IDC_RDMA_PROTOCOL_ROCEV2; cdev_info->ftype = ftype; + cdev_info->mapped_mem_regions = + kcalloc(adapter->hw.num_lan_regs, + sizeof(struct idc_rdma_lan_mapped_mem_region), + GFP_KERNEL); + if (!cdev_info->mapped_mem_regions) { + err = -ENOMEM; + goto err_plug_aux_dev; + } + + cdev_info->num_memory_regions = cpu_to_le16(adapter->hw.num_lan_regs); + for (i = 0; i < adapter->hw.num_lan_regs; i++) { + cdev_info->mapped_mem_regions[i].region_addr = + adapter->hw.lan_regs[i].addr; + cdev_info->mapped_mem_regions[i].size = + cpu_to_le64(adapter->hw.lan_regs[i].addr_len); + cdev_info->mapped_mem_regions[i].start_offset = + cpu_to_le64(adapter->hw.lan_regs[i].addr_start); + } + idpf_idc_init_msix_data(adapter); err = idpf_plug_core_aux_dev(cdev_info); if (err) - goto err_plug_aux_dev; + goto err_free_mem_regions; return 0; +err_free_mem_regions: + kfree(cdev_info->mapped_mem_regions); + cdev_info->mapped_mem_regions = NULL; err_plug_aux_dev: kfree(cdev_info); adapter->cdev_info = NULL; diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c index db476b3314c8..1521c3267900 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_main.c +++ b/drivers/net/ethernet/intel/idpf/idpf_main.c @@ -101,15 +101,37 @@ static void idpf_shutdown(struct pci_dev *pdev) */ static int idpf_cfg_hw(struct idpf_adapter *adapter) { + resource_size_t res_start, mbx_start, rstat_start; struct pci_dev *pdev = adapter->pdev; struct idpf_hw *hw = &adapter->hw; + struct device *dev = &pdev->dev; + long len; + + res_start = pci_resource_start(pdev, 0); + + /* Map mailbox space for virtchnl communication */ + mbx_start = res_start + adapter->dev_ops.mbx_reg_start; + len = adapter->dev_ops.mbx_reg_sz; + hw->mbx.addr = devm_ioremap(dev, mbx_start, len); + if (!hw->mbx.addr) { + pci_err(pdev, "failed to allocate bar0 mbx region\n"); + + return -ENOMEM; + } + hw->mbx.addr_start = adapter->dev_ops.mbx_reg_start; + hw->mbx.addr_len = len; - hw->hw_addr = pcim_iomap_table(pdev)[0]; - if (!hw->hw_addr) { - pci_err(pdev, "failed to allocate PCI iomap table\n"); + /* Map rstat space for resets */ + rstat_start = res_start + adapter->dev_ops.rstat_reg_start; + len = adapter->dev_ops.rstat_reg_sz; + hw->rstat.addr = devm_ioremap(dev, 
rstat_start, len); + if (!hw->rstat.addr) { + pci_err(pdev, "failed to allocate bar0 rstat region\n"); return -ENOMEM; } + hw->rstat.addr_start = adapter->dev_ops.rstat_reg_start; + hw->rstat.addr_len = len; hw->back = adapter; @@ -156,9 +178,9 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) goto err_free; - err = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev)); + err = pci_request_mem_regions(pdev, pci_name(pdev)); if (err) { - pci_err(pdev, "pcim_iomap_regions failed %pe\n", ERR_PTR(err)); + pci_err(pdev, "pci_request_mem_regions failed %pe\n", ERR_PTR(err)); goto err_free; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_mem.h b/drivers/net/ethernet/intel/idpf/idpf_mem.h index b21a04fccf0f..6938bc4f3a03 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_mem.h +++ b/drivers/net/ethernet/intel/idpf/idpf_mem.h @@ -12,9 +12,9 @@ struct idpf_dma_mem { size_t size; }; -#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg))) -#define rd32(a, reg) readl((a)->hw_addr + (reg)) -#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg))) -#define rd64(a, reg) readq((a)->hw_addr + (reg)) +#define idpf_mbx_wr32(a, reg, value) writel((value), ((a)->mbx.addr + (reg))) +#define idpf_mbx_rd32(a, reg) readl((a)->mbx.addr + (reg)) +#define idpf_mbx_wr64(a, reg, value) writeq((value), ((a)->mbx.addr + (reg))) +#define idpf_mbx_rd64(a, reg) readq((a)->mbx.addr + (reg)) #endif /* _IDPF_MEM_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index db6a5951a594..5ad66b69c7c4 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -9,9 +9,11 @@ /** * idpf_vf_ctlq_reg_init - initialize default mailbox registers + * @adapter: adapter structure * @cq: pointer to the array of create control queues */ -static void idpf_vf_ctlq_reg_init(struct idpf_ctlq_create_info *cq) +static void idpf_vf_ctlq_reg_init(struct idpf_adapter *adapter, + struct idpf_ctlq_create_info *cq) { int i; @@ -21,22 +23,22 @@ static void idpf_vf_ctlq_reg_init(struct idpf_ctlq_create_info *cq) switch (ccq->type) { case IDPF_CTLQ_TYPE_MAILBOX_TX: /* set head and tail registers in our local struct */ - ccq->reg.head = VF_ATQH; - ccq->reg.tail = VF_ATQT; - ccq->reg.len = VF_ATQLEN; - ccq->reg.bah = VF_ATQBAH; - ccq->reg.bal = VF_ATQBAL; + ccq->reg.head = VF_ATQH - adapter->dev_ops.mbx_reg_start; + ccq->reg.tail = VF_ATQT - adapter->dev_ops.mbx_reg_start; + ccq->reg.len = VF_ATQLEN - adapter->dev_ops.mbx_reg_start; + ccq->reg.bah = VF_ATQBAH - adapter->dev_ops.mbx_reg_start; + ccq->reg.bal = VF_ATQBAL - adapter->dev_ops.mbx_reg_start; ccq->reg.len_mask = VF_ATQLEN_ATQLEN_M; ccq->reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M; ccq->reg.head_mask = VF_ATQH_ATQH_M; break; case IDPF_CTLQ_TYPE_MAILBOX_RX: /* set head and tail registers in our local struct */ - ccq->reg.head = VF_ARQH; - ccq->reg.tail = VF_ARQT; - ccq->reg.len = VF_ARQLEN; - ccq->reg.bah = VF_ARQBAH; - ccq->reg.bal = VF_ARQBAL; + ccq->reg.head = VF_ARQH - adapter->dev_ops.mbx_reg_start; + ccq->reg.tail = VF_ARQT - adapter->dev_ops.mbx_reg_start; + ccq->reg.len = VF_ARQLEN - adapter->dev_ops.mbx_reg_start; + ccq->reg.bah = VF_ARQBAH - adapter->dev_ops.mbx_reg_start; + ccq->reg.bal = VF_ARQBAL - adapter->dev_ops.mbx_reg_start; ccq->reg.len_mask = VF_ARQLEN_ARQLEN_M; ccq->reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M; ccq->reg.head_mask = VF_ARQH_ARQH_M; @@ -123,7 +125,7 @@ static int idpf_vf_intr_reg_init(struct idpf_vport 
*vport) */ static void idpf_vf_reset_reg_init(struct idpf_adapter *adapter) { - adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, VFGEN_RSTAT); + adapter->reset_reg.rstat = idpf_get_rstat_reg_addr(adapter, VFGEN_RSTAT); adapter->reset_reg.rstat_m = VFGEN_RSTAT_VFR_STATE_M; } @@ -172,4 +174,9 @@ void idpf_vf_dev_ops_init(struct idpf_adapter *adapter) idpf_vf_reg_ops_init(adapter); adapter->dev_ops.idc_init = idpf_idc_vf_register; + + adapter->dev_ops.mbx_reg_start = VF_BASE; + adapter->dev_ops.mbx_reg_sz = IDPF_VF_MBX_REGION_SZ; + adapter->dev_ops.rstat_reg_start = VFGEN_RSTAT; + adapter->dev_ops.rstat_reg_sz = IDPF_VF_RSTAT_REGION_SZ; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 2a9669017b2e..a9439e90bba8 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -894,6 +894,7 @@ static int idpf_send_get_caps_msg(struct idpf_adapter *adapter) caps.other_caps = cpu_to_le64(VIRTCHNL2_CAP_SRIOV | VIRTCHNL2_CAP_RDMA | + VIRTCHNL2_CAP_LAN_MEMORY_REGIONS | VIRTCHNL2_CAP_MACFILTER | VIRTCHNL2_CAP_SPLITQ_QSCHED | VIRTCHNL2_CAP_PROMISC | @@ -915,6 +916,122 @@ static int idpf_send_get_caps_msg(struct idpf_adapter *adapter) return 0; } +/** + * idpf_send_get_lan_memory_regions - Send virtchnl get LAN memory regions msg + * @adapter: Driver specific private struct + */ +static int idpf_send_get_lan_memory_regions(struct idpf_adapter *adapter) +{ + struct virtchnl2_get_lan_memory_regions *rcvd_regions __free(kfree); + struct idpf_vc_xn_params xn_params = { + .vc_op = VIRTCHNL2_OP_GET_LAN_MEMORY_REGIONS, + .recv_buf.iov_len = IDPF_CTLQ_MAX_BUF_LEN, + .timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC, + }; + int num_regions, size; + struct idpf_hw *hw; + ssize_t reply_sz; + int err = 0; + + rcvd_regions = kzalloc(IDPF_CTLQ_MAX_BUF_LEN, GFP_KERNEL); + if (!rcvd_regions) + return -ENOMEM; + + xn_params.recv_buf.iov_base = rcvd_regions; + reply_sz = idpf_vc_xn_exec(adapter, &xn_params); + if (reply_sz < 0) + return reply_sz; + + num_regions = le16_to_cpu(rcvd_regions->num_memory_regions); + size = struct_size(rcvd_regions, mem_reg, num_regions); + if (reply_sz < size) + return -EIO; + + if (size > IDPF_CTLQ_MAX_BUF_LEN) + return -EINVAL; + + hw = &adapter->hw; + hw->lan_regs = kcalloc(num_regions, sizeof(*hw->lan_regs), GFP_KERNEL); + if (!hw->lan_regs) + return -ENOMEM; + + for (int i = 0; i < num_regions; i++) { + hw->lan_regs[i].addr_len = + le64_to_cpu(rcvd_regions->mem_reg[i].size); + hw->lan_regs[i].addr_start = + le64_to_cpu(rcvd_regions->mem_reg[i].start_offset); + } + hw->num_lan_regs = num_regions; + + return err; +} + +/** + * idpf_calc_remaining_mmio_regs - calcuate MMIO regions outside mbx and rstat + * @adapter: Driver specific private structure + * + * Called when idpf_send_get_lan_memory_regions fails or is not supported. This + * will calculate the offsets and sizes for the regions before, in between, and + * after the mailbox and rstat MMIO mappings. 
+ */ +static int idpf_calc_remaining_mmio_regs(struct idpf_adapter *adapter) +{ + struct idpf_hw *hw = &adapter->hw; + + hw->num_lan_regs = IDPF_MMIO_REGION_NUM_DFLT_OTHERS; + hw->lan_regs = kcalloc(hw->num_lan_regs, sizeof(*hw->lan_regs), + GFP_KERNEL); + if (!hw->lan_regs) + return -ENOMEM; + + /* Region preceding mailbox */ + hw->lan_regs[0].addr_start = 0; + hw->lan_regs[0].addr_len = adapter->dev_ops.mbx_reg_start; + /* Region between mailbox and rstat */ + hw->lan_regs[1].addr_start = adapter->dev_ops.mbx_reg_start + + adapter->dev_ops.mbx_reg_sz; + hw->lan_regs[1].addr_len = adapter->dev_ops.rstat_reg_start - + hw->lan_regs[1].addr_start; + /* Region after rstat */ + hw->lan_regs[2].addr_start = adapter->dev_ops.rstat_reg_start + + adapter->dev_ops.rstat_reg_sz; + hw->lan_regs[2].addr_len = pci_resource_len(adapter->pdev, 0) - + hw->lan_regs[2].addr_start; + + return 0; +} + +/** + * idpf_map_lan_mmio_regs - map remaining LAN BAR regions + * @adapter: Driver specific private structure + */ +static int idpf_map_lan_mmio_regs(struct idpf_adapter *adapter) +{ + struct pci_dev *pdev = adapter->pdev; + struct idpf_hw *hw = &adapter->hw; + resource_size_t res_start; + + res_start = pci_resource_start(pdev, 0); + + for (int i = 0; i < hw->num_lan_regs; i++) { + resource_size_t start; + long len; + + len = hw->lan_regs[i].addr_len; + if (!len) + continue; + start = hw->lan_regs[i].addr_start + res_start; + + hw->lan_regs[i].addr = devm_ioremap(&pdev->dev, start, len); + if (!hw->lan_regs[i].addr) { + pci_err(pdev, "failed to allocate bar0 region\n"); + return -ENOMEM; + } + } + + return 0; +} + /** * idpf_vport_alloc_max_qs - Allocate max queues for a vport * @adapter: Driver specific private structure @@ -2826,7 +2943,7 @@ int idpf_init_dflt_mbx(struct idpf_adapter *adapter) struct idpf_hw *hw = &adapter->hw; int err; - adapter->dev_ops.reg_ops.ctlq_reg_init(ctlq_info); + adapter->dev_ops.reg_ops.ctlq_reg_init(adapter, ctlq_info); err = idpf_ctlq_init(hw, IDPF_NUM_DFLT_MBX_Q, ctlq_info); if (err) @@ -2986,6 +3103,30 @@ int idpf_vc_core_init(struct idpf_adapter *adapter) msleep(task_delay); } + if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LAN_MEMORY_REGIONS)) { + err = idpf_send_get_lan_memory_regions(adapter); + if (err) { + dev_err(&adapter->pdev->dev, "Failed to get LAN memory regions: %d\n", + err); + return -EINVAL; + } + } else { + /* Fallback to mapping the remaining regions of the entire BAR */ + err = idpf_calc_remaining_mmio_regs(adapter); + if (err) { + dev_err(&adapter->pdev->dev, "Failed to allocate bar0 region(s): %d\n", + err); + return -ENOMEM; + } + } + + err = idpf_map_lan_mmio_regs(adapter); + if (err) { + dev_err(&adapter->pdev->dev, "Failed to map bar0 region(s): %d\n", + err); + return -ENOMEM; + } + pci_sriov_set_totalvfs(adapter->pdev, idpf_get_max_vfs(adapter)); num_max_vports = idpf_get_max_vports(adapter); adapter->max_vports = num_max_vports; diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h index e6541152ca58..92ab03c4394b 100644 --- a/drivers/net/ethernet/intel/idpf/virtchnl2.h +++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h @@ -69,6 +69,8 @@ enum virtchnl2_op { VIRTCHNL2_OP_ADD_MAC_ADDR = 535, VIRTCHNL2_OP_DEL_MAC_ADDR = 536, VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE = 537, + /* Opcodes 538 through 548 are reserved. 
*/ + VIRTCHNL2_OP_GET_LAN_MEMORY_REGIONS = 549, }; /** @@ -202,7 +204,8 @@ enum virtchnl2_cap_other { VIRTCHNL2_CAP_RX_FLEX_DESC = BIT_ULL(17), VIRTCHNL2_CAP_PTYPE = BIT_ULL(18), VIRTCHNL2_CAP_LOOPBACK = BIT_ULL(19), - /* Other capability 20 is reserved */ + /* Other capability 20-21 is reserved */ + VIRTCHNL2_CAP_LAN_MEMORY_REGIONS = BIT_ULL(22), /* this must be the last capability */ VIRTCHNL2_CAP_OEM = BIT_ULL(63), @@ -1283,4 +1286,29 @@ struct virtchnl2_promisc_info { }; VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info); +/** + * struct virtchnl2_mem_region - MMIO memory region + * @start_offset: starting offset of the MMIO memory region + * @size: size of the MMIO memory region + */ +struct virtchnl2_mem_region { + __le64 start_offset; + __le64 size; +}; +VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mem_region); + +/** + * struct vitchnl2_mem_region - List of LAN MMIO memory regions + * @num_memory_regions: number of memory regions + * @mem_reg: List with memory region info + * + * PF/VF sends this message to learn what LAN MMIO memory regions it should map. + */ +struct virtchnl2_get_lan_memory_regions { + __le16 num_memory_regions; + u8 pad[6]; + struct virtchnl2_mem_region mem_reg[]; +}; +VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_get_lan_memory_regions); + #endif /* _VIRTCHNL_2_H_ */ From patchwork Sat Aug 24 03:19:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776198 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 374F743AD2; Sat, 24 Aug 2024 03:20:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469648; cv=none; b=aCJaZVh+tAQWFixAGphnGOYhql+H1L9zMBkGOTnIW/LGIjGvn4eG/Khz5RhGUa5e6g83ENlUwhtH29b73AZhkapE+8afgl32bHalH5FRELcfBacaNPp2ogJHuECyTRXZv45WFyPWlW9n5PVhpKBGnyys6sNRnLGvnDZ+TRxucPk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469648; c=relaxed/simple; bh=Y9rV1UTBx9VIlvbBNpYR099EK5hokcVXT+OWqPJp3Xk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=biFqdeVVHF15pMSq2peiBL7HNJU0PNNz+A1QUMyfgyWRQPgt4C5DZPCw5nWEaynJaPUKfMghbVUr95SAJgx2IikIJfLiRCN+SPYlfK16ClGmHlaNUcMp6mVWg1+G0W0dWon0UEunuIE13UGoBDUY7/2O4uCOHb+xchWeNE2l14s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=RGUZDNVn; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="RGUZDNVn" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469646; x=1756005646; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Y9rV1UTBx9VIlvbBNpYR099EK5hokcVXT+OWqPJp3Xk=; b=RGUZDNVnzgQRf7UeK1t56sra84pPLguvH2ZV1AhQR5ugSFVDUog96E0F WV5ebJQ9as1rFmV9RhJovRJi0rnhfhyDPWFgnmqMlgSmbmpDZlyLNiLjX 
8XQ8KAI1IHpUpNRHYFwegr1Qwj4AQzxfL3v1HVMEzMTRNp/161gJyV7i4 3qZDLMSJzCmKW8WoHb7u3SZPKQM0KVEC5IArqEOmB4wmKRNG1RFx0LDRY vyfCOuoCtONOeZI96wZTCKCVDDKgjHKIIGu2MMJkHugXyFRQ0LXVWCel7 VrHWRHXLKi/GbVxxfEqU9Y0KxOyxfARpLphu654OZTJ7n2himmR09ZGnF w==; X-CSE-ConnectionGUID: ROoWFDZ1Stmm/LY9nA83xA== X-CSE-MsgGUID: iBT0zb/xQgOT1G4Oil/EiA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187794" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187794" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:44 -0700 X-CSE-ConnectionGUID: kMW+uAQuSx+771QAvk8l2g== X-CSE-MsgGUID: HJ0aK5qnTWutxvsKW62TyA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492100" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:43 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Mustafa Ismail , Tatyana Nikolova Subject: [RFC v2 10/25] RDMA/irdma: Refactor GEN2 auxiliary driver Date: Fri, 23 Aug 2024 22:19:09 -0500 Message-Id: <20240824031924.421-11-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mustafa Ismail Refactor the irdma auxiliary driver and associated interfaces out of main.c and into a standalone GEN2-specific source file and rename as gen_2 driver. This is in preparation for adding GEN3 auxiliary drivers. Each HW generation will have its own gen-specific interface file. Additionally, move the Address Handle hash table and associated locks under rf struct. This will allow GEN3 code to migrate to use it easily. 
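[Editor's illustration, not part of the patch: a hypothetical skeleton of the per-generation layout this refactor moves to, with each HW generation keeping its own auxiliary driver object, probe/remove callbacks and id table in its own source file, while the module core only handles registration. Names here are illustrative; in the real patch the driver object is wrapped in struct idc_rdma_core_auxiliary_drv together with an event_handler, whereas this sketch uses the plain auxiliary bus API only.]

// SPDX-License-Identifier: GPL-2.0
#include <linux/auxiliary_bus.h>
#include <linux/module.h>

/* What a generation-specific file (e.g. the gen_2 one) would provide. */
static int gen2_probe(struct auxiliary_device *adev,
		      const struct auxiliary_device_id *id)
{
	/* generation-specific device setup would go here */
	return 0;
}

static void gen2_remove(struct auxiliary_device *adev)
{
	/* generation-specific teardown would go here */
}

static const struct auxiliary_device_id gen2_id_table[] = {
	{ .name = "ice.iwarp", },
	{ .name = "ice.roce", },
	{},
};
MODULE_DEVICE_TABLE(auxiliary, gen2_id_table);

static struct auxiliary_driver gen2_aux_drv = {
	.name = "gen_2",
	.id_table = gen2_id_table,
	.probe = gen2_probe,
	.remove = gen2_remove,
};

/* The module core only registers/unregisters the per-generation driver objects. */
static int __init example_init(void)
{
	return auxiliary_driver_register(&gen2_aux_drv);
}

static void __exit example_exit(void)
{
	auxiliary_driver_unregister(&gen2_aux_drv);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");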
Signed-off-by: Mustafa Ismail Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/Makefile | 1 + drivers/infiniband/hw/irdma/i40iw_if.c | 2 + drivers/infiniband/hw/irdma/icrdma_if.c | 265 +++++++++++++++++++++++ drivers/infiniband/hw/irdma/main.c | 272 +----------------------- drivers/infiniband/hw/irdma/main.h | 9 +- drivers/infiniband/hw/irdma/verbs.c | 16 +- 6 files changed, 290 insertions(+), 275 deletions(-) create mode 100644 drivers/infiniband/hw/irdma/icrdma_if.c diff --git a/drivers/infiniband/hw/irdma/Makefile b/drivers/infiniband/hw/irdma/Makefile index 48c3854235a0..2522e4ca650b 100644 --- a/drivers/infiniband/hw/irdma/Makefile +++ b/drivers/infiniband/hw/irdma/Makefile @@ -13,6 +13,7 @@ irdma-objs := cm.o \ hw.o \ i40iw_hw.o \ i40iw_if.o \ + icrdma_if.o \ icrdma_hw.o \ main.o \ pble.o \ diff --git a/drivers/infiniband/hw/irdma/i40iw_if.c b/drivers/infiniband/hw/irdma/i40iw_if.c index cc50a7070371..6fa807ef4545 100644 --- a/drivers/infiniband/hw/irdma/i40iw_if.c +++ b/drivers/infiniband/hw/irdma/i40iw_if.c @@ -75,6 +75,8 @@ static void i40iw_fill_device_info(struct irdma_device *iwdev, struct i40e_info struct irdma_pci_f *rf = iwdev->rf; rf->rdma_ver = IRDMA_GEN_1; + rf->sc_dev.hw = &rf->hw; + rf->sc_dev.hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_1; rf->gen_ops.request_reset = i40iw_request_reset; rf->pcidev = cdev_info->pcidev; rf->pf_id = cdev_info->fid; diff --git a/drivers/infiniband/hw/irdma/icrdma_if.c b/drivers/infiniband/hw/irdma/icrdma_if.c new file mode 100644 index 000000000000..5fcbf695a1d3 --- /dev/null +++ b/drivers/infiniband/hw/irdma/icrdma_if.c @@ -0,0 +1,265 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +// /* Copyright (c) 2015 - 2024 Intel Corporation */ +#include "main.h" + +static void icrdma_prep_tc_change(struct irdma_device *iwdev) +{ + iwdev->vsi.tc_change_pending = true; + irdma_sc_suspend_resume_qps(&iwdev->vsi, IRDMA_OP_SUSPEND); + + /* Wait for all qp's to suspend */ + wait_event_timeout(iwdev->suspend_wq, + !atomic_read(&iwdev->vsi.qp_suspend_reqs), + msecs_to_jiffies(IRDMA_EVENT_TIMEOUT_MS)); + irdma_ws_reset(&iwdev->vsi); +} + +static void icrdma_idc_event_handler(struct idc_rdma_core_dev_info *cdev_info, + struct idc_rdma_event *event) +{ + struct irdma_device *iwdev = dev_get_drvdata(&cdev_info->adev->dev); + struct irdma_l2params l2params = {}; + + if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_MTU_CHANGE)) { + ibdev_dbg(&iwdev->ibdev, "CLNT: new MTU = %d\n", iwdev->netdev->mtu); + if (iwdev->vsi.mtu != iwdev->netdev->mtu) { + l2params.mtu = iwdev->netdev->mtu; + l2params.mtu_changed = true; + irdma_log_invalid_mtu(l2params.mtu, &iwdev->rf->sc_dev); + irdma_change_l2params(&iwdev->vsi, &l2params); + } + } else if (*event->type & BIT(IDC_RDMA_EVENT_BEFORE_TC_CHANGE)) { + if (iwdev->vsi.tc_change_pending) + return; + + icrdma_prep_tc_change(iwdev); + } else if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_TC_CHANGE)) { + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; + + if (!iwdev->vsi.tc_change_pending) + return; + + l2params.tc_changed = true; + ibdev_dbg(&iwdev->ibdev, "CLNT: TC Change\n"); + + irdma_fill_qos_info(&l2params, &idc_priv->qos_info); + if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) + iwdev->dcb_vlan_mode = + l2params.num_tc > 1 && !l2params.dscp_mode; + irdma_change_l2params(&iwdev->vsi, &l2params); + } else if (*event->type & BIT(IDC_RDMA_EVENT_CRIT_ERR)) { + ibdev_warn(&iwdev->ibdev, "ICE OICR event notification: oicr = 0x%08x\n", + event->reg); + if (event->reg & 
IRDMAPFINT_OICR_PE_CRITERR_M) { + u32 pe_criterr; + + pe_criterr = readl(iwdev->rf->sc_dev.hw_regs[IRDMA_GLPE_CRITERR]); +#define IRDMA_Q1_RESOURCE_ERR 0x0001024d + if (pe_criterr != IRDMA_Q1_RESOURCE_ERR) { + ibdev_err(&iwdev->ibdev, "critical PE Error, GLPE_CRITERR=0x%08x\n", + pe_criterr); + iwdev->rf->reset = true; + } else { + ibdev_warn(&iwdev->ibdev, "Q1 Resource Check\n"); + } + } + if (event->reg & IRDMAPFINT_OICR_HMC_ERR_M) { + ibdev_err(&iwdev->ibdev, "HMC Error\n"); + iwdev->rf->reset = true; + } + if (event->reg & IRDMAPFINT_OICR_PE_PUSH_M) { + ibdev_err(&iwdev->ibdev, "PE Push Error\n"); + iwdev->rf->reset = true; + } + if (iwdev->rf->reset) + iwdev->rf->gen_ops.request_reset(iwdev->rf); + } +} + +/** + * icrdma_lan_register_qset - Register qset with LAN driver + * @vsi: vsi structure + * @tc_node: Traffic class node + */ +static int icrdma_lan_register_qset(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *tc_node) +{ + struct irdma_device *iwdev = vsi->back_vsi; + struct idc_rdma_core_dev_info *cdev_info = iwdev->rf->cdev; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; + struct iidc_rdma_qset_params qset = {}; + int ret; + + qset.qs_handle = tc_node->qs_handle; + qset.tc = tc_node->traffic_class; + qset.vport_id = vsi->vsi_idx; + ret = idc_priv->priv_ops->alloc_res(cdev_info, &qset); + if (ret) { + ibdev_dbg(&iwdev->ibdev, "WS: LAN alloc_res for rdma qset failed.\n"); + return ret; + } + + tc_node->l2_sched_node_id = qset.teid; + vsi->qos[tc_node->user_pri].l2_sched_node_id = qset.teid; + + return 0; +} + +/** + * icrdma_lan_unregister_qset - Unregister qset with LAN driver + * @vsi: vsi structure + * @tc_node: Traffic class node + */ +static void icrdma_lan_unregister_qset(struct irdma_sc_vsi *vsi, + struct irdma_ws_node *tc_node) +{ + struct irdma_device *iwdev = vsi->back_vsi; + struct idc_rdma_core_dev_info *cdev_info = iwdev->rf->cdev; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; + struct iidc_rdma_qset_params qset = {}; + + qset.qs_handle = tc_node->qs_handle; + qset.tc = tc_node->traffic_class; + qset.vport_id = vsi->vsi_idx; + qset.teid = tc_node->l2_sched_node_id; + + if (idc_priv->priv_ops->free_res(cdev_info, &qset)) + ibdev_dbg(&iwdev->ibdev, "WS: LAN free_res for rdma qset failed.\n"); +} + +static void icrdma_fill_device_info(struct irdma_device *iwdev, + struct idc_rdma_core_dev_info *cdev_info) +{ + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; + struct irdma_pci_f *rf = iwdev->rf; + + rf->sc_dev.hw = &rf->hw; + rf->iwdev = iwdev; + rf->cdev = cdev_info; + rf->hw.hw_addr = idc_priv->hw_addr; + rf->pcidev = cdev_info->pdev; + rf->hw.device = &rf->pcidev->dev; + rf->msix_count = cdev_info->msix_count; + rf->pf_id = idc_priv->pf_id; + rf->msix_entries = cdev_info->msix_entries; + rf->rdma_ver = IRDMA_GEN_2; + rf->sc_dev.hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_2; + + rf->gen_ops.register_qset = icrdma_lan_register_qset; + rf->gen_ops.unregister_qset = icrdma_lan_unregister_qset; + + rf->default_vsi.vsi_idx = idc_priv->vport_id; + rf->protocol_used = + cdev_info->rdma_protocol == IDC_RDMA_PROTOCOL_ROCEV2 ? 
+ IRDMA_ROCE_PROTOCOL_ONLY : IRDMA_IWARP_PROTOCOL_ONLY; + rf->rsrc_profile = IRDMA_HMC_PROFILE_DEFAULT; + rf->rst_to = IRDMA_RST_TIMEOUT_HZ; + rf->gen_ops.request_reset = irdma_request_reset; + rf->limits_sel = 7; + mutex_init(&rf->ah_tbl_lock); + + iwdev->netdev = idc_priv->netdev; + iwdev->vsi_num = idc_priv->vport_id; + iwdev->init_state = INITIAL_STATE; + iwdev->roce_cwnd = IRDMA_ROCE_CWND_DEFAULT; + iwdev->roce_ackcreds = IRDMA_ROCE_ACKCREDS_DEFAULT; + iwdev->rcv_wnd = IRDMA_CM_DEFAULT_RCV_WND_SCALED; + iwdev->rcv_wscale = IRDMA_CM_DEFAULT_RCV_WND_SCALE; + if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) + iwdev->roce_mode = true; +} + +static int icrdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *id) +{ + struct idc_rdma_core_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); + struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; + struct irdma_device *iwdev; + struct irdma_pci_f *rf; + struct irdma_l2params l2params = {}; + int err; + + iwdev = ib_alloc_device(irdma_device, ibdev); + if (!iwdev) + return -ENOMEM; + iwdev->rf = kzalloc(sizeof(*rf), GFP_KERNEL); + if (!iwdev->rf) { + ib_dealloc_device(&iwdev->ibdev); + return -ENOMEM; + } + + icrdma_fill_device_info(iwdev, cdev_info); + rf = iwdev->rf; + + err = irdma_ctrl_init_hw(rf); + if (err) + goto err_ctrl_init; + + l2params.mtu = iwdev->netdev->mtu; + irdma_fill_qos_info(&l2params, &idc_priv->qos_info); + if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) + iwdev->dcb_vlan_mode = l2params.num_tc > 1 && !l2params.dscp_mode; + + err = irdma_rt_init_hw(iwdev, &l2params); + if (err) + goto err_rt_init; + + err = irdma_ib_register_device(iwdev); + if (err) + goto err_ibreg; + + idc_priv->priv_ops->update_vport_filter(cdev_info, iwdev->vsi_num, + true); + + ibdev_dbg(&iwdev->ibdev, "INIT: Gen[%d] PF[%d] device probe success\n", + rf->rdma_ver, PCI_FUNC(rf->pcidev->devfn)); + + auxiliary_set_drvdata(aux_dev, iwdev); + + return 0; + +err_ibreg: + irdma_rt_deinit_hw(iwdev); +err_rt_init: + irdma_ctrl_deinit_hw(rf); +err_ctrl_init: + kfree(iwdev->rf); + ib_dealloc_device(&iwdev->ibdev); + + return err; +} + +static void icrdma_remove(struct auxiliary_device *aux_dev) +{ + struct idc_rdma_core_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); + struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; + struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; + struct irdma_device *iwdev = auxiliary_get_drvdata(aux_dev); + u8 rdma_ver = iwdev->rf->rdma_ver; + + idc_priv->priv_ops->update_vport_filter(cdev_info, + iwdev->vsi_num, false); + irdma_ib_unregister_device(iwdev); + pr_debug("INIT: Gen[%d] func[%d] device remove success\n", + rdma_ver, PCI_FUNC(cdev_info->pdev->devfn)); +} + +static const struct auxiliary_device_id icrdma_auxiliary_id_table[] = { + {.name = "ice.iwarp", }, + {.name = "ice.roce", }, + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, icrdma_auxiliary_id_table); + +struct idc_rdma_core_auxiliary_drv icrdma_core_auxiliary_drv = { + .adrv = { + .name = "gen_2", + .id_table = icrdma_auxiliary_id_table, + .probe = icrdma_probe, + .remove = icrdma_remove, + }, + .event_handler = icrdma_idc_event_handler, +}; diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c index 9b6f1d8bf06a..ee59ca10451c 100644 --- a/drivers/infiniband/hw/irdma/main.c +++ 
b/drivers/infiniband/hw/irdma/main.c @@ -39,19 +39,7 @@ static void irdma_unregister_notifiers(void) unregister_netdevice_notifier(&irdma_netdevice_notifier); } -static void irdma_prep_tc_change(struct irdma_device *iwdev) -{ - iwdev->vsi.tc_change_pending = true; - irdma_sc_suspend_resume_qps(&iwdev->vsi, IRDMA_OP_SUSPEND); - - /* Wait for all qp's to suspend */ - wait_event_timeout(iwdev->suspend_wq, - !atomic_read(&iwdev->vsi.qp_suspend_reqs), - msecs_to_jiffies(IRDMA_EVENT_TIMEOUT_MS)); - irdma_ws_reset(&iwdev->vsi); -} - -static void irdma_log_invalid_mtu(u16 mtu, struct irdma_sc_dev *dev) +void irdma_log_invalid_mtu(u16 mtu, struct irdma_sc_dev *dev) { if (mtu < IRDMA_MIN_MTU_IPV4) ibdev_warn(to_ibdev(dev), "MTU setting [%d] too low for RDMA traffic. Minimum MTU is 576 for IPv4\n", mtu); @@ -59,8 +47,8 @@ static void irdma_log_invalid_mtu(u16 mtu, struct irdma_sc_dev *dev) ibdev_warn(to_ibdev(dev), "MTU setting [%d] too low for RDMA traffic. Minimum MTU is 1280 for IPv6\\n", mtu); } -static void irdma_fill_qos_info(struct irdma_l2params *l2params, - struct iidc_rdma_qos_params *qos_info) +void irdma_fill_qos_info(struct irdma_l2params *l2params, + struct iidc_rdma_qos_params *qos_info) { int i; @@ -84,73 +72,11 @@ static void irdma_fill_qos_info(struct irdma_l2params *l2params, } } -static void irdma_idc_event_handler(struct idc_rdma_core_dev_info *cdev_info, - struct idc_rdma_event *event) -{ - struct irdma_device *iwdev = dev_get_drvdata(&cdev_info->adev->dev); - struct irdma_l2params l2params = {}; - - if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_MTU_CHANGE)) { - ibdev_dbg(&iwdev->ibdev, "CLNT: new MTU = %d\n", iwdev->netdev->mtu); - if (iwdev->vsi.mtu != iwdev->netdev->mtu) { - l2params.mtu = iwdev->netdev->mtu; - l2params.mtu_changed = true; - irdma_log_invalid_mtu(l2params.mtu, &iwdev->rf->sc_dev); - irdma_change_l2params(&iwdev->vsi, &l2params); - } - } else if (*event->type & BIT(IDC_RDMA_EVENT_BEFORE_TC_CHANGE)) { - if (iwdev->vsi.tc_change_pending) - return; - - irdma_prep_tc_change(iwdev); - } else if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_TC_CHANGE)) { - struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; - - if (!iwdev->vsi.tc_change_pending) - return; - - l2params.tc_changed = true; - ibdev_dbg(&iwdev->ibdev, "CLNT: TC Change\n"); - - irdma_fill_qos_info(&l2params, &idc_priv->qos_info); - if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) - iwdev->dcb_vlan_mode = - l2params.num_tc > 1 && !l2params.dscp_mode; - irdma_change_l2params(&iwdev->vsi, &l2params); - } else if (*event->type & BIT(IDC_RDMA_EVENT_CRIT_ERR)) { - ibdev_warn(&iwdev->ibdev, "ICE OICR event notification: oicr = 0x%08x\n", - event->reg); - if (event->reg & IRDMAPFINT_OICR_PE_CRITERR_M) { - u32 pe_criterr; - - pe_criterr = readl(iwdev->rf->sc_dev.hw_regs[IRDMA_GLPE_CRITERR]); -#define IRDMA_Q1_RESOURCE_ERR 0x0001024d - if (pe_criterr != IRDMA_Q1_RESOURCE_ERR) { - ibdev_err(&iwdev->ibdev, "critical PE Error, GLPE_CRITERR=0x%08x\n", - pe_criterr); - iwdev->rf->reset = true; - } else { - ibdev_warn(&iwdev->ibdev, "Q1 Resource Check\n"); - } - } - if (event->reg & IRDMAPFINT_OICR_HMC_ERR_M) { - ibdev_err(&iwdev->ibdev, "HMC Error\n"); - iwdev->rf->reset = true; - } - if (event->reg & IRDMAPFINT_OICR_PE_PUSH_M) { - ibdev_err(&iwdev->ibdev, "PE Push Error\n"); - iwdev->rf->reset = true; - } - if (iwdev->rf->reset) - iwdev->rf->gen_ops.request_reset(iwdev->rf); - } -} - /** * irdma_request_reset - Request a reset * @rf: RDMA PCI function */ -static void irdma_request_reset(struct 
irdma_pci_f *rf) +void irdma_request_reset(struct irdma_pci_f *rf) { struct idc_rdma_core_dev_info *cdev_info = rf->cdev; @@ -158,190 +84,6 @@ static void irdma_request_reset(struct irdma_pci_f *rf) cdev_info->ops->request_reset(rf->cdev, IDC_FUNC_RESET); } -/** - * irdma_lan_register_qset - Register qset with LAN driver - * @vsi: vsi structure - * @tc_node: Traffic class node - */ -static int irdma_lan_register_qset(struct irdma_sc_vsi *vsi, - struct irdma_ws_node *tc_node) -{ - struct irdma_device *iwdev = vsi->back_vsi; - struct idc_rdma_core_dev_info *cdev_info = iwdev->rf->cdev; - struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; - struct iidc_rdma_qset_params qset = {}; - int ret; - - qset.qs_handle = tc_node->qs_handle; - qset.tc = tc_node->traffic_class; - qset.vport_id = vsi->vsi_idx; - ret = idc_priv->priv_ops->alloc_res(cdev_info, &qset); - if (ret) { - ibdev_dbg(&iwdev->ibdev, "WS: LAN alloc_res for rdma qset failed.\n"); - return ret; - } - - tc_node->l2_sched_node_id = qset.teid; - vsi->qos[tc_node->user_pri].l2_sched_node_id = qset.teid; - - return 0; -} - -/** - * irdma_lan_unregister_qset - Unregister qset with LAN driver - * @vsi: vsi structure - * @tc_node: Traffic class node - */ -static void irdma_lan_unregister_qset(struct irdma_sc_vsi *vsi, - struct irdma_ws_node *tc_node) -{ - struct irdma_device *iwdev = vsi->back_vsi; - struct idc_rdma_core_dev_info *cdev_info = iwdev->rf->cdev; - struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; - struct iidc_rdma_qset_params qset = {}; - - qset.qs_handle = tc_node->qs_handle; - qset.tc = tc_node->traffic_class; - qset.vport_id = vsi->vsi_idx; - qset.teid = tc_node->l2_sched_node_id; - - if (idc_priv->priv_ops->free_res(cdev_info, &qset)) - ibdev_dbg(&iwdev->ibdev, "WS: LAN free_res for rdma qset failed.\n"); -} - -static void irdma_remove(struct auxiliary_device *aux_dev) -{ - struct idc_rdma_core_auxiliary_dev *idc_adev = - container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); - struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; - struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; - struct irdma_device *iwdev = auxiliary_get_drvdata(aux_dev); - - idc_priv->priv_ops->update_vport_filter(cdev_info, - iwdev->vsi_num, false); - irdma_ib_unregister_device(iwdev); - - pr_debug("INIT: Gen2 PF[%d] device remove success\n", PCI_FUNC(cdev_info->pdev->devfn)); -} - -static void irdma_fill_device_info(struct irdma_device *iwdev, - struct idc_rdma_core_dev_info *cdev_info) -{ - struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; - struct irdma_pci_f *rf = iwdev->rf; - - rf->sc_dev.hw = &rf->hw; - rf->iwdev = iwdev; - rf->cdev = cdev_info; - rf->hw.hw_addr = idc_priv->hw_addr; - rf->pcidev = cdev_info->pdev; - rf->hw.device = &rf->pcidev->dev; - rf->msix_count = cdev_info->msix_count; - rf->pf_id = idc_priv->pf_id; - rf->msix_entries = cdev_info->msix_entries; - - rf->gen_ops.register_qset = irdma_lan_register_qset; - rf->gen_ops.unregister_qset = irdma_lan_unregister_qset; - - rf->default_vsi.vsi_idx = idc_priv->vport_id; - rf->protocol_used = - cdev_info->rdma_protocol == IDC_RDMA_PROTOCOL_ROCEV2 ? 
- IRDMA_ROCE_PROTOCOL_ONLY : IRDMA_IWARP_PROTOCOL_ONLY; - rf->rdma_ver = IRDMA_GEN_2; - rf->rsrc_profile = IRDMA_HMC_PROFILE_DEFAULT; - rf->rst_to = IRDMA_RST_TIMEOUT_HZ; - rf->gen_ops.request_reset = irdma_request_reset; - rf->limits_sel = 7; - rf->iwdev = iwdev; - mutex_init(&iwdev->ah_tbl_lock); - - iwdev->netdev = idc_priv->netdev; - iwdev->vsi_num = idc_priv->vport_id; - iwdev->init_state = INITIAL_STATE; - iwdev->roce_cwnd = IRDMA_ROCE_CWND_DEFAULT; - iwdev->roce_ackcreds = IRDMA_ROCE_ACKCREDS_DEFAULT; - iwdev->rcv_wnd = IRDMA_CM_DEFAULT_RCV_WND_SCALED; - iwdev->rcv_wscale = IRDMA_CM_DEFAULT_RCV_WND_SCALE; - if (rf->protocol_used == IRDMA_ROCE_PROTOCOL_ONLY) - iwdev->roce_mode = true; -} - -static int irdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *id) -{ - struct idc_rdma_core_auxiliary_dev *idc_adev = - container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); - struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; - struct iidc_rdma_priv_dev_info *idc_priv = cdev_info->idc_priv; - struct irdma_device *iwdev; - struct irdma_pci_f *rf; - struct irdma_l2params l2params = {}; - int err; - - iwdev = ib_alloc_device(irdma_device, ibdev); - if (!iwdev) - return -ENOMEM; - iwdev->rf = kzalloc(sizeof(*rf), GFP_KERNEL); - if (!iwdev->rf) { - ib_dealloc_device(&iwdev->ibdev); - return -ENOMEM; - } - - irdma_fill_device_info(iwdev, cdev_info); - rf = iwdev->rf; - - err = irdma_ctrl_init_hw(rf); - if (err) - goto err_ctrl_init; - - l2params.mtu = iwdev->netdev->mtu; - irdma_fill_qos_info(&l2params, &idc_priv->qos_info); - if (iwdev->rf->protocol_used != IRDMA_IWARP_PROTOCOL_ONLY) - iwdev->dcb_vlan_mode = l2params.num_tc > 1 && !l2params.dscp_mode; - - err = irdma_rt_init_hw(iwdev, &l2params); - if (err) - goto err_rt_init; - - err = irdma_ib_register_device(iwdev); - if (err) - goto err_ibreg; - - idc_priv->priv_ops->update_vport_filter(cdev_info, iwdev->vsi_num, - true); - - ibdev_dbg(&iwdev->ibdev, "INIT: Gen2 PF[%d] device probe success\n", PCI_FUNC(rf->pcidev->devfn)); - auxiliary_set_drvdata(aux_dev, iwdev); - - return 0; - -err_ibreg: - irdma_rt_deinit_hw(iwdev); -err_rt_init: - irdma_ctrl_deinit_hw(rf); -err_ctrl_init: - kfree(iwdev->rf); - ib_dealloc_device(&iwdev->ibdev); - - return err; -} - -static const struct auxiliary_device_id irdma_auxiliary_id_table[] = { - {.name = "ice.iwarp", }, - {.name = "ice.roce", }, - {}, -}; - -MODULE_DEVICE_TABLE(auxiliary, irdma_auxiliary_id_table); - -static struct idc_rdma_core_auxiliary_drv irdma_auxiliary_drv = { - .adrv = { - .id_table = irdma_auxiliary_id_table, - .probe = irdma_probe, - .remove = irdma_remove, - }, - .event_handler = irdma_idc_event_handler, -}; - static int __init irdma_init_module(void) { int ret; @@ -353,10 +95,10 @@ static int __init irdma_init_module(void) return ret; } - ret = auxiliary_driver_register(&irdma_auxiliary_drv.adrv); + ret = auxiliary_driver_register(&icrdma_core_auxiliary_drv.adrv); if (ret) { auxiliary_driver_unregister(&i40iw_auxiliary_drv); - pr_err("Failed irdma auxiliary_driver_register() ret=%d\n", + pr_err("Failed icrdma(gen_2) auxiliary_driver_register() ret=%d\n", ret); return ret; } @@ -369,7 +111,7 @@ static int __init irdma_init_module(void) static void __exit irdma_exit_module(void) { irdma_unregister_notifiers(); - auxiliary_driver_unregister(&irdma_auxiliary_drv.adrv); + auxiliary_driver_unregister(&icrdma_core_auxiliary_drv.adrv); auxiliary_driver_unregister(&i40iw_auxiliary_drv); } diff --git a/drivers/infiniband/hw/irdma/main.h 
b/drivers/infiniband/hw/irdma/main.h index e81f37583138..7360e171f4c2 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -55,6 +55,7 @@ #include "puda.h" extern struct auxiliary_driver i40iw_auxiliary_drv; +extern struct idc_rdma_core_auxiliary_drv icrdma_core_auxiliary_drv; #define IRDMA_FW_VER_DEFAULT 2 #define IRDMA_HW_VER 2 @@ -329,6 +330,8 @@ struct irdma_pci_f { void *back_fcn; struct irdma_gen_ops gen_ops; struct irdma_device *iwdev; + DECLARE_HASHTABLE(ah_hash_tbl, 8); + struct mutex ah_tbl_lock; /* protect AH hash table access */ }; struct irdma_device { @@ -338,8 +341,6 @@ struct irdma_device { struct workqueue_struct *cleanup_wq; struct irdma_sc_vsi vsi; struct irdma_cm_core cm_core; - DECLARE_HASHTABLE(ah_hash_tbl, 8); - struct mutex ah_tbl_lock; /* protect AH hash table access */ u32 roce_cwnd; u32 roce_ackcreds; u32 vendor_id; @@ -555,4 +556,8 @@ int irdma_netdevice_event(struct notifier_block *notifier, unsigned long event, void *ptr); void irdma_add_ip(struct irdma_device *iwdev); void cqp_compl_worker(struct work_struct *work); +void irdma_fill_qos_info(struct irdma_l2params *l2params, + struct iidc_rdma_qos_params *qos_info); +void irdma_request_reset(struct irdma_pci_f *rf); +void irdma_log_invalid_mtu(u16 mtu, struct irdma_sc_dev *dev); #endif /* IRDMA_MAIN_H */ diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 6a107decb704..65466c1c72c5 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -4530,7 +4530,7 @@ static bool irdma_ah_exists(struct irdma_device *iwdev, new_ah->sc_ah.ah_info.dest_ip_addr[2] ^ new_ah->sc_ah.ah_info.dest_ip_addr[3]; - hash_for_each_possible(iwdev->ah_hash_tbl, ah, list, key) { + hash_for_each_possible(iwdev->rf->ah_hash_tbl, ah, list, key) { /* Set ah_valid and ah_id the same so memcmp can work */ new_ah->sc_ah.ah_info.ah_idx = ah->sc_ah.ah_info.ah_idx; new_ah->sc_ah.ah_info.ah_valid = ah->sc_ah.ah_info.ah_valid; @@ -4556,14 +4556,14 @@ static int irdma_destroy_ah(struct ib_ah *ibah, u32 ah_flags) struct irdma_ah *ah = to_iwah(ibah); if ((ah_flags & RDMA_DESTROY_AH_SLEEPABLE) && ah->parent_ah) { - mutex_lock(&iwdev->ah_tbl_lock); + mutex_lock(&iwdev->rf->ah_tbl_lock); if (!refcount_dec_and_test(&ah->parent_ah->refcnt)) { - mutex_unlock(&iwdev->ah_tbl_lock); + mutex_unlock(&iwdev->rf->ah_tbl_lock); return 0; } hash_del(&ah->parent_ah->list); kfree(ah->parent_ah); - mutex_unlock(&iwdev->ah_tbl_lock); + mutex_unlock(&iwdev->rf->ah_tbl_lock); } irdma_ah_cqp_op(iwdev->rf, &ah->sc_ah, IRDMA_OP_AH_DESTROY, @@ -4600,11 +4600,11 @@ static int irdma_create_user_ah(struct ib_ah *ibah, err = irdma_setup_ah(ibah, attr); if (err) return err; - mutex_lock(&iwdev->ah_tbl_lock); + mutex_lock(&iwdev->rf->ah_tbl_lock); if (!irdma_ah_exists(iwdev, ah)) { err = irdma_create_hw_ah(iwdev, ah, true); if (err) { - mutex_unlock(&iwdev->ah_tbl_lock); + mutex_unlock(&iwdev->rf->ah_tbl_lock); return err; } /* Add new AH to list */ @@ -4616,11 +4616,11 @@ static int irdma_create_user_ah(struct ib_ah *ibah, parent_ah->sc_ah.ah_info.dest_ip_addr[3]; ah->parent_ah = parent_ah; - hash_add(iwdev->ah_hash_tbl, &parent_ah->list, key); + hash_add(iwdev->rf->ah_hash_tbl, &parent_ah->list, key); refcount_set(&parent_ah->refcnt, 1); } } - mutex_unlock(&iwdev->ah_tbl_lock); + mutex_unlock(&iwdev->rf->ah_tbl_lock); uresp.ah_id = ah->sc_ah.ah_info.ah_idx; err = ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), udata->outlen)); From patchwork Sat Aug 24 03:19:10 
2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776200 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Mustafa Ismail , Tatyana Nikolova Subject: [RFC v2 11/25] RDMA/irdma: Add GEN3 core driver support Date: Fri, 23 Aug 2024 22:19:10 -0500 Message-Id: <20240824031924.421-12-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com>
References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mustafa Ismail Introduce support for the GEN3 auxiliary core driver, which is responsible for initializing PCI-level RDMA resources. Facilitate host-driver communication with the device's Control Plane (CP) to discover capabilities and perform privileged operations through an RDMA- specific messaging interface built atop the IDPF mailbox and virtchannel protocol. Establish the RDMA virtual channel message interface and incorporate operations to retrieve the hardware version and discover capabilities from the CP. Additionally, set up the RDMA MMIO regions and initialize the RF structure. Signed-off-by: Mustafa Ismail Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/Makefile | 2 + drivers/infiniband/hw/irdma/ctrl.c | 438 ++++++++++++++++++++--- drivers/infiniband/hw/irdma/defs.h | 30 +- drivers/infiniband/hw/irdma/hmc.c | 18 +- drivers/infiniband/hw/irdma/hmc.h | 19 +- drivers/infiniband/hw/irdma/hw.c | 4 + drivers/infiniband/hw/irdma/i40iw_if.c | 1 + drivers/infiniband/hw/irdma/icrdma_if.c | 2 + drivers/infiniband/hw/irdma/ig3rdma_hw.h | 11 + drivers/infiniband/hw/irdma/ig3rdma_if.c | 171 +++++++++ drivers/infiniband/hw/irdma/irdma.h | 3 + drivers/infiniband/hw/irdma/main.c | 55 +++ drivers/infiniband/hw/irdma/main.h | 4 + drivers/infiniband/hw/irdma/pble.c | 20 +- drivers/infiniband/hw/irdma/type.h | 63 +++- drivers/infiniband/hw/irdma/user.h | 4 +- drivers/infiniband/hw/irdma/virtchnl.c | 300 ++++++++++++++++ drivers/infiniband/hw/irdma/virtchnl.h | 96 +++++ 18 files changed, 1166 insertions(+), 75 deletions(-) create mode 100644 drivers/infiniband/hw/irdma/ig3rdma_hw.h create mode 100644 drivers/infiniband/hw/irdma/ig3rdma_if.c create mode 100644 drivers/infiniband/hw/irdma/virtchnl.c create mode 100644 drivers/infiniband/hw/irdma/virtchnl.h diff --git a/drivers/infiniband/hw/irdma/Makefile b/drivers/infiniband/hw/irdma/Makefile index 2522e4ca650b..3aa63b913377 100644 --- a/drivers/infiniband/hw/irdma/Makefile +++ b/drivers/infiniband/hw/irdma/Makefile @@ -13,6 +13,7 @@ irdma-objs := cm.o \ hw.o \ i40iw_hw.o \ i40iw_if.o \ + ig3rdma_if.o\ icrdma_if.o \ icrdma_hw.o \ main.o \ @@ -23,6 +24,7 @@ irdma-objs := cm.o \ uk.o \ utils.o \ verbs.o \ + virtchnl.o \ ws.o \ CFLAGS_trace.o = -I$(src) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 6aed6169c07d..9d7b151a6b95 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -2794,7 +2794,10 @@ static u64 irdma_sc_decode_fpm_commit(struct irdma_sc_dev *dev, __le64 *buf, obj_info[rsrc_idx].cnt = (u32)FLD_RS_64(dev, temp, IRDMA_COMMIT_FPM_CQCNT); break; case IRDMA_HMC_IW_APBVT_ENTRY: - obj_info[rsrc_idx].cnt = 1; + if (dev->hw_attrs.uk_attrs.hw_rev <= IRDMA_GEN_2) + obj_info[rsrc_idx].cnt = 1; + else + obj_info[rsrc_idx].cnt = 0; break; default: obj_info[rsrc_idx].cnt = (u32)temp; @@ -2829,7 +2832,8 @@ irdma_sc_parse_fpm_commit_buf(struct irdma_sc_dev *dev, __le64 *buf, IRDMA_HMC_IW_QP); irdma_sc_decode_fpm_commit(dev, buf, 8, info, IRDMA_HMC_IW_CQ); - /* skiping RSRVD */ + irdma_sc_decode_fpm_commit(dev, buf, 16, info, + IRDMA_HMC_IW_SRQ); irdma_sc_decode_fpm_commit(dev, buf, 24, info, IRDMA_HMC_IW_HTE); irdma_sc_decode_fpm_commit(dev, buf, 32, info, @@ -2864,15 +2868,17 @@ irdma_sc_parse_fpm_commit_buf(struct irdma_sc_dev *dev, __le64 *buf, IRDMA_HMC_IW_HDR); 
irdma_sc_decode_fpm_commit(dev, buf, 152, info, IRDMA_HMC_IW_MD); - irdma_sc_decode_fpm_commit(dev, buf, 160, info, - IRDMA_HMC_IW_OOISC); - irdma_sc_decode_fpm_commit(dev, buf, 168, info, - IRDMA_HMC_IW_OOISCFFL); + if (dev->cqp->protocol_used == IRDMA_IWARP_PROTOCOL_ONLY) { + irdma_sc_decode_fpm_commit(dev, buf, 160, info, + IRDMA_HMC_IW_OOISC); + irdma_sc_decode_fpm_commit(dev, buf, 168, info, + IRDMA_HMC_IW_OOISCFFL); + } } /* searching for the last object in HMC to find the size of the HMC area. */ for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) { - if (info[i].base > max_base) { + if (info[i].base > max_base && info[i].cnt) { max_base = info[i].base; last_hmc_obj = i; } @@ -2937,6 +2943,14 @@ static int irdma_sc_parse_fpm_query_buf(struct irdma_sc_dev *dev, __le64 *buf, hmc_info->first_sd_index = (u16)FIELD_GET(IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX, temp); max_pe_sds = (u16)FIELD_GET(IRDMA_QUERY_FPM_MAX_PE_SDS, temp); + /* Reduce SD count for unprivleged functions by 1 to account for PBLE + * backing page rounding + */ + if (dev->hw_attrs.uk_attrs.hw_rev <= IRDMA_GEN_2 && + (hmc_info->hmc_fn_id >= dev->hw_attrs.first_hw_vf_fpm_id || + !dev->privileged)) + max_pe_sds--; + hmc_fpm_misc->max_sds = max_pe_sds; hmc_info->sd_table.sd_cnt = max_pe_sds + hmc_info->first_sd_index; get_64bit_val(buf, 8, &temp); @@ -2949,11 +2963,17 @@ static int irdma_sc_parse_fpm_query_buf(struct irdma_sc_dev *dev, __le64 *buf, size = (u32)(temp >> 32); obj_info[IRDMA_HMC_IW_CQ].size = BIT_ULL(size); + irdma_sc_decode_fpm_query(buf, 24, obj_info, IRDMA_HMC_IW_SRQ); irdma_sc_decode_fpm_query(buf, 32, obj_info, IRDMA_HMC_IW_HTE); irdma_sc_decode_fpm_query(buf, 40, obj_info, IRDMA_HMC_IW_ARP); - obj_info[IRDMA_HMC_IW_APBVT_ENTRY].size = 8192; - obj_info[IRDMA_HMC_IW_APBVT_ENTRY].max_cnt = 1; + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + obj_info[IRDMA_HMC_IW_APBVT_ENTRY].size = 0; + obj_info[IRDMA_HMC_IW_APBVT_ENTRY].max_cnt = 0; + } else { + obj_info[IRDMA_HMC_IW_APBVT_ENTRY].size = 8192; + obj_info[IRDMA_HMC_IW_APBVT_ENTRY].max_cnt = 1; + } irdma_sc_decode_fpm_query(buf, 48, obj_info, IRDMA_HMC_IW_MR); irdma_sc_decode_fpm_query(buf, 56, obj_info, IRDMA_HMC_IW_XF); @@ -2962,7 +2982,7 @@ static int irdma_sc_parse_fpm_query_buf(struct irdma_sc_dev *dev, __le64 *buf, obj_info[IRDMA_HMC_IW_XFFL].max_cnt = (u32)temp; obj_info[IRDMA_HMC_IW_XFFL].size = 4; hmc_fpm_misc->xf_block_size = FIELD_GET(IRDMA_QUERY_FPM_XFBLOCKSIZE, temp); - if (!hmc_fpm_misc->xf_block_size) + if (obj_info[IRDMA_HMC_IW_XF].max_cnt && !hmc_fpm_misc->xf_block_size) return -EINVAL; irdma_sc_decode_fpm_query(buf, 72, obj_info, IRDMA_HMC_IW_Q1); @@ -2998,17 +3018,30 @@ static int irdma_sc_parse_fpm_query_buf(struct irdma_sc_dev *dev, __le64 *buf, obj_info[IRDMA_HMC_IW_RRFFL].max_cnt) return -EINVAL; + if (!obj_info[IRDMA_HMC_IW_XF].max_cnt) + obj_info[IRDMA_HMC_IW_RRF].max_cnt = IRDMA_HMC_MIN_RRF; + irdma_sc_decode_fpm_query(buf, 144, obj_info, IRDMA_HMC_IW_HDR); irdma_sc_decode_fpm_query(buf, 152, obj_info, IRDMA_HMC_IW_MD); - irdma_sc_decode_fpm_query(buf, 160, obj_info, IRDMA_HMC_IW_OOISC); - - get_64bit_val(buf, 168, &temp); - obj_info[IRDMA_HMC_IW_OOISCFFL].max_cnt = (u32)temp; - obj_info[IRDMA_HMC_IW_OOISCFFL].size = 4; - hmc_fpm_misc->ooiscf_block_size = FIELD_GET(IRDMA_QUERY_FPM_OOISCFBLOCKSIZE, temp); - if (!hmc_fpm_misc->ooiscf_block_size && - obj_info[IRDMA_HMC_IW_OOISCFFL].max_cnt) - return -EINVAL; + + if (dev->cqp->protocol_used == IRDMA_IWARP_PROTOCOL_ONLY) { + irdma_sc_decode_fpm_query(buf, 160, obj_info, 
IRDMA_HMC_IW_OOISC); + + get_64bit_val(buf, 168, &temp); + obj_info[IRDMA_HMC_IW_OOISCFFL].max_cnt = (u32)temp; + obj_info[IRDMA_HMC_IW_OOISCFFL].size = 4; + hmc_fpm_misc->ooiscf_block_size = FIELD_GET(IRDMA_QUERY_FPM_OOISCFBLOCKSIZE, temp); + if (!hmc_fpm_misc->ooiscf_block_size && + obj_info[IRDMA_HMC_IW_OOISCFFL].max_cnt) + return -EINVAL; + } + + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + get_64bit_val(buf, 176, &temp); + hmc_fpm_misc->loc_mem_pages = (u32)FIELD_GET(IRDMA_QUERY_FPM_LOC_MEM_PAGES, temp); + if (!hmc_fpm_misc->loc_mem_pages) + return -EINVAL; + } return 0; } @@ -4335,6 +4368,26 @@ int irdma_sc_init_iw_hmc(struct irdma_sc_dev *dev, u8 hmc_fn_id) return ret_code; } +/** + * irdma_set_loc_mem() - set a local memory bit field + * @buf: ptr to a buffer where local memory gets enabled + */ +static void irdma_set_loc_mem(__le64 *buf) +{ + u64 loc_mem_en = BIT_ULL(ENABLE_LOC_MEM); + u32 offset; + u64 temp; + + for (offset = 0; offset < IRDMA_COMMIT_FPM_BUF_SIZE; + offset += sizeof(__le64)) { + if (offset == IRDMA_PBLE_COMMIT_OFFSET) + continue; + get_64bit_val(buf, offset, &temp); + if (temp) + set_64bit_val(buf, offset, temp | loc_mem_en); + } +} + /** * irdma_sc_cfg_iw_fpm() - commits hmc obj cnt values using cqp * command and populates fpm base address in hmc_info @@ -4356,7 +4409,7 @@ static int irdma_sc_cfg_iw_fpm(struct irdma_sc_dev *dev, u8 hmc_fn_id) set_64bit_val(buf, 0, (u64)obj_info[IRDMA_HMC_IW_QP].cnt); set_64bit_val(buf, 8, (u64)obj_info[IRDMA_HMC_IW_CQ].cnt); - set_64bit_val(buf, 16, (u64)0); /* RSRVD */ + set_64bit_val(buf, 16, (u64)obj_info[IRDMA_HMC_IW_SRQ].cnt); set_64bit_val(buf, 24, (u64)obj_info[IRDMA_HMC_IW_HTE].cnt); set_64bit_val(buf, 32, (u64)obj_info[IRDMA_HMC_IW_ARP].cnt); set_64bit_val(buf, 40, (u64)0); /* RSVD */ @@ -4383,7 +4436,9 @@ static int irdma_sc_cfg_iw_fpm(struct irdma_sc_dev *dev, u8 hmc_fn_id) (u64)obj_info[IRDMA_HMC_IW_OOISC].cnt); set_64bit_val(buf, 168, (u64)obj_info[IRDMA_HMC_IW_OOISCFFL].cnt); - + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3 && + dev->hmc_fpm_misc.loc_mem_pages) + irdma_set_loc_mem(buf); commit_fpm_mem.pa = dev->fpm_commit_buf_pa; commit_fpm_mem.va = dev->fpm_commit_buf; @@ -4592,6 +4647,7 @@ static bool irdma_cqp_ring_full(struct irdma_sc_cqp *cqp) static u32 irdma_est_sd(struct irdma_sc_dev *dev, struct irdma_hmc_info *hmc_info) { + struct irdma_hmc_obj_info *pble_info; int i; u64 size = 0; u64 sd; @@ -4600,12 +4656,22 @@ static u32 irdma_est_sd(struct irdma_sc_dev *dev, if (i != IRDMA_HMC_IW_PBLE) size += round_up(hmc_info->hmc_obj[i].cnt * hmc_info->hmc_obj[i].size, 512); - size += round_up(hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt * - hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].size, 512); + + pble_info = &hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE]; + if (dev->privileged) + size += round_up(pble_info->cnt * pble_info->size, 512); if (size & 0x1FFFFF) sd = (size >> 21) + 1; /* add 1 for remainder */ else sd = size >> 21; + if (!dev->privileged && !dev->hmc_fpm_misc.loc_mem_pages) { + /* 2MB alignment for VF PBLE HMC */ + size = pble_info->cnt * pble_info->size; + if (size & 0x1FFFFF) + sd += (size >> 21) + 1; /* add 1 for remainder */ + else + sd += size >> 21; + } if (sd > 0xFFFFFFFF) { ibdev_dbg(to_ibdev(dev), "HMC: sd overflow[%lld]\n", sd); sd = 0xFFFFFFFF - 1; @@ -4785,6 +4851,272 @@ static void cfg_fpm_value_gen_2(struct irdma_sc_dev *dev, hmc_fpm_misc->ooiscf_block_size; } +/** + * irdma_get_rsrc_mem_config - configure resources if local memory or host + * @dev: sc device struct + * @is_mrte_loc_mem: if 
true, MR's to be in local memory because sd=loc pages + * + * Only mr can be configured host or local memory if qp's are in local memory. + * If qp is in local memory, then all resource object will be in local memory + * except mr which can be either host or local memory. The only exception + * is pble's which are always in host memory. + */ +static void irdma_get_rsrc_mem_config(struct irdma_sc_dev *dev, bool is_mrte_loc_mem) +{ + struct irdma_hmc_info *hmc_info = dev->hmc_info; + int i; + + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) + hmc_info->hmc_obj[i].mem_loc = IRDMA_LOC_MEM; + + if (dev->feature_info[IRDMA_OBJ_1] && !is_mrte_loc_mem) { + u8 mem_type; + + mem_type = (u8)FIELD_GET(IRDMA_MR_MEM_LOC, dev->feature_info[IRDMA_OBJ_1]); + + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].mem_loc = + (mem_type & IRDMA_OBJ_LOC_MEM_BIT) ? + IRDMA_LOC_MEM : IRDMA_HOST_MEM; + } else { + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].mem_loc = IRDMA_LOC_MEM; + } + + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].mem_loc = IRDMA_HOST_MEM; + + ibdev_dbg(to_ibdev(dev), "HMC: INFO: mrte_mem_loc = %d pble = %d\n", + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].mem_loc, + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].mem_loc); +} + +/** + * irdma_cfg_sd_mem - allocate sd memory + * @dev: sc device struct + * @hmc_info: ptr to irdma_hmc_obj_info struct + */ +static int irdma_cfg_sd_mem(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info) +{ + struct irdma_virt_mem virt_mem; + u32 mem_size; + + mem_size = sizeof(struct irdma_hmc_sd_entry) * hmc_info->sd_table.sd_cnt; + virt_mem.size = mem_size; + virt_mem.va = kzalloc(virt_mem.size, GFP_KERNEL); + if (!virt_mem.va) + return -ENOMEM; + hmc_info->sd_table.sd_entry = virt_mem.va; + + return 0; +} + +/** + * irdma_get_objs_pages - get number of 2M pages needed + * @dev: sc device struct + * @hmc_info: pointer to the HMC configuration information struct + * @mem_loc: pages for local or host memory + */ +static u32 irdma_get_objs_pages(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info, + enum irdma_hmc_obj_mem mem_loc) +{ + u64 size = 0; + int i; + + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) { + if (hmc_info->hmc_obj[i].mem_loc == mem_loc) { + size += round_up(hmc_info->hmc_obj[i].cnt * + hmc_info->hmc_obj[i].size, 512); + } + } + + return DIV_ROUND_UP(size, IRDMA_HMC_PAGE_SIZE); +} + +/** + * irdma_set_host_hmc_rsrc_gen_3 - calculate host hmc resources for gen 3 + * @dev: sc device struct + */ +static void irdma_set_host_hmc_rsrc_gen_3(struct irdma_sc_dev *dev) +{ + struct irdma_hmc_fpm_misc *hmc_fpm_misc; + struct irdma_hmc_info *hmc_info; + enum irdma_hmc_obj_mem mrte_loc; + u32 mrwanted, pblewanted; + u32 avail_sds, mr_sds; + + hmc_info = dev->hmc_info; + hmc_fpm_misc = &dev->hmc_fpm_misc; + avail_sds = hmc_fpm_misc->max_sds; + mrte_loc = hmc_info->hmc_obj[IRDMA_HMC_IW_MR].mem_loc; + mrwanted = hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt; + pblewanted = hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].max_cnt; + + if (mrte_loc == IRDMA_HOST_MEM && avail_sds > IRDMA_MIN_PBLE_PAGES) { + mr_sds = avail_sds - IRDMA_MIN_PBLE_PAGES; + mrwanted = min(mrwanted, mr_sds * MAX_MR_PER_SD); + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt = mrwanted; + avail_sds -= DIV_ROUND_UP(mrwanted, MAX_MR_PER_SD); + } + + pblewanted = min(pblewanted, avail_sds * MAX_PBLE_PER_SD); + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt = pblewanted; +} + +/** + * irdma_set_loc_hmc_rsrc_gen_3 - calculate hmc resources for gen 3 + * @dev: sc device struct + * @max_pages: max local memory available + * @qpwanted: number of qp's wanted 
+ */ +static int irdma_set_loc_hmc_rsrc_gen_3(struct irdma_sc_dev *dev, + u32 max_pages, + u32 qpwanted) +{ + struct irdma_hmc_fpm_misc *hmc_fpm_misc; + u32 xf_cnt, timer_cnt, pages_needed; + struct irdma_hmc_info *hmc_info; + u32 ird, ord, min_ird; + + hmc_info = dev->hmc_info; + hmc_fpm_misc = &dev->hmc_fpm_misc; + ird = dev->hw_attrs.max_hw_ird; + ord = dev->hw_attrs.max_hw_ord; + min_ird = IRDMA_MIN_IRD; + + hmc_info->hmc_obj[IRDMA_HMC_IW_HDR].cnt = qpwanted; + hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt = qpwanted; + + hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt = + min(hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt, qpwanted * 2); + + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt = + min(qpwanted * 8, hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].max_cnt); + + hmc_info->hmc_obj[IRDMA_HMC_IW_RRF].cnt = + min(hmc_info->hmc_obj[IRDMA_HMC_IW_RRF].max_cnt, + IRDMA_RRF_MULTIPLIER * qpwanted); + + if (hmc_info->hmc_obj[IRDMA_HMC_IW_RRFFL].max_cnt) + hmc_info->hmc_obj[IRDMA_HMC_IW_RRFFL].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_RRF].cnt / + hmc_fpm_misc->rrf_block_size; + + xf_cnt = IRDMA_XF_MULTIPLIER * qpwanted; + xf_cnt = min(hmc_info->hmc_obj[IRDMA_HMC_IW_XF].max_cnt, xf_cnt); + hmc_info->hmc_obj[IRDMA_HMC_IW_XF].cnt = xf_cnt; + + if (xf_cnt) + hmc_info->hmc_obj[IRDMA_HMC_IW_XFFL].cnt = + xf_cnt / hmc_fpm_misc->xf_block_size; + + timer_cnt = (round_up(qpwanted, 512) / 512 + 1) * + hmc_fpm_misc->timer_bucket; + hmc_info->hmc_obj[IRDMA_HMC_IW_TIMER].cnt = + min(timer_cnt, hmc_info->hmc_obj[IRDMA_HMC_IW_TIMER].cnt); + + do { + hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].cnt = ird * 2 * qpwanted; + hmc_info->hmc_obj[IRDMA_HMC_IW_Q1FL].cnt = + hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].cnt / hmc_fpm_misc->q1_block_size; + + pages_needed = irdma_get_objs_pages(dev, hmc_info, IRDMA_LOC_MEM); + if (pages_needed <= max_pages) + break; + + ird /= 2; + ord /= 2; + } while (ird >= IRDMA_MIN_IRD); + + if (ird < IRDMA_MIN_IRD) { + ibdev_dbg(to_ibdev(dev), "HMC: FAIL: IRD=%u Q1 CNT = %u\n", + ird, hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].cnt); + return -EINVAL; + } + + dev->hw_attrs.max_hw_ird = ird; + dev->hw_attrs.max_hw_ord = ord; + hmc_fpm_misc->max_sds -= pages_needed; + + return 0; +} + +/** + * cfg_fpm_value_gen_3 - configure fpm for gen 3 + * @dev: sc device struct + * @hmc_info: ptr to irdma_hmc_obj_info struct + * @hmc_fpm_misc: ptr to fpm data + */ +static int cfg_fpm_value_gen_3(struct irdma_sc_dev *dev, + struct irdma_hmc_info *hmc_info, + struct irdma_hmc_fpm_misc *hmc_fpm_misc) +{ + enum irdma_hmc_obj_mem mrte_loc; + u32 mrwanted, qpwanted; + int i, ret_code = 0; + u32 loc_mem_pages; + bool is_mrte_loc_mem; + + loc_mem_pages = hmc_fpm_misc->loc_mem_pages; + is_mrte_loc_mem = hmc_fpm_misc->loc_mem_pages == hmc_fpm_misc->max_sds ? 
+ true : false; + + irdma_get_rsrc_mem_config(dev, is_mrte_loc_mem); + mrte_loc = hmc_info->hmc_obj[IRDMA_HMC_IW_MR].mem_loc; + + if (is_mrte_loc_mem) + loc_mem_pages -= IRDMA_MIN_PBLE_PAGES; + + ibdev_dbg(to_ibdev(dev), + "HMC: mrte_loc %d loc_mem %u fpm max sds %u host_obj %d\n", + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].mem_loc, + hmc_fpm_misc->loc_mem_pages, hmc_fpm_misc->max_sds, + is_mrte_loc_mem); + + mrwanted = hmc_info->hmc_obj[IRDMA_HMC_IW_MR].max_cnt; + qpwanted = hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt; + hmc_info->hmc_obj[IRDMA_HMC_IW_HDR].cnt = qpwanted; + + hmc_info->hmc_obj[IRDMA_HMC_IW_OOISC].max_cnt = 0; + hmc_info->hmc_obj[IRDMA_HMC_IW_OOISCFFL].max_cnt = 0; + hmc_info->hmc_obj[IRDMA_HMC_IW_HTE].max_cnt = 0; + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].max_cnt = 0; + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].max_cnt = + min(hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].max_cnt, + (u32)IRDMA_FSIAV_CNT_MAX); + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) + hmc_info->hmc_obj[i].cnt = hmc_info->hmc_obj[i].max_cnt; + + while (qpwanted >= IRDMA_MIN_QP_CNT) { + if (!irdma_set_loc_hmc_rsrc_gen_3(dev, loc_mem_pages, qpwanted)) + break; + + qpwanted /= 2; + if (mrte_loc == IRDMA_LOC_MEM) { + mrwanted = qpwanted * IRDMA_MIN_MR_PER_QP; + hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt = + min(hmc_info->hmc_obj[IRDMA_HMC_IW_MR].max_cnt, mrwanted); + } + } + + if (qpwanted < IRDMA_MIN_QP_CNT) { + ibdev_dbg(to_ibdev(dev), + "HMC: ERROR: could not allocate fpm resources\n"); + return -EINVAL; + } + + irdma_set_host_hmc_rsrc_gen_3(dev); + ret_code = irdma_sc_cfg_iw_fpm(dev, dev->hmc_fn_id); + if (ret_code) { + ibdev_dbg(to_ibdev(dev), + "HMC: cfg_iw_fpm returned error_code[x%08X]\n", + readl(dev->hw_regs[IRDMA_CQPERRCODES])); + + return ret_code; + } + + return irdma_cfg_sd_mem(dev, hmc_info); +} + /** * irdma_cfg_fpm_val - configure HMC objects * @dev: sc device struct @@ -4792,16 +5124,15 @@ static void cfg_fpm_value_gen_2(struct irdma_sc_dev *dev, */ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) { - struct irdma_virt_mem virt_mem; - u32 i, mem_size; u32 qpwanted, mrwanted, pblewanted; - u32 powerof2, hte; + u32 powerof2, hte, i; u32 sd_needed; u32 sd_diff; u32 loop_count = 0; struct irdma_hmc_info *hmc_info; struct irdma_hmc_fpm_misc *hmc_fpm_misc; int ret_code = 0; + u32 max_sds; hmc_info = dev->hmc_info; hmc_fpm_misc = &dev->hmc_fpm_misc; @@ -4814,14 +5145,16 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) return ret_code; } + max_sds = hmc_fpm_misc->max_sds; + + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + return cfg_fpm_value_gen_3(dev, hmc_info, hmc_fpm_misc); + for (i = IRDMA_HMC_IW_QP; i < IRDMA_HMC_IW_MAX; i++) hmc_info->hmc_obj[i].cnt = hmc_info->hmc_obj[i].max_cnt; sd_needed = irdma_est_sd(dev, hmc_info); - ibdev_dbg(to_ibdev(dev), - "HMC: FW max resources sd_needed[%08d] first_sd_index[%04d]\n", - sd_needed, hmc_info->first_sd_index); - ibdev_dbg(to_ibdev(dev), "HMC: sd count %d where max sd is %d\n", - hmc_info->sd_table.sd_cnt, hmc_fpm_misc->max_sds); + ibdev_dbg(to_ibdev(dev), "HMC: sd count %u where max sd is %u\n", + hmc_info->sd_table.sd_cnt, max_sds); qpwanted = min(qp_count, hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt); @@ -4835,8 +5168,8 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) pblewanted = hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].max_cnt; ibdev_dbg(to_ibdev(dev), - "HMC: req_qp=%d max_sd=%d, max_qp = %d, max_cq=%d, max_mr=%d, max_pble=%d, mc=%d, av=%d\n", - qp_count, hmc_fpm_misc->max_sds, + "HMC: req_qp=%d max_sd=%u, max_qp 
= %u, max_cq=%u, max_mr=%u, max_pble=%u, mc=%d, av=%u\n", + qp_count, max_sds, hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt, hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt, hmc_info->hmc_obj[IRDMA_HMC_IW_MR].max_cnt, @@ -4849,7 +5182,6 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].max_cnt; hmc_info->hmc_obj[IRDMA_HMC_IW_ARP].cnt = hmc_info->hmc_obj[IRDMA_HMC_IW_ARP].max_cnt; - hmc_info->hmc_obj[IRDMA_HMC_IW_APBVT_ENTRY].cnt = 1; while (irdma_q1_cnt(dev, hmc_info, qpwanted) > hmc_info->hmc_obj[IRDMA_HMC_IW_Q1].max_cnt) @@ -4860,7 +5192,7 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt = qpwanted; hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt = min(2 * qpwanted, hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt); - hmc_info->hmc_obj[IRDMA_HMC_IW_RESERVED].cnt = 0; /* Reserved */ + hmc_info->hmc_obj[IRDMA_HMC_IW_SRQ].cnt = 0; /* Reserved */ hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt = mrwanted; hte = round_up(qpwanted + hmc_info->hmc_obj[IRDMA_HMC_IW_FSIMC].cnt, 512); @@ -4898,11 +5230,12 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) if (!(loop_count % 2) && qpwanted > 128) { qpwanted /= 2; } else { - mrwanted /= 2; pblewanted /= 2; + mrwanted /= 2; } continue; } + if (dev->cqp->hmc_profile != IRDMA_HMC_PROFILE_FAVOR_VF && pblewanted > (512 * FPM_MULTIPLIER * sd_diff)) { pblewanted -= 256 * FPM_MULTIPLIER * sd_diff; @@ -4928,14 +5261,13 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) if (sd_needed > hmc_fpm_misc->max_sds) { ibdev_dbg(to_ibdev(dev), - "HMC: cfg_fpm failed loop_cnt=%d, sd_needed=%d, max sd count %d\n", + "HMC: cfg_fpm failed loop_cnt=%u, sd_needed=%u, max sd count %u\n", loop_count, sd_needed, hmc_info->sd_table.sd_cnt); return -EINVAL; } - if (loop_count > 1 && sd_needed < hmc_fpm_misc->max_sds) { - pblewanted += (hmc_fpm_misc->max_sds - sd_needed) * 256 * - FPM_MULTIPLIER; + if (loop_count > 1 && sd_needed < max_sds) { + pblewanted += (max_sds - sd_needed) * 256 * FPM_MULTIPLIER; hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt = pblewanted; sd_needed = irdma_est_sd(dev, hmc_info); } @@ -4959,18 +5291,7 @@ int irdma_cfg_fpm_val(struct irdma_sc_dev *dev, u32 qp_count) return ret_code; } - mem_size = sizeof(struct irdma_hmc_sd_entry) * - (hmc_info->sd_table.sd_cnt + hmc_info->first_sd_index + 1); - virt_mem.size = mem_size; - virt_mem.va = kzalloc(virt_mem.size, GFP_KERNEL); - if (!virt_mem.va) { - ibdev_dbg(to_ibdev(dev), - "HMC: failed to allocate memory for sd_entry buffer\n"); - return -ENOMEM; - } - hmc_info->sd_table.sd_entry = virt_mem.va; - - return ret_code; + return irdma_cfg_sd_mem(dev, hmc_info); } /** @@ -5381,6 +5702,7 @@ int irdma_sc_dev_init(enum irdma_vers ver, struct irdma_sc_dev *dev, dev->fpm_commit_buf = info->fpm_commit_buf; dev->hw = info->hw; dev->hw->hw_addr = info->bar0; + dev->protocol_used = info->protocol_used; /* Setup the hardware limits, hmc may limit further */ dev->hw_attrs.min_hw_qp_id = IRDMA_MIN_IW_QP_ID; dev->hw_attrs.min_hw_aeq_size = IRDMA_MIN_AEQ_ENTRIES; @@ -5409,7 +5731,17 @@ int irdma_sc_dev_init(enum irdma_vers ver, struct irdma_sc_dev *dev, dev->hw_attrs.max_sleep_count = IRDMA_SLEEP_COUNT; dev->hw_attrs.max_cqp_compl_wait_time_ms = CQP_COMPL_WAIT_TIME_MS; - dev->hw_attrs.uk_attrs.hw_rev = ver; + if (!dev->privileged) { + ret_code = irdma_vchnl_req_get_hmc_fcn(dev); + if (ret_code) { + ibdev_dbg(to_ibdev(dev), + "DEV: Get HMC function ret = %d\n", + ret_code); + + return ret_code; + } + } + irdma_sc_init_hw(dev); if 
(irdma_wait_pe_ready(dev)) diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 2cb4b96db721..7825896c445c 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -114,6 +114,12 @@ enum irdma_protocol_used { #define IRDMA_UPDATE_SD_BUFF_SIZE 128 #define IRDMA_FEATURE_BUF_SIZE (8 * IRDMA_MAX_FEATURES) +#define ENABLE_LOC_MEM 63 +#define MAX_PBLE_PER_SD 0x40000 +#define MAX_PBLE_SD_PER_FCN 0x400 +#define MAX_MR_PER_SD 0x8000 +#define MAX_MR_SD_PER_FCN 0x80 +#define IRDMA_PBLE_COMMIT_OFFSET 112 #define IRDMA_MAX_QUANTA_PER_WR 8 #define IRDMA_QP_SW_MAX_WQ_QUANTA 32768 @@ -396,6 +402,11 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_STATS_HMC_FCN_INDEX GENMASK_ULL(5, 0) #define IRDMA_CQPSQ_WS_WQEVALID BIT_ULL(63) #define IRDMA_CQPSQ_WS_NODEOP GENMASK_ULL(53, 52) +#define IRDMA_SD_MAX GENMASK_ULL(15, 0) +#define IRDMA_MEM_MAX GENMASK_ULL(15, 0) +#define IRDMA_QP_MEM_LOC GENMASK_ULL(47, 44) +#define IRDMA_MR_MEM_LOC_S 24 +#define IRDMA_MR_MEM_LOC GENMASK_ULL(27, 24) #define IRDMA_CQPSQ_WS_ENABLENODE BIT_ULL(62) #define IRDMA_CQPSQ_WS_NODETYPE BIT_ULL(61) @@ -660,10 +671,12 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_AEQ_VMAP BIT_ULL(47) #define IRDMA_CQPSQ_AEQ_FIRSTPMPBLIDX GENMASK_ULL(27, 0) -#define IRDMA_COMMIT_FPM_QPCNT GENMASK_ULL(18, 0) +#define IRDMA_COMMIT_FPM_QPCNT_S 0 +#define IRDMA_COMMIT_FPM_QPCNT GENMASK_ULL(20, 0) #define IRDMA_COMMIT_FPM_BASE_S 32 -#define IRDMA_CQPSQ_CFPM_HMCFNID GENMASK_ULL(5, 0) +#define IRDMA_CQPSQ_CFPM_HMCFNID GENMASK_ULL(15, 0) + #define IRDMA_CQPSQ_FWQE_AECODE GENMASK_ULL(15, 0) #define IRDMA_CQPSQ_FWQE_AESOURCE GENMASK_ULL(19, 16) #define IRDMA_CQPSQ_FWQE_RQMNERR GENMASK_ULL(15, 0) @@ -903,10 +916,17 @@ enum irdma_cqp_op_type { #define IRDMAPFINT_OICR_PE_PUSH_M BIT(27) #define IRDMAPFINT_OICR_PE_CRITERR_M BIT(28) -#define IRDMA_QUERY_FPM_MAX_QPS GENMASK_ULL(18, 0) -#define IRDMA_QUERY_FPM_MAX_CQS GENMASK_ULL(19, 0) +#define IRDMA_QUERY_FPM_LOC_MEM_PAGES_S 32 +#define IRDMA_QUERY_FPM_LOC_MEM_PAGES GENMASK_ULL(63, 32) +#define IRDMA_QUERY_FPM_MAX_QPS_S 0 +#define IRDMA_QUERY_FPM_MAX_QPS GENMASK_ULL(31, 0) +#define IRDMA_QUERY_FPM_MAX_CQS_S 0 +#define IRDMA_QUERY_FPM_MAX_CQS GENMASK_ULL(31, 0) +#define IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX_S 0 #define IRDMA_QUERY_FPM_FIRST_PE_SD_INDEX GENMASK_ULL(13, 0) -#define IRDMA_QUERY_FPM_MAX_PE_SDS GENMASK_ULL(45, 32) +#define IRDMA_QUERY_FPM_MAX_PE_SDS_S 32 +#define IRDMA_QUERY_FPM_MAX_PE_SDS GENMASK_ULL(44, 32) + #define IRDMA_QUERY_FPM_MAX_CEQS GENMASK_ULL(9, 0) #define IRDMA_QUERY_FPM_XFBLOCKSIZE GENMASK_ULL(63, 32) #define IRDMA_QUERY_FPM_Q1BLOCKSIZE GENMASK_ULL(63, 32) diff --git a/drivers/infiniband/hw/irdma/hmc.c b/drivers/infiniband/hw/irdma/hmc.c index ac58088a8e41..da18add141da 100644 --- a/drivers/infiniband/hw/irdma/hmc.c +++ b/drivers/infiniband/hw/irdma/hmc.c @@ -5,6 +5,7 @@ #include "defs.h" #include "type.h" #include "protos.h" +#include "virtchnl.h" /** * irdma_find_sd_index_limit - finds segment descriptor index limit @@ -228,6 +229,10 @@ int irdma_sc_create_hmc_obj(struct irdma_sc_dev *dev, bool pd_error = false; int ret_code = 0; + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3 && + dev->hmc_info->hmc_obj[info->rsrc_type].mem_loc == IRDMA_LOC_MEM) + return 0; + if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) return -EINVAL; @@ -330,7 +335,7 @@ static int irdma_finish_del_sd_reg(struct irdma_sc_dev *dev, u32 i, sd_idx; struct irdma_dma_mem *mem; - if (!reset) + if (dev->privileged && !reset) ret_code = 
irdma_hmc_sd_grp(dev, info->hmc_info, info->hmc_info->sd_indexes[0], info->del_sd_cnt, false); @@ -376,6 +381,9 @@ int irdma_sc_del_hmc_obj(struct irdma_sc_dev *dev, u32 i, j; int ret_code = 0; + if (dev->hmc_info->hmc_obj[info->rsrc_type].mem_loc == IRDMA_LOC_MEM) + return 0; + if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) { ibdev_dbg(to_ibdev(dev), "HMC: error start_idx[%04d] >= [type %04d].cnt[%04d]\n", @@ -589,7 +597,10 @@ int irdma_add_pd_table_entry(struct irdma_sc_dev *dev, pd_entry->sd_index = sd_idx; pd_entry->valid = true; pd_table->use_cnt++; - irdma_invalidate_pf_hmc_pd(dev, sd_idx, rel_pd_idx); + + if (hmc_info->hmc_fn_id < dev->hw_attrs.first_hw_vf_fpm_id && + dev->privileged) + irdma_invalidate_pf_hmc_pd(dev, sd_idx, rel_pd_idx); } pd_entry->bp.use_cnt++; @@ -640,7 +651,8 @@ int irdma_remove_pd_bp(struct irdma_sc_dev *dev, pd_addr = pd_table->pd_page_addr.va; pd_addr += rel_pd_idx; memset(pd_addr, 0, sizeof(u64)); - irdma_invalidate_pf_hmc_pd(dev, sd_idx, idx); + if (dev->privileged && dev->hmc_fn_id == hmc_info->hmc_fn_id) + irdma_invalidate_pf_hmc_pd(dev, sd_idx, idx); if (!pd_entry->rsrc_pg) { mem = &pd_entry->bp.addr; diff --git a/drivers/infiniband/hw/irdma/hmc.h b/drivers/infiniband/hw/irdma/hmc.h index 415f9e23bbf6..257a5d22aa96 100644 --- a/drivers/infiniband/hw/irdma/hmc.h +++ b/drivers/infiniband/hw/irdma/hmc.h @@ -16,11 +16,21 @@ #define IRDMA_HMC_PD_BP_BUF_ALIGNMENT 4096 #define IRDMA_FIRST_VF_FPM_ID 8 #define FPM_MULTIPLIER 1024 +#define IRDMA_OBJ_LOC_MEM_BIT 0x4 +#define IRDMA_XF_MULTIPLIER 16 +#define IRDMA_RRF_MULTIPLIER 8 +#define IRDMA_MIN_PBLE_PAGES 3 +#define IRDMA_HMC_PAGE_SIZE 2097152 +#define IRDMA_MIN_MR_PER_QP 4 +#define IRDMA_MIN_QP_CNT 64 +#define IRDMA_FSIAV_CNT_MAX 1048576 +#define IRDMA_MIN_IRD 8 +#define IRDMA_HMC_MIN_RRF 16 enum irdma_hmc_rsrc_type { IRDMA_HMC_IW_QP = 0, IRDMA_HMC_IW_CQ = 1, - IRDMA_HMC_IW_RESERVED = 2, + IRDMA_HMC_IW_SRQ = 2, IRDMA_HMC_IW_HTE = 3, IRDMA_HMC_IW_ARP = 4, IRDMA_HMC_IW_APBVT_ENTRY = 5, @@ -48,11 +58,17 @@ enum irdma_sd_entry_type { IRDMA_SD_TYPE_DIRECT = 2, }; +enum irdma_hmc_obj_mem { + IRDMA_HOST_MEM = 0, + IRDMA_LOC_MEM = 1, +}; + struct irdma_hmc_obj_info { u64 base; u32 max_cnt; u32 cnt; u64 size; + enum irdma_hmc_obj_mem mem_loc; }; struct irdma_hmc_bp { @@ -117,6 +133,7 @@ struct irdma_update_sds_info { struct irdma_ccq_cqe_info; struct irdma_hmc_fcn_info { u32 vf_id; + u8 protocol_used; u8 free_fcn; }; diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c index ad50b77282f8..288131466a19 100644 --- a/drivers/infiniband/hw/irdma/hw.c +++ b/drivers/infiniband/hw/irdma/hw.c @@ -33,6 +33,7 @@ static struct irdma_rsrc_limits rsrc_limits_table[] = { static enum irdma_hmc_rsrc_type iw_hmc_obj_types[] = { IRDMA_HMC_IW_QP, IRDMA_HMC_IW_CQ, + IRDMA_HMC_IW_SRQ, IRDMA_HMC_IW_HTE, IRDMA_HMC_IW_ARP, IRDMA_HMC_IW_APBVT_ENTRY, @@ -1571,6 +1572,8 @@ static void irdma_del_init_mem(struct irdma_pci_f *rf) { struct irdma_sc_dev *dev = &rf->sc_dev; + if (!rf->sc_dev.privileged) + irdma_vchnl_req_put_hmc_fcn(&rf->sc_dev); kfree(dev->hmc_info->sd_table.sd_entry); dev->hmc_info->sd_table.sd_entry = NULL; vfree(rf->mem_rsrc); @@ -1637,6 +1640,7 @@ static int irdma_initialize_dev(struct irdma_pci_f *rf) info.bar0 = rf->hw.hw_addr; info.hmc_fn_id = rf->pf_id; + info.protocol_used = rf->protocol_used; info.hw = &rf->hw; status = irdma_sc_dev_init(rf->rdma_ver, &rf->sc_dev, &info); if (status) diff --git a/drivers/infiniband/hw/irdma/i40iw_if.c b/drivers/infiniband/hw/irdma/i40iw_if.c 
index 6fa807ef4545..15e036ddaffb 100644 --- a/drivers/infiniband/hw/irdma/i40iw_if.c +++ b/drivers/infiniband/hw/irdma/i40iw_if.c @@ -77,6 +77,7 @@ static void i40iw_fill_device_info(struct irdma_device *iwdev, struct i40e_info rf->rdma_ver = IRDMA_GEN_1; rf->sc_dev.hw = &rf->hw; rf->sc_dev.hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_1; + rf->sc_dev.privileged = true; rf->gen_ops.request_reset = i40iw_request_reset; rf->pcidev = cdev_info->pcidev; rf->pf_id = cdev_info->fid; diff --git a/drivers/infiniband/hw/irdma/icrdma_if.c b/drivers/infiniband/hw/irdma/icrdma_if.c index 5fcbf695a1d3..0ddccf1082ff 100644 --- a/drivers/infiniband/hw/irdma/icrdma_if.c +++ b/drivers/infiniband/hw/irdma/icrdma_if.c @@ -144,6 +144,8 @@ static void icrdma_fill_device_info(struct irdma_device *iwdev, rf->msix_entries = cdev_info->msix_entries; rf->rdma_ver = IRDMA_GEN_2; rf->sc_dev.hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_2; + rf->sc_dev.is_pf = true; + rf->sc_dev.privileged = true; rf->gen_ops.register_qset = icrdma_lan_register_qset; rf->gen_ops.unregister_qset = icrdma_lan_unregister_qset; diff --git a/drivers/infiniband/hw/irdma/ig3rdma_hw.h b/drivers/infiniband/hw/irdma/ig3rdma_hw.h new file mode 100644 index 000000000000..4c3d186bbe81 --- /dev/null +++ b/drivers/infiniband/hw/irdma/ig3rdma_hw.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2021 - 2024 Intel Corporation */ +#ifndef IG3RDMA_HW_H +#define IG3RDMA_HW_H + +#define IG3_PF_RDMA_REGION_OFFSET 0xBC00000 +#define IG3_PF_RDMA_REGION_LEN 0x401000 +#define IG3_VF_RDMA_REGION_OFFSET 0x8C00 +#define IG3_VF_RDMA_REGION_LEN 0x8400 + +#endif /* IG3RDMA_HW_H*/ diff --git a/drivers/infiniband/hw/irdma/ig3rdma_if.c b/drivers/infiniband/hw/irdma/ig3rdma_if.c new file mode 100644 index 000000000000..70b1ed3723a4 --- /dev/null +++ b/drivers/infiniband/hw/irdma/ig3rdma_if.c @@ -0,0 +1,171 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2023 - 2024 Intel Corporation */ +#include "main.h" +#include "ig3rdma_hw.h" + +static void ig3rdma_idc_core_event_handler(struct idc_rdma_core_dev_info *cdev_info, + struct idc_rdma_event *event) +{ + struct irdma_pci_f *rf = auxiliary_get_drvdata(cdev_info->adev); + + if (*event->type & BIT(IDC_RDMA_EVENT_WARN_RESET)) { + rf->reset = true; + rf->sc_dev.vchnl_up = false; + } +} + +static int ig3rdma_cfg_regions(struct irdma_hw *hw, + struct idc_rdma_core_dev_info *cdev_info) +{ + struct pci_dev *pdev = cdev_info->pdev; + int i; + + switch (cdev_info->ftype) { + case IDC_FUNCTION_TYPE_PF: + hw->rdma_reg.len = IG3_PF_RDMA_REGION_LEN; + hw->rdma_reg.offset = IG3_PF_RDMA_REGION_OFFSET; + break; + case IDC_FUNCTION_TYPE_VF: + hw->rdma_reg.len = IG3_VF_RDMA_REGION_LEN; + hw->rdma_reg.offset = IG3_VF_RDMA_REGION_OFFSET; + break; + default: + return -ENODEV; + } + + hw->rdma_reg.addr = ioremap(pci_resource_start(pdev, 0) + hw->rdma_reg.offset, + hw->rdma_reg.len); + + if (!hw->rdma_reg.addr) + return -ENOMEM; + + hw->io_regs = kcalloc(cdev_info->num_memory_regions, + sizeof(struct irdma_mmio_region), GFP_KERNEL); + + if (!hw->io_regs) { + iounmap(hw->rdma_reg.addr); + return -ENOMEM; + } + + hw->num_io_regions = le16_to_cpu(cdev_info->num_memory_regions); + for (i = 0; i < cdev_info->num_memory_regions; i++) { + hw->io_regs[i].addr = + cdev_info->mapped_mem_regions[i].region_addr; + hw->io_regs[i].len = + cdev_info->mapped_mem_regions[i].size; + hw->io_regs[i].offset = + cdev_info->mapped_mem_regions[i].start_offset; + } + + return 0; +} + +static void ig3rdma_decfg_rf(struct 
irdma_pci_f *rf) +{ + struct irdma_hw *hw = &rf->hw; + + destroy_workqueue(rf->vchnl_wq); + kfree(hw->io_regs); + iounmap(hw->rdma_reg.addr); +} + +static int ig3rdma_cfg_rf(struct irdma_pci_f *rf, + struct idc_rdma_core_dev_info *cdev_info) +{ + int err; + + rf->sc_dev.hw = &rf->hw; + rf->cdev = cdev_info; + rf->pcidev = cdev_info->pdev; + rf->hw.device = &rf->pcidev->dev; + rf->msix_count = cdev_info->msix_count; + rf->msix_entries = cdev_info->msix_entries; + + err = irdma_vchnl_init(rf, cdev_info, &rf->rdma_ver); + if (err) + return err; + + err = ig3rdma_cfg_regions(&rf->hw, cdev_info); + if (err) { + destroy_workqueue(rf->vchnl_wq); + return err; + } + + rf->protocol_used = IRDMA_ROCE_PROTOCOL_ONLY; + rf->rsrc_profile = IRDMA_HMC_PROFILE_DEFAULT; + rf->rst_to = IRDMA_RST_TIMEOUT_HZ; + rf->gen_ops.request_reset = irdma_request_reset; + rf->limits_sel = 7; + mutex_init(&rf->ah_tbl_lock); + + return 0; +} + +static int ig3rdma_core_probe(struct auxiliary_device *aux_dev, + const struct auxiliary_device_id *id) +{ + struct idc_rdma_core_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); + struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; + struct irdma_pci_f *rf; + int err; + + rf = kzalloc(sizeof(*rf), GFP_KERNEL); + if (!rf) + return -ENOMEM; + + err = ig3rdma_cfg_rf(rf, cdev_info); + if (err) + goto err_cfg_rf; + + err = irdma_ctrl_init_hw(rf); + if (err) + goto err_ctrl_init; + + auxiliary_set_drvdata(aux_dev, rf); + + err = cdev_info->ops->vport_dev_ctrl(cdev_info, true); + if (err) + goto err_vport_ctrl; + + return 0; + +err_vport_ctrl: + irdma_ctrl_deinit_hw(rf); +err_ctrl_init: + ig3rdma_decfg_rf(rf); +err_cfg_rf: + kfree(rf); + + return err; +} + +static void ig3rdma_core_remove(struct auxiliary_device *aux_dev) +{ + struct idc_rdma_core_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_core_auxiliary_dev, adev); + struct idc_rdma_core_dev_info *cdev_info = idc_adev->cdev_info; + struct irdma_pci_f *rf = auxiliary_get_drvdata(aux_dev); + + cdev_info->ops->vport_dev_ctrl(cdev_info, false); + irdma_ctrl_deinit_hw(rf); + ig3rdma_decfg_rf(rf); + kfree(rf); +} + +static const struct auxiliary_device_id ig3rdma_core_auxiliary_id_table[] = { + {.name = "idpf.8086.rdma.core", }, + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, ig3rdma_core_auxiliary_id_table); + +struct idc_rdma_core_auxiliary_drv ig3rdma_core_auxiliary_drv = { + .adrv = { + .name = "core", + .id_table = ig3rdma_core_auxiliary_id_table, + .probe = ig3rdma_core_probe, + .remove = ig3rdma_core_remove, + }, + .event_handler = ig3rdma_idc_core_event_handler, +}; \ No newline at end of file diff --git a/drivers/infiniband/hw/irdma/irdma.h b/drivers/infiniband/hw/irdma/irdma.h index 20d2e7393e3d..769170445f88 100644 --- a/drivers/infiniband/hw/irdma/irdma.h +++ b/drivers/infiniband/hw/irdma/irdma.h @@ -107,6 +107,9 @@ enum irdma_vers { IRDMA_GEN_RSVD, IRDMA_GEN_1, IRDMA_GEN_2, + IRDMA_GEN_3, + IRDMA_GEN_NEXT, + IRDMA_GEN_MAX = IRDMA_GEN_NEXT-1 }; struct irdma_uk_attrs { diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c index ee59ca10451c..e9524de1c10f 100644 --- a/drivers/infiniband/hw/irdma/main.c +++ b/drivers/infiniband/hw/irdma/main.c @@ -7,6 +7,23 @@ MODULE_AUTHOR("Intel Corporation, "); MODULE_DESCRIPTION("Intel(R) Ethernet Protocol Driver for RDMA"); MODULE_LICENSE("Dual BSD/GPL"); +int irdma_vchnl_send_sync(struct irdma_sc_dev *dev, u8 *msg, u16 len, + u8 *recv_msg, u16 *recv_len) +{ + struct idc_rdma_core_dev_info 
*cdev_info = dev_to_rf(dev)->cdev; + int ret; + + ret = cdev_info->ops->vc_send_sync(cdev_info, msg, len, recv_msg, + recv_len); + if (ret == -ETIMEDOUT) { + ibdev_err(&(dev_to_rf(dev)->iwdev->ibdev), + "Virtual channel Req <-> Resp completion timeout\n"); + dev->vchnl_up = false; + } + + return ret; +} + static struct notifier_block irdma_inetaddr_notifier = { .notifier_call = irdma_inetaddr_event }; @@ -103,16 +120,54 @@ static int __init irdma_init_module(void) return ret; } + ret = auxiliary_driver_register(&ig3rdma_core_auxiliary_drv.adrv); + if (ret) { + auxiliary_driver_unregister(&icrdma_core_auxiliary_drv.adrv); + auxiliary_driver_unregister(&i40iw_auxiliary_drv); + pr_err("Failed ig3rdma(gen_3) core auxiliary_driver_register() ret=%d\n", + ret); + + return ret; + } irdma_register_notifiers(); return 0; } +int irdma_vchnl_init(struct irdma_pci_f *rf, + struct idc_rdma_core_dev_info *cdev_info, u8 *rdma_ver) +{ + struct irdma_vchnl_init_info virt_info; + u8 gen = rf->rdma_ver; + int ret; + + rf->vchnl_wq = alloc_ordered_workqueue("irdma-virtchnl-wq", 0); + if (!rf->vchnl_wq) + return -ENOMEM; + + mutex_init(&rf->sc_dev.vchnl_mutex); + + virt_info.is_pf = !cdev_info->ftype; + virt_info.hw_rev = gen; + virt_info.privileged = gen == IRDMA_GEN_2; + virt_info.vchnl_wq = rf->vchnl_wq; + ret = irdma_sc_vchnl_init(&rf->sc_dev, &virt_info); + if (ret) { + destroy_workqueue(rf->vchnl_wq); + return ret; + } + + *rdma_ver = rf->sc_dev.hw_attrs.uk_attrs.hw_rev; + + return 0; +} + static void __exit irdma_exit_module(void) { irdma_unregister_notifiers(); auxiliary_driver_unregister(&icrdma_core_auxiliary_drv.adrv); auxiliary_driver_unregister(&i40iw_auxiliary_drv); + auxiliary_driver_unregister(&ig3rdma_core_auxiliary_drv.adrv); } module_init(irdma_init_module); diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index 7360e171f4c2..a7f3d197a390 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -55,6 +55,7 @@ #include "puda.h" extern struct auxiliary_driver i40iw_auxiliary_drv; +extern struct idc_rdma_core_auxiliary_drv ig3rdma_core_auxiliary_drv; extern struct idc_rdma_core_auxiliary_drv icrdma_core_auxiliary_drv; #define IRDMA_FW_VER_DEFAULT 2 @@ -326,6 +327,7 @@ struct irdma_pci_f { wait_queue_head_t vchnl_waitq; struct workqueue_struct *cqp_cmpl_wq; struct work_struct cqp_cmpl_work; + struct workqueue_struct *vchnl_wq; struct irdma_sc_vsi default_vsi; void *back_fcn; struct irdma_gen_ops gen_ops; @@ -556,6 +558,8 @@ int irdma_netdevice_event(struct notifier_block *notifier, unsigned long event, void *ptr); void irdma_add_ip(struct irdma_device *iwdev); void cqp_compl_worker(struct work_struct *work); +int irdma_vchnl_init(struct irdma_pci_f *rf, + struct idc_rdma_core_dev_info *cdev_info, u8 *rdma_ver); void irdma_fill_qos_info(struct irdma_l2params *l2params, struct iidc_rdma_qos_params *qos_info); void irdma_request_reset(struct irdma_pci_f *rf); diff --git a/drivers/infiniband/hw/irdma/pble.c b/drivers/infiniband/hw/irdma/pble.c index e7ce6840755f..2ef60e6e7cae 100644 --- a/drivers/infiniband/hw/irdma/pble.c +++ b/drivers/infiniband/hw/irdma/pble.c @@ -193,8 +193,15 @@ static enum irdma_sd_entry_type irdma_get_type(struct irdma_sc_dev *dev, { enum irdma_sd_entry_type sd_entry_type; - sd_entry_type = !idx->rel_pd_idx && pages == IRDMA_HMC_PD_CNT_IN_SD ? 
- IRDMA_SD_TYPE_DIRECT : IRDMA_SD_TYPE_PAGED; + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + sd_entry_type = (!idx->rel_pd_idx && + pages == IRDMA_HMC_PD_CNT_IN_SD) ? + IRDMA_SD_TYPE_DIRECT : IRDMA_SD_TYPE_PAGED; + else + sd_entry_type = (!idx->rel_pd_idx && + pages == IRDMA_HMC_PD_CNT_IN_SD && + dev->privileged) ? + IRDMA_SD_TYPE_DIRECT : IRDMA_SD_TYPE_PAGED; return sd_entry_type; } @@ -279,10 +286,11 @@ static int add_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc) sd_reg_val = (sd_entry_type == IRDMA_SD_TYPE_PAGED) ? sd_entry->u.pd_table.pd_page_addr.pa : sd_entry->u.bp.addr.pa; - - if (!sd_entry->valid) { - ret_code = irdma_hmc_sd_one(dev, hmc_info->hmc_fn_id, sd_reg_val, - idx->sd_idx, sd_entry->entry_type, true); + if ((dev->privileged && !sd_entry->valid) || + dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + ret_code = irdma_hmc_sd_one(dev, hmc_info->hmc_fn_id, + sd_reg_val, idx->sd_idx, + sd_entry->entry_type, true); if (ret_code) goto error; } diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 59b34afa867b..cfcb5d938d76 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -8,6 +8,8 @@ #include "hmc.h" #include "uda.h" #include "ws.h" +#include "virtchnl.h" + #define IRDMA_DEBUG_ERR "ERR" #define IRDMA_DEBUG_INIT "INIT" #define IRDMA_DEBUG_DEV "DEV" @@ -159,7 +161,34 @@ enum irdma_hw_stats_index { enum irdma_feature_type { IRDMA_FEATURE_FW_INFO = 0, IRDMA_HW_VERSION_INFO = 1, + IRDMA_QP_MAX_INCR = 2, + IRDMA_CQ_MAX_INCR = 3, + IRDMA_CEQ_MAX_INCR = 4, + IRDMA_SD_MAX_INCR = 5, + IRDMA_QP_SMALL = 6, + IRDMA_QP_MEDIUM = 7, + IRDMA_QP_LARGE = 8, + IRDMA_QP_XLARGE = 9, + IRDMA_CQ_SMALL = 10, + IRDMA_CQ_MEDIUM = 11, + IRDMA_CQ_LARGE = 12, + IRDMA_CQ_XLARGE = 13, + IRDMA_CEQ_SMALL = 14, + IRDMA_CEQ_MEDIUM = 15, + IRDMA_CEQ_LARGE = 16, + IRDMA_CEQ_XLARGE = 17, + IRDMA_SD_SMALL = 18, + IRDMA_SD_MEDIUM = 19, + IRDMA_SD_LARGE = 20, + IRDMA_SD_XLARGE = 21, + IRDMA_OBJ_1 = 22, + IRDMA_OBJ_2 = 23, + IRDMA_ENDPT_TRK = 24, + IRDMA_FTN_INLINE_MAX = 25, IRDMA_QSETS_MAX = 26, + IRDMA_ASO = 27, + IRDMA_FTN_FLAGS = 32, + IRDMA_FTN_NOP = 33, IRDMA_MAX_FEATURES, /* Must be last entry */ }; @@ -310,9 +339,21 @@ struct irdma_vsi_pestat { spinlock_t lock; /* rdma stats lock */ }; +struct irdma_mmio_region { + u8 __iomem *addr; + resource_size_t len; + resource_size_t offset; +}; + struct irdma_hw { - u8 __iomem *hw_addr; - u8 __iomem *priv_hw_addr; + union { + u8 __iomem *hw_addr; + struct { + struct irdma_mmio_region rdma_reg; /* RDMA region */ + struct irdma_mmio_region *io_regs; /* Non-RDMA MMIO regions */ + u16 num_io_regions; /* Number of Non-RDMA MMIO regions */ + }; + }; struct device *device; struct irdma_hmc_info hmc; }; @@ -518,6 +559,7 @@ struct irdma_ws_node_info { struct irdma_hmc_fpm_misc { u32 max_ceqs; u32 max_sds; + u32 loc_mem_pages; u32 xf_block_size; u32 q1_block_size; u32 ht_multiplier; @@ -526,6 +568,7 @@ struct irdma_hmc_fpm_misc { u32 ooiscf_block_size; }; +#define IRDMA_VCHNL_MAX_MSG_SIZE 512 #define IRDMA_LEAF_DEFAULT_REL_BW 64 #define IRDMA_PARENT_DEFAULT_REL_BW 1 @@ -601,19 +644,28 @@ struct irdma_sc_dev { u64 cqp_cmd_stats[IRDMA_MAX_CQP_OPS]; struct irdma_hw_attrs hw_attrs; struct irdma_hmc_info *hmc_info; + struct irdma_vchnl_rdma_caps vc_caps; + u8 vc_recv_buf[IRDMA_VCHNL_MAX_MSG_SIZE]; + u16 vc_recv_len; struct irdma_sc_cqp *cqp; struct irdma_sc_aeq *aeq; struct irdma_sc_ceq *ceq[IRDMA_CEQ_MAX_COUNT]; struct irdma_sc_cq *ccq; const struct irdma_irq_ops *irq_ops; + struct irdma_qos 
qos[IRDMA_MAX_USER_PRIORITY]; struct irdma_hmc_fpm_misc hmc_fpm_misc; struct irdma_ws_node *ws_tree_root; struct mutex ws_mutex; /* ws tree mutex */ + u32 vchnl_ver; u16 num_vfs; - u8 hmc_fn_id; + u16 hmc_fn_id; u8 vf_id; + bool privileged:1; bool vchnl_up:1; bool ceq_valid:1; + bool is_pf:1; + u8 protocol_used; + struct mutex vchnl_mutex; /* mutex to synchronize RDMA virtual channel messages */ u8 pci_rev; int (*ws_add)(struct irdma_sc_vsi *vsi, u8 user_pri); void (*ws_remove)(struct irdma_sc_vsi *vsi, u8 user_pri); @@ -731,7 +783,8 @@ struct irdma_device_init_info { __le64 *fpm_commit_buf; struct irdma_hw *hw; void __iomem *bar0; - u8 hmc_fn_id; + enum irdma_protocol_used protocol_used; + u16 hmc_fn_id; }; struct irdma_ceq_init_info { @@ -972,7 +1025,7 @@ struct irdma_allocate_stag_info { bool use_hmc_fcn_index:1; bool use_pf_rid:1; bool all_memory:1; - u8 hmc_fcn_index; + u16 hmc_fcn_index; }; struct irdma_mw_alloc_info { diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h index 380e4a47aede..8fd7eebef11d 100644 --- a/drivers/infiniband/hw/irdma/user.h +++ b/drivers/infiniband/hw/irdma/user.h @@ -55,8 +55,8 @@ enum irdma_device_caps_const { IRDMA_CEQE_SIZE = 1, IRDMA_CQP_CTX_SIZE = 8, IRDMA_SHADOW_AREA_SIZE = 8, - IRDMA_QUERY_FPM_BUF_SIZE = 176, - IRDMA_COMMIT_FPM_BUF_SIZE = 176, + IRDMA_QUERY_FPM_BUF_SIZE = 192, + IRDMA_COMMIT_FPM_BUF_SIZE = 192, IRDMA_GATHER_STATS_BUF_SIZE = 1024, IRDMA_MIN_IW_QP_ID = 0, IRDMA_MAX_IW_QP_ID = 262143, diff --git a/drivers/infiniband/hw/irdma/virtchnl.c b/drivers/infiniband/hw/irdma/virtchnl.c new file mode 100644 index 000000000000..2abfc3961f3e --- /dev/null +++ b/drivers/infiniband/hw/irdma/virtchnl.c @@ -0,0 +1,300 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2015 - 2024 Intel Corporation */ +#include "osdep.h" +#include "hmc.h" +#include "defs.h" +#include "type.h" +#include "protos.h" +#include "virtchnl.h" +#include "ws.h" +#include "i40iw_hw.h" + +/** + * irdma_sc_vchnl_init - Initialize dev virtchannel and get hw_rev + * @dev: dev structure to update + * @info: virtchannel info parameters to fill into the dev structure + */ +int irdma_sc_vchnl_init(struct irdma_sc_dev *dev, + struct irdma_vchnl_init_info *info) +{ + dev->vchnl_up = true; + dev->privileged = info->privileged; + dev->is_pf = info->is_pf; + dev->hw_attrs.uk_attrs.hw_rev = info->hw_rev; + + if (!dev->privileged) { + int ret = irdma_vchnl_req_get_ver(dev, IRDMA_VCHNL_CHNL_VER_MAX, + &dev->vchnl_ver); + + ibdev_dbg(to_ibdev(dev), + "DEV: Get Channel version ret = %d, version is %u\n", + ret, dev->vchnl_ver); + + if (ret) + return ret; + + ret = irdma_vchnl_req_get_caps(dev); + if (ret) + return ret; + + dev->hw_attrs.uk_attrs.hw_rev = dev->vc_caps.hw_rev; + } + + return 0; +} + +/** + * irdma_vchnl_req_verify_resp - Verify requested response size + * @vchnl_req: vchnl message requested + * @resp_len: response length sent from vchnl peer + */ +static int irdma_vchnl_req_verify_resp(struct irdma_vchnl_req *vchnl_req, + u16 resp_len) +{ + switch (vchnl_req->vchnl_msg->op_code) { + case IRDMA_VCHNL_OP_GET_VER: + case IRDMA_VCHNL_OP_GET_HMC_FCN: + case IRDMA_VCHNL_OP_PUT_HMC_FCN: + if (resp_len != vchnl_req->parm_len) + return -EBADMSG; + break; + case IRDMA_VCHNL_OP_GET_RDMA_CAPS: + if (resp_len < IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE) + return -EBADMSG; + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +static void irdma_free_vchnl_req_msg(struct irdma_vchnl_req *vchnl_req) +{ + kfree(vchnl_req->vchnl_msg); +} + 
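/*
 * Annotation (not part of the posted patch): the helpers that follow
 * implement the synchronous request/response exchange described in the
 * commit message. irdma_alloc_vchnl_req_msg() frames an op as an
 * irdma_vchnl_op_buf (header plus op-specific payload in buf[], capped at
 * IRDMA_VCHNL_MAX_MSG_SIZE) and records the originating irdma_vchnl_req
 * pointer in op_ctx. irdma_vchnl_req_send_sync() then sends it through the
 * core driver's vc_send_sync op while holding vchnl_mutex, and
 * irdma_vchnl_req_get_resp() uses the echoed op_ctx to confirm the response
 * belongs to this request before copying the payload into vchnl_req->parm.
 */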
+static int irdma_alloc_vchnl_req_msg(struct irdma_vchnl_req *vchnl_req, + struct irdma_vchnl_req_init_info *info) +{ + struct irdma_vchnl_op_buf *vchnl_msg; + + vchnl_msg = kzalloc(IRDMA_VCHNL_MAX_MSG_SIZE, GFP_KERNEL); + + if (!vchnl_msg) + return -ENOMEM; + + vchnl_msg->op_ctx = (uintptr_t)vchnl_req; + vchnl_msg->buf_len = sizeof(*vchnl_msg) + info->req_parm_len; + if (info->req_parm_len) + memcpy(vchnl_msg->buf, info->req_parm, info->req_parm_len); + vchnl_msg->op_code = info->op_code; + vchnl_msg->op_ver = info->op_ver; + + vchnl_req->vchnl_msg = vchnl_msg; + vchnl_req->parm = info->resp_parm; + vchnl_req->parm_len = info->resp_parm_len; + + return 0; +} + +static int irdma_vchnl_req_send_sync(struct irdma_sc_dev *dev, + struct irdma_vchnl_req_init_info *info) +{ + u16 resp_len = sizeof(dev->vc_recv_buf); + struct irdma_vchnl_req vchnl_req = {}; + u16 msg_len; + u8 *msg; + int ret; + + ret = irdma_alloc_vchnl_req_msg(&vchnl_req, info); + if (ret) + return ret; + + msg_len = vchnl_req.vchnl_msg->buf_len; + msg = (u8 *)vchnl_req.vchnl_msg; + + mutex_lock(&dev->vchnl_mutex); + ret = irdma_vchnl_send_sync(dev, msg, msg_len, dev->vc_recv_buf, + &resp_len); + dev->vc_recv_len = resp_len; + if (ret) + goto exit; + + ret = irdma_vchnl_req_get_resp(dev, &vchnl_req); +exit: + mutex_unlock(&dev->vchnl_mutex); + ibdev_dbg(to_ibdev(dev), + "VIRT: virtual channel send %s caller: %pS ret=%d op=%u op_ver=%u req_len=%u parm_len=%u resp_len=%u\n", + !ret ? "SUCCEEDS" : "FAILS", __builtin_return_address(0), + ret, vchnl_req.vchnl_msg->op_code, + vchnl_req.vchnl_msg->op_ver, vchnl_req.vchnl_msg->buf_len, + vchnl_req.parm_len, vchnl_req.resp_len); + irdma_free_vchnl_req_msg(&vchnl_req); + + return ret; +} + +/** + * irdma_vchnl_req_get_ver - Request Channel version + * @dev: RDMA device pointer + * @ver_req: Virtual channel version requested + * @ver_res: Virtual channel version response + */ +int irdma_vchnl_req_get_ver(struct irdma_sc_dev *dev, u16 ver_req, u32 *ver_res) +{ + struct irdma_vchnl_req_init_info info = {}; + int ret; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_GET_VER; + info.op_ver = ver_req; + info.resp_parm = ver_res; + info.resp_parm_len = sizeof(*ver_res); + + ret = irdma_vchnl_req_send_sync(dev, &info); + if (ret) + return ret; + + if (*ver_res < IRDMA_VCHNL_CHNL_VER_MIN) { + ibdev_dbg(to_ibdev(dev), + "VIRT: %s unsupported vchnl version 0x%0x\n", + __func__, *ver_res); + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * irdma_vchnl_req_get_hmc_fcn - Request VF HMC Function + * @dev: RDMA device pointer + */ +int irdma_vchnl_req_get_hmc_fcn(struct irdma_sc_dev *dev) +{ + struct irdma_vchnl_req_hmc_info req_hmc = {}; + struct irdma_vchnl_resp_hmc_info resp_hmc = {}; + struct irdma_vchnl_req_init_info info = {}; + int ret; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_GET_HMC_FCN; + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + info.op_ver = IRDMA_VCHNL_OP_GET_HMC_FCN_V2; + req_hmc.protocol_used = dev->protocol_used; + info.req_parm_len = sizeof(req_hmc); + info.req_parm = &req_hmc; + info.resp_parm = &resp_hmc; + info.resp_parm_len = sizeof(resp_hmc); + } + + ret = irdma_vchnl_req_send_sync(dev, &info); + + if (ret) + return ret; + + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + int i; + + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + dev->qos[i].qs_handle = resp_hmc.qs_handle[i]; + dev->qos[i].valid = true; + } + } + return 0; +} + +/** + * irdma_vchnl_req_put_hmc_fcn - Free VF HMC Function + * @dev: 
RDMA device pointer + */ +int irdma_vchnl_req_put_hmc_fcn(struct irdma_sc_dev *dev) +{ + struct irdma_vchnl_req_init_info info = {}; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_PUT_HMC_FCN; + info.op_ver = IRDMA_VCHNL_OP_PUT_HMC_FCN_V0; + + return irdma_vchnl_req_send_sync(dev, &info); +} + +/** + * irdma_vchnl_req_get_caps - Request RDMA capabilities + * @dev: RDMA device pointer + */ +int irdma_vchnl_req_get_caps(struct irdma_sc_dev *dev) +{ + struct irdma_vchnl_req_init_info info = {}; + int ret; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_GET_RDMA_CAPS; + info.op_ver = IRDMA_VCHNL_OP_GET_RDMA_CAPS_V0; + info.resp_parm = &dev->vc_caps; + info.resp_parm_len = sizeof(dev->vc_caps); + + ret = irdma_vchnl_req_send_sync(dev, &info); + + if (ret) + return ret; + + if (dev->vc_caps.hw_rev > IRDMA_GEN_MAX || + dev->vc_caps.hw_rev < IRDMA_GEN_2) { + ibdev_dbg(to_ibdev(dev), + "ERR: %s unsupported hw_rev version 0x%0x\n", + __func__, dev->vc_caps.hw_rev); + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * irdma_vchnl_req_get_resp - Receive the inbound vchnl response. + * @dev: Dev pointer + * @vchnl_req: Vchannel request + */ +int irdma_vchnl_req_get_resp(struct irdma_sc_dev *dev, + struct irdma_vchnl_req *vchnl_req) +{ + struct irdma_vchnl_resp_buf *vchnl_msg_resp = + (struct irdma_vchnl_resp_buf *)dev->vc_recv_buf; + u16 resp_len; + int ret; + + if ((uintptr_t)vchnl_req != (uintptr_t)vchnl_msg_resp->op_ctx) { + ibdev_dbg(to_ibdev(dev), + "VIRT: error vchnl context value does not match\n"); + return -EBADMSG; + } + + resp_len = dev->vc_recv_len - sizeof(*vchnl_msg_resp); + resp_len = min(resp_len, vchnl_req->parm_len); + + ret = irdma_vchnl_req_verify_resp(vchnl_req, resp_len); + if (ret) + return ret; + + ret = (int)vchnl_msg_resp->op_ret; + if (ret) + return ret; + + vchnl_req->resp_len = 0; + if (vchnl_req->parm_len && vchnl_req->parm && resp_len) { + memcpy(vchnl_req->parm, vchnl_msg_resp->buf, resp_len); + vchnl_req->resp_len = resp_len; + ibdev_dbg(to_ibdev(dev), "VIRT: Got response, data size %u\n", + resp_len); + } + + return 0; +} diff --git a/drivers/infiniband/hw/irdma/virtchnl.h b/drivers/infiniband/hw/irdma/virtchnl.h new file mode 100644 index 000000000000..fb28fa09763b --- /dev/null +++ b/drivers/infiniband/hw/irdma/virtchnl.h @@ -0,0 +1,96 @@ +/* SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB */ +/* Copyright (c) 2015 - 2024 Intel Corporation */ +#ifndef IRDMA_VIRTCHNL_H +#define IRDMA_VIRTCHNL_H + +#include "hmc.h" +#include "irdma.h" + +/* IRDMA_VCHNL_CHNL_VER_V0 is for legacy hw, no longer supported. 
*/ +#define IRDMA_VCHNL_CHNL_VER_V2 2 +#define IRDMA_VCHNL_CHNL_VER_MIN IRDMA_VCHNL_CHNL_VER_V2 +#define IRDMA_VCHNL_CHNL_VER_MAX IRDMA_VCHNL_CHNL_VER_V2 +#define IRDMA_VCHNL_OP_GET_HMC_FCN_V0 0 +#define IRDMA_VCHNL_OP_GET_HMC_FCN_V1 1 +#define IRDMA_VCHNL_OP_GET_HMC_FCN_V2 2 +#define IRDMA_VCHNL_OP_PUT_HMC_FCN_V0 0 +#define IRDMA_VCHNL_OP_GET_RDMA_CAPS_V0 0 +#define IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE 1 + +enum irdma_vchnl_ops { + IRDMA_VCHNL_OP_GET_VER = 0, + IRDMA_VCHNL_OP_GET_HMC_FCN = 1, + IRDMA_VCHNL_OP_PUT_HMC_FCN = 2, + IRDMA_VCHNL_OP_GET_RDMA_CAPS = 13, +}; + +struct irdma_vchnl_req_hmc_info { + u8 protocol_used; + u8 disable_qos; +} __packed; + +struct irdma_vchnl_resp_hmc_info { + u16 hmc_func; + u16 qs_handle[IRDMA_MAX_USER_PRIORITY]; +} __packed; + +struct irdma_vchnl_op_buf { + u16 op_code; + u16 op_ver; + u16 buf_len; + u16 rsvd; + u64 op_ctx; + u8 buf[]; +} __packed; + +struct irdma_vchnl_resp_buf { + u64 op_ctx; + u16 buf_len; + s16 op_ret; + u16 rsvd[2]; + u8 buf[]; +} __packed; + +struct irdma_vchnl_rdma_caps { + u8 hw_rev; + u16 cqp_timeout_s; + u16 cqp_def_timeout_s; + u16 max_hw_push_len; +} __packed; + +struct irdma_vchnl_init_info { + struct workqueue_struct *vchnl_wq; + enum irdma_vers hw_rev; + bool privileged; + bool is_pf; +}; + +struct irdma_vchnl_req { + struct irdma_vchnl_op_buf *vchnl_msg; + void *parm; + u32 vf_id; + u16 parm_len; + u16 resp_len; +}; + +struct irdma_vchnl_req_init_info { + void *req_parm; + void *resp_parm; + u16 req_parm_len; + u16 resp_parm_len; + u16 op_code; + u16 op_ver; +} __packed; + +int irdma_sc_vchnl_init(struct irdma_sc_dev *dev, + struct irdma_vchnl_init_info *info); +int irdma_vchnl_send_sync(struct irdma_sc_dev *dev, u8 *msg, u16 len, + u8 *recv_msg, u16 *recv_len); +int irdma_vchnl_req_get_ver(struct irdma_sc_dev *dev, u16 ver_req, + u32 *ver_res); +int irdma_vchnl_req_get_hmc_fcn(struct irdma_sc_dev *dev); +int irdma_vchnl_req_put_hmc_fcn(struct irdma_sc_dev *dev); +int irdma_vchnl_req_get_caps(struct irdma_sc_dev *dev); +int irdma_vchnl_req_get_resp(struct irdma_sc_dev *dev, + struct irdma_vchnl_req *vc_req); +#endif /* IRDMA_VIRTCHNL_H */ From patchwork Sat Aug 24 03:19:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776201 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C7C0D41C6C; Sat, 24 Aug 2024 03:20:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469650; cv=none; b=p5N1fhT0TqzhrwZ2CSwJoO+SPVoCCMdDWyh26mNV6Ji9moznRGRhJ0X/qxmCGCVTQMb+C9MzdMKw3xi10pCKWVCEwRZOdAkaCNaJ5efaTzdzxDR4UkxBzIO9vmAmpOk/nrst7sEE71rtGIn/vjiWe3guOpzBTYA5eJBy/z+Oesc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469650; c=relaxed/simple; bh=O9cpMYG5AsWcIcTxzcx8hrNVfQz6Wf1ZWvqRwEA1FoY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Lg6cx5FRzYJaymXAxNpqvd8ZACVeu/N1LvhwTcLnJqdRGjMfSUokYCk1dzWc5XLp2EcqukozohGYNAcZ0MH/B8JKuBiO350H9S61af/19wvzJhECPrH2tNp2dZgzrteW4ma69Ucf4q5oCxje6yAYUkmVonJg9sk0e1KurthSACE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass 
(2048-bit key) header.d=intel.com header.i=@intel.com header.b=fkr21fn0; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="fkr21fn0" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469649; x=1756005649; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=O9cpMYG5AsWcIcTxzcx8hrNVfQz6Wf1ZWvqRwEA1FoY=; b=fkr21fn0Y4SzMx9DB6w7PxqBmIQFWWVnKKed3A0zCcAeJAYXXwbdx8WA uxzgglrpG7CMYFm2uq2Nsz5Qb+DskSh3Vudgp3fKUaBNkEK9kaaTJB9lp ukFpcUq4jJEUvPz6YmicJQGHJwZ8tkeK+VeFh2gQX4KKKtexdzxfmRI6K 0L/nVgOelijasjyj19nB5zy9T6hMEdftsi/VHbpp9DzqCrxqNfDNI4lDB OlfXWaT/JaPZqBdbHXIqognErCSCmdNukc1jTdh0N2iX5vr6aSDllfPtR Eam0cbGy7iFjUKIt9ux6k5gJFkX18BfuS3Fucd/fKFajLDCKpbo8FyLqL Q==; X-CSE-ConnectionGUID: J8PUP097S1CcOg3mGdxZZQ== X-CSE-MsgGUID: y2PMesQvT0u+7yWEqptHnw== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187800" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187800" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:45 -0700 X-CSE-ConnectionGUID: 3atypH53RTCEq9vzawKrSA== X-CSE-MsgGUID: oZjzGdpuQFWtet5RMG+J3A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492106" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:44 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Christopher Bednarz , Tatyana Nikolova Subject: [RFC v2 12/25] RDMA/irdma: Discover and set up GEN3 hardware register layout Date: Fri, 23 Aug 2024 22:19:11 -0500 Message-Id: <20240824031924.421-13-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Christopher Bednarz Discover the hardware register layout for GEN3 devices through an RDMA virtual channel operation with the Control Plane (CP). Set up the corresponding hardware attributes specific to GEN3 devices. 
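As an aside for reviewers unfamiliar with the mechanism: the register layout arrives from the CP as a sentinel-terminated array of (reg_id, offset) entries, and the driver translates each CP-assigned id into a local hw_regs[] index through a lookup table. The stand-alone sketch below only illustrates that table-driven, sentinel-terminated lookup; the names used here (reg_map_entry, lookup_reg_index, INVALID_ID) are illustrative stand-ins and are not part of this patch.

/* Hypothetical, self-contained sketch of a sentinel-terminated id lookup. */
#include <stdint.h>

#define INVALID_ID 0xFFFF		/* plays the role of an "invalid id" sentinel */

struct reg_map_entry {
	uint16_t reg_id;	/* id assigned by the Control Plane */
	uint16_t reg_idx;	/* index into the local hw_regs[] array */
};

static const struct reg_map_entry reg_map[] = {
	{ 0 /* e.g. CQP tail */, 10 },
	{ 1 /* e.g. CQP doorbell */, 11 },
	{ INVALID_ID, INVALID_ID },	/* sentinel terminates the search */
};

/* Return the local register index for a CP id, or INVALID_ID if unknown. */
static uint16_t lookup_reg_index(uint16_t reg_id)
{
	const struct reg_map_entry *e;

	for (e = reg_map; e->reg_id != INVALID_ID; e++)
		if (e->reg_id == reg_id)
			return e->reg_idx;

	return INVALID_ID;
}

Unknown ids simply fall through to the sentinel and are skipped, which is what lets a driver built against an older table tolerate a CP that reports additional registers.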
Signed-off-by: Christopher Bednarz Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/Makefile | 1 + drivers/infiniband/hw/irdma/ctrl.c | 31 ++-- drivers/infiniband/hw/irdma/defs.h | 12 +- drivers/infiniband/hw/irdma/i40iw_hw.c | 2 + drivers/infiniband/hw/irdma/i40iw_hw.h | 2 + drivers/infiniband/hw/irdma/icrdma_hw.c | 3 + drivers/infiniband/hw/irdma/icrdma_hw.h | 5 +- drivers/infiniband/hw/irdma/ig3rdma_hw.c | 65 +++++++++ drivers/infiniband/hw/irdma/ig3rdma_hw.h | 18 +++ drivers/infiniband/hw/irdma/irdma.h | 5 + drivers/infiniband/hw/irdma/virtchnl.c | 178 +++++++++++++++++++++++ drivers/infiniband/hw/irdma/virtchnl.h | 44 ++++++ 12 files changed, 351 insertions(+), 15 deletions(-) create mode 100644 drivers/infiniband/hw/irdma/ig3rdma_hw.c diff --git a/drivers/infiniband/hw/irdma/Makefile b/drivers/infiniband/hw/irdma/Makefile index 3aa63b913377..03ceb9e5475f 100644 --- a/drivers/infiniband/hw/irdma/Makefile +++ b/drivers/infiniband/hw/irdma/Makefile @@ -16,6 +16,7 @@ irdma-objs := cm.o \ ig3rdma_if.o\ icrdma_if.o \ icrdma_hw.o \ + ig3rdma_hw.o\ main.o \ pble.o \ puda.o \ diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 9d7b151a6b95..34875cb2ceff 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -5677,6 +5677,9 @@ static inline void irdma_sc_init_hw(struct irdma_sc_dev *dev) case IRDMA_GEN_2: icrdma_init_hw(dev); break; + case IRDMA_GEN_3: + ig3rdma_init_hw(dev); + break; } } @@ -5744,18 +5747,26 @@ int irdma_sc_dev_init(enum irdma_vers ver, struct irdma_sc_dev *dev, irdma_sc_init_hw(dev); - if (irdma_wait_pe_ready(dev)) - return -ETIMEDOUT; + if (dev->privileged) { + if (irdma_wait_pe_ready(dev)) + return -ETIMEDOUT; - val = readl(dev->hw_regs[IRDMA_GLPCI_LBARCTRL]); - db_size = (u8)FIELD_GET(IRDMA_GLPCI_LBARCTRL_PE_DB_SIZE, val); - if (db_size != IRDMA_PE_DB_SIZE_4M && db_size != IRDMA_PE_DB_SIZE_8M) { - ibdev_dbg(to_ibdev(dev), - "DEV: RDMA PE doorbell is not enabled in CSR val 0x%x db_size=%d\n", - val, db_size); - return -ENODEV; + val = readl(dev->hw_regs[IRDMA_GLPCI_LBARCTRL]); + db_size = (u8)FIELD_GET(IRDMA_GLPCI_LBARCTRL_PE_DB_SIZE, val); + if (db_size != IRDMA_PE_DB_SIZE_4M && + db_size != IRDMA_PE_DB_SIZE_8M) { + ibdev_dbg(to_ibdev(dev), + "DEV: RDMA PE doorbell is not enabled in CSR val 0x%x db_size=%d\n", + val, db_size); + return -ENODEV; + } + } else { + ret_code = irdma_vchnl_req_get_reg_layout(dev); + if (ret_code) + ibdev_dbg(to_ibdev(dev), + "DEV: Get Register layout failed ret = %d\n", + ret_code); } - dev->db_addr = dev->hw->hw_addr + (uintptr_t)dev->hw_regs[IRDMA_DB_ADDR_OFFSET]; return ret_code; } diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 7825896c445c..fe75737554b2 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -115,6 +115,7 @@ enum irdma_protocol_used { #define IRDMA_FEATURE_BUF_SIZE (8 * IRDMA_MAX_FEATURES) #define ENABLE_LOC_MEM 63 +#define IRDMA_ATOMICS_ALLOWED_BIT 1 #define MAX_PBLE_PER_SD 0x40000 #define MAX_PBLE_SD_PER_FCN 0x400 #define MAX_MR_PER_SD 0x8000 @@ -127,7 +128,7 @@ enum irdma_protocol_used { #define IRDMA_QP_SW_MAX_RQ_QUANTA 32768 #define IRDMA_MAX_QP_WRS(max_quanta_per_wr) \ ((IRDMA_QP_SW_MAX_WQ_QUANTA - IRDMA_SQ_RSVD) / (max_quanta_per_wr)) - +#define IRDMA_SRQ_MAX_QUANTA 262144 #define IRDMAQP_TERM_SEND_TERM_AND_FIN 0 #define IRDMAQP_TERM_SEND_TERM_ONLY 1 #define IRDMAQP_TERM_SEND_FIN_ONLY 2 @@ -153,8 +154,13 @@ enum irdma_protocol_used { #define 
IRDMA_SQ_RSVD 258 #define IRDMA_RQ_RSVD 1 -#define IRDMA_FEATURE_RTS_AE 1ULL -#define IRDMA_FEATURE_CQ_RESIZE 2ULL +#define IRDMA_FEATURE_RTS_AE BIT_ULL(0) +#define IRDMA_FEATURE_CQ_RESIZE BIT_ULL(1) +#define IRDMA_FEATURE_64_BYTE_CQE BIT_ULL(5) +#define IRDMA_FEATURE_ATOMIC_OPS BIT_ULL(6) +#define IRDMA_FEATURE_SRQ BIT_ULL(7) +#define IRDMA_FEATURE_CQE_TIMESTAMPING BIT_ULL(8) + #define IRDMAQP_OP_RDMA_WRITE 0x00 #define IRDMAQP_OP_RDMA_READ 0x01 #define IRDMAQP_OP_RDMA_SEND 0x03 diff --git a/drivers/infiniband/hw/irdma/i40iw_hw.c b/drivers/infiniband/hw/irdma/i40iw_hw.c index ce61a27cb1f6..60c1f2b1811d 100644 --- a/drivers/infiniband/hw/irdma/i40iw_hw.c +++ b/drivers/infiniband/hw/irdma/i40iw_hw.c @@ -85,6 +85,7 @@ static u64 i40iw_masks[IRDMA_MAX_MASKS] = { I40E_CQPSQ_CQ_CEQID, I40E_CQPSQ_CQ_CQID, I40E_COMMIT_FPM_CQCNT, + I40E_CQPSQ_UPESD_HMCFNID, }; static u64 i40iw_shifts[IRDMA_MAX_SHIFTS] = { @@ -94,6 +95,7 @@ static u64 i40iw_shifts[IRDMA_MAX_SHIFTS] = { I40E_CQPSQ_CQ_CEQID_S, I40E_CQPSQ_CQ_CQID_S, I40E_COMMIT_FPM_CQCNT_S, + I40E_CQPSQ_UPESD_HMCFNID_S, }; /** diff --git a/drivers/infiniband/hw/irdma/i40iw_hw.h b/drivers/infiniband/hw/irdma/i40iw_hw.h index e1db84d8a62c..0095b327afcc 100644 --- a/drivers/infiniband/hw/irdma/i40iw_hw.h +++ b/drivers/infiniband/hw/irdma/i40iw_hw.h @@ -123,6 +123,8 @@ #define I40E_CQPSQ_CQ_CQID GENMASK_ULL(15, 0) #define I40E_COMMIT_FPM_CQCNT_S 0 #define I40E_COMMIT_FPM_CQCNT GENMASK_ULL(17, 0) +#define I40E_CQPSQ_UPESD_HMCFNID_S 0 +#define I40E_CQPSQ_UPESD_HMCFNID GENMASK_ULL(5, 0) #define I40E_VSIQF_CTL(_VSI) (0x0020D800 + ((_VSI) * 4)) diff --git a/drivers/infiniband/hw/irdma/icrdma_hw.c b/drivers/infiniband/hw/irdma/icrdma_hw.c index 941d3edffadb..32f26284a788 100644 --- a/drivers/infiniband/hw/irdma/icrdma_hw.c +++ b/drivers/infiniband/hw/irdma/icrdma_hw.c @@ -38,6 +38,7 @@ static u64 icrdma_masks[IRDMA_MAX_MASKS] = { ICRDMA_CQPSQ_CQ_CEQID, ICRDMA_CQPSQ_CQ_CQID, ICRDMA_COMMIT_FPM_CQCNT, + ICRDMA_CQPSQ_UPESD_HMCFNID, }; static u64 icrdma_shifts[IRDMA_MAX_SHIFTS] = { @@ -47,6 +48,7 @@ static u64 icrdma_shifts[IRDMA_MAX_SHIFTS] = { ICRDMA_CQPSQ_CQ_CEQID_S, ICRDMA_CQPSQ_CQ_CQID_S, ICRDMA_COMMIT_FPM_CQCNT_S, + ICRDMA_CQPSQ_UPESD_HMCFNID_S, }; /** @@ -194,6 +196,7 @@ void icrdma_init_hw(struct irdma_sc_dev *dev) dev->hw_attrs.max_hw_ord = ICRDMA_MAX_ORD_SIZE; dev->hw_attrs.max_stat_inst = ICRDMA_MAX_STATS_COUNT; dev->hw_attrs.max_stat_idx = IRDMA_HW_STAT_INDEX_MAX_GEN_2; + dev->hw_attrs.max_hw_device_pages = ICRDMA_MAX_PUSH_PAGE_COUNT; dev->hw_attrs.uk_attrs.min_hw_wq_size = ICRDMA_MIN_WQ_SIZE; dev->hw_attrs.uk_attrs.max_hw_sq_chunk = IRDMA_MAX_QUANTA_PER_WR; diff --git a/drivers/infiniband/hw/irdma/icrdma_hw.h b/drivers/infiniband/hw/irdma/icrdma_hw.h index 697b9572b5c6..d97944ab45da 100644 --- a/drivers/infiniband/hw/irdma/icrdma_hw.h +++ b/drivers/infiniband/hw/irdma/icrdma_hw.h @@ -58,14 +58,15 @@ #define ICRDMA_CQPSQ_CQ_CQID GENMASK_ULL(18, 0) #define ICRDMA_COMMIT_FPM_CQCNT_S 0 #define ICRDMA_COMMIT_FPM_CQCNT GENMASK_ULL(19, 0) - +#define ICRDMA_CQPSQ_UPESD_HMCFNID_S 0 +#define ICRDMA_CQPSQ_UPESD_HMCFNID GENMASK_ULL(5, 0) enum icrdma_device_caps_const { ICRDMA_MAX_STATS_COUNT = 128, ICRDMA_MAX_IRD_SIZE = 127, ICRDMA_MAX_ORD_SIZE = 255, ICRDMA_MIN_WQ_SIZE = 8 /* WQEs */, - + ICRDMA_MAX_PUSH_PAGE_COUNT = 256, }; void icrdma_init_hw(struct irdma_sc_dev *dev); diff --git a/drivers/infiniband/hw/irdma/ig3rdma_hw.c b/drivers/infiniband/hw/irdma/ig3rdma_hw.c new file mode 100644 index 000000000000..83ef6af82a8f --- /dev/null +++ 
b/drivers/infiniband/hw/irdma/ig3rdma_hw.c @@ -0,0 +1,65 @@ +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB +/* Copyright (c) 2018 - 2024 Intel Corporation */ +#include "osdep.h" +#include "type.h" +#include "protos.h" +#include "ig3rdma_hw.h" + +void ig3rdma_init_hw(struct irdma_sc_dev *dev) +{ + dev->hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_3; + dev->hw_attrs.uk_attrs.max_hw_wq_frags = IG3RDMA_MAX_WQ_FRAGMENT_COUNT; + dev->hw_attrs.uk_attrs.max_hw_read_sges = IG3RDMA_MAX_SGE_RD; + dev->hw_attrs.uk_attrs.max_hw_sq_chunk = IRDMA_MAX_QUANTA_PER_WR; + dev->hw_attrs.first_hw_vf_fpm_id = 0; + dev->hw_attrs.max_hw_vf_fpm_id = IG3_MAX_APFS + IG3_MAX_AVFS; + dev->hw_attrs.uk_attrs.feature_flags |= IRDMA_FEATURE_64_BYTE_CQE; + if (dev->feature_info[IRDMA_FTN_FLAGS] & IRDMA_ATOMICS_ALLOWED_BIT) + dev->hw_attrs.uk_attrs.feature_flags |= + IRDMA_FEATURE_ATOMIC_OPS; + dev->hw_attrs.uk_attrs.feature_flags |= IRDMA_FEATURE_CQE_TIMESTAMPING; + + dev->hw_attrs.uk_attrs.feature_flags |= IRDMA_FEATURE_SRQ; + dev->hw_attrs.uk_attrs.feature_flags |= IRDMA_FEATURE_RTS_AE | + IRDMA_FEATURE_CQ_RESIZE; + dev->hw_attrs.page_size_cap = SZ_4K | SZ_2M | SZ_1G; + dev->hw_attrs.max_hw_ird = IG3RDMA_MAX_IRD_SIZE; + dev->hw_attrs.max_hw_ord = IG3RDMA_MAX_ORD_SIZE; + dev->hw_attrs.uk_attrs.min_hw_wq_size = IG3RDMA_MIN_WQ_SIZE; + dev->hw_attrs.uk_attrs.max_hw_srq_quanta = IRDMA_SRQ_MAX_QUANTA; + dev->hw_attrs.uk_attrs.max_hw_inline = IG3RDMA_MAX_INLINE_DATA_SIZE; + dev->hw_attrs.max_hw_device_pages = + dev->is_pf ? IG3RDMA_MAX_PF_PUSH_PAGE_COUNT : IG3RDMA_MAX_VF_PUSH_PAGE_COUNT; +} + +static void __iomem *__ig3rdma_get_reg_addr(struct irdma_mmio_region *region, u64 reg_offset) +{ + if (reg_offset >= region->offset && + reg_offset < (region->offset + region->len)) { + reg_offset -= region->offset; + + return region->addr + reg_offset; + } + + return NULL; +} + +void __iomem *ig3rdma_get_reg_addr(struct irdma_hw *hw, u64 reg_offset) +{ + u8 __iomem *reg_addr; + int i; + + reg_addr = __ig3rdma_get_reg_addr(&hw->rdma_reg, reg_offset); + if (reg_addr) + return reg_addr; + + for (i = 0; i < hw->num_io_regions; i++) { + reg_addr = __ig3rdma_get_reg_addr(&hw->io_regs[i], reg_offset); + if (reg_addr) + return reg_addr; + } + + WARN_ON_ONCE(1); + + return NULL; +} diff --git a/drivers/infiniband/hw/irdma/ig3rdma_hw.h b/drivers/infiniband/hw/irdma/ig3rdma_hw.h index 4c3d186bbe81..d07933082788 100644 --- a/drivers/infiniband/hw/irdma/ig3rdma_hw.h +++ b/drivers/infiniband/hw/irdma/ig3rdma_hw.h @@ -3,9 +3,27 @@ #ifndef IG3RDMA_HW_H #define IG3RDMA_HW_H +#define IG3_MAX_APFS 1 +#define IG3_MAX_AVFS 0 + #define IG3_PF_RDMA_REGION_OFFSET 0xBC00000 #define IG3_PF_RDMA_REGION_LEN 0x401000 #define IG3_VF_RDMA_REGION_OFFSET 0x8C00 #define IG3_VF_RDMA_REGION_LEN 0x8400 +enum ig3rdma_device_caps_const { + IG3RDMA_MAX_WQ_FRAGMENT_COUNT = 14, + IG3RDMA_MAX_SGE_RD = 14, + + IG3RDMA_MAX_STATS_COUNT = 128, + + IG3RDMA_MAX_IRD_SIZE = 2048, + IG3RDMA_MAX_ORD_SIZE = 2048, + IG3RDMA_MIN_WQ_SIZE = 16 /* WQEs */, + IG3RDMA_MAX_INLINE_DATA_SIZE = 216, + IG3RDMA_MAX_PF_PUSH_PAGE_COUNT = 8192, + IG3RDMA_MAX_VF_PUSH_PAGE_COUNT = 16, +}; + +void __iomem *ig3rdma_get_reg_addr(struct irdma_hw *hw, u64 reg_offset); #endif /* IG3RDMA_HW_H*/ diff --git a/drivers/infiniband/hw/irdma/irdma.h b/drivers/infiniband/hw/irdma/irdma.h index 769170445f88..4dc6bf5b2e97 100644 --- a/drivers/infiniband/hw/irdma/irdma.h +++ b/drivers/infiniband/hw/irdma/irdma.h @@ -67,6 +67,7 @@ enum irdma_shifts { IRDMA_CQPSQ_CQ_CEQID_S, IRDMA_CQPSQ_CQ_CQID_S, IRDMA_COMMIT_FPM_CQCNT_S, + 
IRDMA_CQPSQ_UPESD_HMCFNID_S, IRDMA_MAX_SHIFTS, }; @@ -77,6 +78,7 @@ enum irdma_masks { IRDMA_CQPSQ_CQ_CEQID_M, IRDMA_CQPSQ_CQ_CQID_M, IRDMA_COMMIT_FPM_CQCNT_M, + IRDMA_CQPSQ_UPESD_HMCFNID_M, IRDMA_MAX_MASKS, /* Must be last entry */ }; @@ -121,6 +123,7 @@ struct irdma_uk_attrs { u32 max_hw_wq_quanta; u32 min_hw_cq_size; u32 max_hw_cq_size; + u32 max_hw_srq_quanta; u16 max_hw_sq_chunk; u16 min_hw_wq_size; u8 hw_rev; @@ -156,4 +159,6 @@ struct irdma_hw_attrs { void i40iw_init_hw(struct irdma_sc_dev *dev); void icrdma_init_hw(struct irdma_sc_dev *dev); +void ig3rdma_init_hw(struct irdma_sc_dev *dev); +void __iomem *ig3rdma_get_reg_addr(struct irdma_hw *hw, u64 reg_offset); #endif /* IRDMA_H*/ diff --git a/drivers/infiniband/hw/irdma/virtchnl.c b/drivers/infiniband/hw/irdma/virtchnl.c index 2abfc3961f3e..fcb8ef2dd28b 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.c +++ b/drivers/infiniband/hw/irdma/virtchnl.c @@ -9,6 +9,51 @@ #include "ws.h" #include "i40iw_hw.h" +struct vchnl_reg_map_elem { + u16 reg_id; + u16 reg_idx; + bool pg_rel; +}; + +struct vchnl_regfld_map_elem { + u16 regfld_id; + u16 regfld_idx; +}; + +static struct vchnl_reg_map_elem vchnl_reg_map[] = { + {IRDMA_VCHNL_REG_ID_CQPTAIL, IRDMA_CQPTAIL, false}, + {IRDMA_VCHNL_REG_ID_CQPDB, IRDMA_CQPDB, false}, + {IRDMA_VCHNL_REG_ID_CCQPSTATUS, IRDMA_CCQPSTATUS, false}, + {IRDMA_VCHNL_REG_ID_CCQPHIGH, IRDMA_CCQPHIGH, false}, + {IRDMA_VCHNL_REG_ID_CCQPLOW, IRDMA_CCQPLOW, false}, + {IRDMA_VCHNL_REG_ID_CQARM, IRDMA_CQARM, false}, + {IRDMA_VCHNL_REG_ID_CQACK, IRDMA_CQACK, false}, + {IRDMA_VCHNL_REG_ID_AEQALLOC, IRDMA_AEQALLOC, false}, + {IRDMA_VCHNL_REG_ID_CQPERRCODES, IRDMA_CQPERRCODES, false}, + {IRDMA_VCHNL_REG_ID_WQEALLOC, IRDMA_WQEALLOC, false}, + {IRDMA_VCHNL_REG_ID_DB_ADDR_OFFSET, IRDMA_DB_ADDR_OFFSET, false }, + {IRDMA_VCHNL_REG_ID_DYN_CTL, IRDMA_GLINT_DYN_CTL, false }, + {IRDMA_VCHNL_REG_INV_ID, IRDMA_VCHNL_REG_INV_ID, false } +}; + +static struct vchnl_regfld_map_elem vchnl_regfld_map[] = { + {IRDMA_VCHNL_REGFLD_ID_CCQPSTATUS_CQP_OP_ERR, IRDMA_CCQPSTATUS_CCQP_ERR_M}, + {IRDMA_VCHNL_REGFLD_ID_CCQPSTATUS_CCQP_DONE, IRDMA_CCQPSTATUS_CCQP_DONE_M}, + {IRDMA_VCHNL_REGFLD_ID_CQPSQ_STAG_PDID, IRDMA_CQPSQ_STAG_PDID_M}, + {IRDMA_VCHNL_REGFLD_ID_CQPSQ_CQ_CEQID, IRDMA_CQPSQ_CQ_CEQID_M}, + {IRDMA_VCHNL_REGFLD_ID_CQPSQ_CQ_CQID, IRDMA_CQPSQ_CQ_CQID_M}, + {IRDMA_VCHNL_REGFLD_ID_COMMIT_FPM_CQCNT, IRDMA_COMMIT_FPM_CQCNT_M}, + {IRDMA_VCHNL_REGFLD_ID_UPESD_HMCN_ID, IRDMA_CQPSQ_UPESD_HMCFNID_M}, + {IRDMA_VCHNL_REGFLD_INV_ID, IRDMA_VCHNL_REGFLD_INV_ID} +}; + +#define IRDMA_VCHNL_REG_COUNT ARRAY_SIZE(vchnl_reg_map) +#define IRDMA_VCHNL_REGFLD_COUNT ARRAY_SIZE(vchnl_regfld_map) +#define IRDMA_VCHNL_REGFLD_BUF_SIZE \ + (IRDMA_VCHNL_REG_COUNT * sizeof(struct irdma_vchnl_reg_info) + \ + IRDMA_VCHNL_REGFLD_COUNT * sizeof(struct irdma_vchnl_reg_field_info)) +#define IRDMA_REGMAP_RESP_BUF_SIZE (IRDMA_VCHNL_RESP_MIN_SIZE + IRDMA_VCHNL_REGFLD_BUF_SIZE) + /** * irdma_sc_vchnl_init - Initialize dev virtchannel and get hw_rev * @dev: dev structure to update @@ -62,6 +107,8 @@ static int irdma_vchnl_req_verify_resp(struct irdma_vchnl_req *vchnl_req, if (resp_len < IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE) return -EBADMSG; break; + case IRDMA_VCHNL_OP_GET_REG_LAYOUT: + break; default: return -EOPNOTSUPP; } @@ -135,6 +182,137 @@ static int irdma_vchnl_req_send_sync(struct irdma_sc_dev *dev, return ret; } +/** + * irdma_vchnl_req_get_reg_layout - Get Register Layout + * @dev: RDMA device pointer + */ +int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev) +{ + u16 
reg_idx, reg_id, tmp_reg_id, regfld_idx, regfld_id, tmp_regfld_id; + struct irdma_vchnl_reg_field_info *regfld_array = NULL; + u8 resp_buffer[IRDMA_REGMAP_RESP_BUF_SIZE] = {}; + struct vchnl_regfld_map_elem *regfld_map_array; + struct irdma_vchnl_req_init_info info = {}; + struct vchnl_reg_map_elem *reg_map_array; + struct irdma_vchnl_reg_info *reg_array; + u8 num_bits, shift_cnt; + u16 buf_len = 0; + u64 bitmask; + u32 rindex; + int ret; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_GET_REG_LAYOUT; + info.op_ver = IRDMA_VCHNL_OP_GET_REG_LAYOUT_V0; + info.resp_parm = resp_buffer; + info.resp_parm_len = sizeof(resp_buffer); + + ret = irdma_vchnl_req_send_sync(dev, &info); + + if (ret) + return ret; + + /* parse the response buffer and update reg info*/ + /* Parse registers till invalid */ + /* Parse register fields till invalid */ + reg_array = (struct irdma_vchnl_reg_info *)resp_buffer; + for (rindex = 0; rindex < IRDMA_VCHNL_REG_COUNT; rindex++) { + buf_len += sizeof(struct irdma_vchnl_reg_info); + if (buf_len >= sizeof(resp_buffer)) + return -ENOMEM; + + regfld_array = + (struct irdma_vchnl_reg_field_info *)®_array[rindex + 1]; + reg_id = reg_array[rindex].reg_id; + if (reg_id == IRDMA_VCHNL_REG_INV_ID) + break; + + reg_id &= ~IRDMA_VCHNL_REG_PAGE_REL; + if (reg_id >= IRDMA_VCHNL_REG_COUNT) + return -EINVAL; + + /* search regmap for register index in hw_regs.*/ + reg_map_array = vchnl_reg_map; + do { + tmp_reg_id = reg_map_array->reg_id; + if (tmp_reg_id == reg_id) + break; + + reg_map_array++; + } while (tmp_reg_id != IRDMA_VCHNL_REG_INV_ID); + if (tmp_reg_id != reg_id) + continue; + + reg_idx = reg_map_array->reg_idx; + + /* Page relative, DB Offset do not need bar offset */ + if (reg_idx == IRDMA_DB_ADDR_OFFSET || + (reg_array[rindex].reg_id & IRDMA_VCHNL_REG_PAGE_REL)) { + dev->hw_regs[reg_idx] = + (u32 __iomem *)(uintptr_t)reg_array[rindex].reg_offset; + continue; + } + + /* Update the local HW struct */ + dev->hw_regs[reg_idx] = ig3rdma_get_reg_addr(dev->hw, + reg_array[rindex].reg_offset); + if (!dev->hw_regs[reg_idx]) + return -EINVAL; + } + + if (!regfld_array) + return -ENOMEM; + + /* set up doorbell variables using mapped DB page */ + dev->wqe_alloc_db = dev->hw_regs[IRDMA_WQEALLOC]; + dev->cq_arm_db = dev->hw_regs[IRDMA_CQARM]; + dev->aeq_alloc_db = dev->hw_regs[IRDMA_AEQALLOC]; + dev->cqp_db = dev->hw_regs[IRDMA_CQPDB]; + dev->cq_ack_db = dev->hw_regs[IRDMA_CQACK]; + + for (rindex = 0; rindex < IRDMA_VCHNL_REGFLD_COUNT; rindex++) { + buf_len += sizeof(struct irdma_vchnl_reg_field_info); + if ((buf_len - 1) > sizeof(resp_buffer)) + break; + + if (regfld_array[rindex].fld_id == IRDMA_VCHNL_REGFLD_INV_ID) + break; + + regfld_id = regfld_array[rindex].fld_id; + regfld_map_array = vchnl_regfld_map; + do { + tmp_regfld_id = regfld_map_array->regfld_id; + if (tmp_regfld_id == regfld_id) + break; + + regfld_map_array++; + } while (tmp_regfld_id != IRDMA_VCHNL_REGFLD_INV_ID); + + if (tmp_regfld_id != regfld_id) + continue; + + regfld_idx = regfld_map_array->regfld_idx; + + num_bits = regfld_array[rindex].fld_bits; + shift_cnt = regfld_array[rindex].fld_shift; + if ((num_bits + shift_cnt > 64) || !num_bits) { + ibdev_dbg(to_ibdev(dev), + "ERR: Invalid field mask id %d bits %d shift %d", + regfld_id, num_bits, shift_cnt); + + continue; + } + + bitmask = (1ULL << num_bits) - 1; + dev->hw_masks[regfld_idx] = bitmask << shift_cnt; + dev->hw_shifts[regfld_idx] = shift_cnt; + } + + return 0; +} + /** * irdma_vchnl_req_get_ver - Request Channel version * @dev: 
RDMA device pointer diff --git a/drivers/infiniband/hw/irdma/virtchnl.h b/drivers/infiniband/hw/irdma/virtchnl.h index fb28fa09763b..20526c0b4285 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.h +++ b/drivers/infiniband/hw/irdma/virtchnl.h @@ -14,13 +14,44 @@ #define IRDMA_VCHNL_OP_GET_HMC_FCN_V1 1 #define IRDMA_VCHNL_OP_GET_HMC_FCN_V2 2 #define IRDMA_VCHNL_OP_PUT_HMC_FCN_V0 0 +#define IRDMA_VCHNL_OP_GET_REG_LAYOUT_V0 0 #define IRDMA_VCHNL_OP_GET_RDMA_CAPS_V0 0 #define IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE 1 +#define IRDMA_VCHNL_REG_ID_CQPTAIL 0 +#define IRDMA_VCHNL_REG_ID_CQPDB 1 +#define IRDMA_VCHNL_REG_ID_CCQPSTATUS 2 +#define IRDMA_VCHNL_REG_ID_CCQPHIGH 3 +#define IRDMA_VCHNL_REG_ID_CCQPLOW 4 +#define IRDMA_VCHNL_REG_ID_CQARM 5 +#define IRDMA_VCHNL_REG_ID_CQACK 6 +#define IRDMA_VCHNL_REG_ID_AEQALLOC 7 +#define IRDMA_VCHNL_REG_ID_CQPERRCODES 8 +#define IRDMA_VCHNL_REG_ID_WQEALLOC 9 +#define IRDMA_VCHNL_REG_ID_IPCONFIG0 10 +#define IRDMA_VCHNL_REG_ID_DB_ADDR_OFFSET 11 +#define IRDMA_VCHNL_REG_ID_DYN_CTL 12 +#define IRDMA_VCHNL_REG_ID_AEQITRMASK 13 +#define IRDMA_VCHNL_REG_ID_CEQITRMASK 14 +#define IRDMA_VCHNL_REG_INV_ID 0xFFFF +#define IRDMA_VCHNL_REG_PAGE_REL 0x8000 + +#define IRDMA_VCHNL_REGFLD_ID_CCQPSTATUS_CQP_OP_ERR 2 +#define IRDMA_VCHNL_REGFLD_ID_CCQPSTATUS_CCQP_DONE 5 +#define IRDMA_VCHNL_REGFLD_ID_CQPSQ_STAG_PDID 6 +#define IRDMA_VCHNL_REGFLD_ID_CQPSQ_CQ_CEQID 7 +#define IRDMA_VCHNL_REGFLD_ID_CQPSQ_CQ_CQID 8 +#define IRDMA_VCHNL_REGFLD_ID_COMMIT_FPM_CQCNT 9 +#define IRDMA_VCHNL_REGFLD_ID_UPESD_HMCN_ID 10 +#define IRDMA_VCHNL_REGFLD_INV_ID 0xFFFF + +#define IRDMA_VCHNL_RESP_MIN_SIZE (sizeof(struct irdma_vchnl_resp_buf)) + enum irdma_vchnl_ops { IRDMA_VCHNL_OP_GET_VER = 0, IRDMA_VCHNL_OP_GET_HMC_FCN = 1, IRDMA_VCHNL_OP_PUT_HMC_FCN = 2, + IRDMA_VCHNL_OP_GET_REG_LAYOUT = 11, IRDMA_VCHNL_OP_GET_RDMA_CAPS = 13, }; @@ -65,6 +96,18 @@ struct irdma_vchnl_init_info { bool is_pf; }; +struct irdma_vchnl_reg_info { + u32 reg_offset; + u16 field_cnt; + u16 reg_id; /* High bit of reg_id: bar or page relative */ +}; + +struct irdma_vchnl_reg_field_info { + u8 fld_shift; + u8 fld_bits; + u16 fld_id; +}; + struct irdma_vchnl_req { struct irdma_vchnl_op_buf *vchnl_msg; void *parm; @@ -93,4 +136,5 @@ int irdma_vchnl_req_put_hmc_fcn(struct irdma_sc_dev *dev); int irdma_vchnl_req_get_caps(struct irdma_sc_dev *dev); int irdma_vchnl_req_get_resp(struct irdma_sc_dev *dev, struct irdma_vchnl_req *vc_req); +int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev); #endif /* IRDMA_VIRTCHNL_H */ From patchwork Sat Aug 24 03:19:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776202 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C8E0E54656; Sat, 24 Aug 2024 03:20:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469652; cv=none; b=fHXrycbVcsecIfhyXSQG0gFmjUdEs+4n14tRChYVkfuN8ChfYSub9P9hFjWkwF866ueEUXjBIZVt8XMfeuOkLdlvDM+aits6wE02M1XWdlHFDz+AKRyLbeRM7v1UL8kPX8HiFO3GRbsKs9v57v1WdJo039p1rCfUlvN5NOkwHv8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469652; c=relaxed/simple; bh=WFIRqGAQAl6IKBou3s6SLFumGaNC4AbqW5VSTBg+tjI=; 
h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=q66n57uC5QQQjQw3Akg+f2dwc2plWMqAHN7f9x7ujvId3MSIRsWAlW27wo4vY/VIz30Xso1zXpPVZlwHnCuRP50k4pcAzjpH/05XODS1RjsPfrwMsLEY0iuZorBxHftZOesykZA068ndf0vFYE+qNwKcFxiezGXzWG2Kq0223PA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=YgL8IivS; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="YgL8IivS" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469650; x=1756005650; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WFIRqGAQAl6IKBou3s6SLFumGaNC4AbqW5VSTBg+tjI=; b=YgL8IivSClEzZxhuFLqiQiUS4MnNeBM2br0T+zVyOlzac9GbdSV83HMI DNoiUVJq2HkDnkvkHzLro5hCMPYKW61G2hzct2zfULAq91mw+QtRL5Wlp obFMJ9zjjp0icmYFfJA/VJfFkiiPQAdP1yxYuQE+/SaUuo9EaFhh/OJKU XxLYB/X6Q8vd4F0Jkg8ZwvItA04OPRW2/qsxro/OGAq9IymfvVTWXsVbR 32kYnuBXDx10QDYFHnyP1554hmKiTvrfADlYt/7EORZy3XK48iFPF/26n Z1N8E9iu73PfP8RyZmyTVOm9xHsDNxr7928TIetUSem9itaS78+LkrBRz A==; X-CSE-ConnectionGUID: oo6m9K4pTyO1roHvmEaJcw== X-CSE-MsgGUID: ZWdvTmuWQBGnweuVDfqiwg== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187803" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187803" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:45 -0700 X-CSE-ConnectionGUID: FyPJ0gmUQayTjVnlIOZ0ow== X-CSE-MsgGUID: wJNzcmLcRIawxIjpmkzYKQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492110" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:45 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Krzysztof Czurylo , Tatyana Nikolova Subject: [RFC v2 13/25] RDMA/irdma: Add GEN3 CQP support with deferred completions Date: Fri, 23 Aug 2024 22:19:12 -0500 Message-Id: <20240824031924.421-14-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Krzysztof Czurylo GEN3 introduces asynchronous handling of Control QP (CQP) operations to minimize head-of-line blocking. Create the CQP using the updated GEN3- specific descriptor fields and implement the necessary support for this deferred completion mechanism. 
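As background for reviewers, the deferred-completion flow boils down to parking a request on a pending list under a ticket when its CQE reports the "deferred" minor code, and releasing matching entries later when the deferred-complete async event delivers that ticket. The minimal user-space sketch below shows only that bookkeeping; the names (pending_op, park_pending, complete_ticket) are illustrative and are not the driver's.

#include <stdint.h>

struct pending_op {
	struct pending_op *next;
	uint64_t scratch;	/* opaque cookie identifying the request */
	uint32_t ticket;	/* deferred-completion ticket from the CQE */
};

static struct pending_op *pending_head;

/* Park a request whose CQE carried the "deferred completion" minor code. */
static void park_pending(struct pending_op *op, uint64_t scratch, uint32_t ticket)
{
	op->scratch = scratch;
	op->ticket = ticket;
	op->next = pending_head;
	pending_head = op;
}

/* On the deferred-complete event, release one request matching the ticket
 * and hand its scratch cookie back; returns 0 when nothing matches.
 */
static uint64_t complete_ticket(uint32_t ticket)
{
	struct pending_op **pp = &pending_head;

	while (*pp) {
		if ((*pp)->ticket == ticket) {
			struct pending_op *op = *pp;

			*pp = op->next;
			return op->scratch;
		}
		pp = &(*pp)->next;
	}
	return 0;
}

In the patch itself the equivalent list is spinlock-protected, because both the CCQ completion path and the AEQ handler walk it, and entries are appended to the tail so pending completions stay in chronological order.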
Signed-off-by: Krzysztof Czurylo Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 254 ++++++++++++++++++++++++++- drivers/infiniband/hw/irdma/defs.h | 15 ++ drivers/infiniband/hw/irdma/hw.c | 89 ++++++++-- drivers/infiniband/hw/irdma/main.h | 2 + drivers/infiniband/hw/irdma/protos.h | 1 + drivers/infiniband/hw/irdma/type.h | 43 ++++- drivers/infiniband/hw/irdma/utils.c | 50 +++++- 7 files changed, 439 insertions(+), 15 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 34875cb2ceff..e524b61e4759 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -2737,6 +2737,90 @@ static inline void irdma_get_cqp_reg_info(struct irdma_sc_cqp *cqp, u32 *val, *error = FIELD_GET(IRDMA_CQPTAIL_CQP_OP_ERR, *val); } +/** + * irdma_sc_cqp_def_cmpl_ae_handler - remove completed requests from pending list + * @dev: sc device struct + * @info: AE entry info + * @first: true if this is the first call to this handler for given AEQE + * @scratch: (out) scratch entry pointer + * @sw_def_info: (in/out) SW ticket value for this AE + * + * In case of AE_DEF_CMPL event, this function should be called in a loop + * until it returns NULL-ptr via scratch. + * For each call, it looks for a matching CQP request on pending list, + * removes it from the list and returns the pointer to the associated scratch + * entry. + * If this is the first call to this function for given AEQE, sw_def_info + * value is not used to find matching requests. Instead, it is populated + * with the value from the first matching cqp_request on the list. + * For subsequent calls, ooo_op->sw_def_info need to match the value passed + * by a caller. + * + * Return: scratch entry pointer for cqp_request to be released or NULL + * if no matching request is found. + */ +void irdma_sc_cqp_def_cmpl_ae_handler(struct irdma_sc_dev *dev, + struct irdma_aeqe_info *info, + bool first, u64 *scratch, + u32 *sw_def_info) +{ + struct irdma_ooo_cqp_op *ooo_op; + unsigned long flags; + + *scratch = 0; + + spin_lock_irqsave(&dev->cqp->ooo_list_lock, flags); + list_for_each_entry(ooo_op, &dev->cqp->ooo_pnd, list_entry) { + if (ooo_op->deferred && + ((first && ooo_op->def_info == info->def_info) || + (!first && ooo_op->sw_def_info == *sw_def_info))) { + *sw_def_info = ooo_op->sw_def_info; + *scratch = ooo_op->scratch; + + list_del(&ooo_op->list_entry); + list_add(&ooo_op->list_entry, &dev->cqp->ooo_avail); + atomic64_inc(&dev->cqp->completed_ops); + + break; + } + } + spin_unlock_irqrestore(&dev->cqp->ooo_list_lock, flags); + + if (first && !*scratch) + ibdev_dbg(to_ibdev(dev), + "AEQ: deferred completion with unknown ticket: def_info 0x%x\n", + info->def_info); +} + +/** + * irdma_sc_cqp_cleanup_handler - remove requests from pending list + * @dev: sc device struct + * + * This function should be called in a loop from irdma_cleanup_pending_cqp_op. + * For each call, it returns first CQP request on pending list, removes it + * from the list and returns the pointer to the associated scratch entry. + * + * Return: scratch entry pointer for cqp_request to be released or NULL + * if pending list is empty. 
+ */ +u64 irdma_sc_cqp_cleanup_handler(struct irdma_sc_dev *dev) +{ + struct irdma_ooo_cqp_op *ooo_op; + u64 scratch = 0; + + list_for_each_entry(ooo_op, &dev->cqp->ooo_pnd, list_entry) { + scratch = ooo_op->scratch; + + list_del(&ooo_op->list_entry); + list_add(&ooo_op->list_entry, &dev->cqp->ooo_avail); + atomic64_inc(&dev->cqp->completed_ops); + + break; + } + + return scratch; +} + /** * irdma_cqp_poll_registers - poll cqp registers * @cqp: struct for cqp hw @@ -3121,6 +3205,8 @@ void irdma_sc_remove_cq_ctx(struct irdma_sc_ceq *ceq, struct irdma_sc_cq *cq) int irdma_sc_cqp_init(struct irdma_sc_cqp *cqp, struct irdma_cqp_init_info *info) { + struct irdma_ooo_cqp_op *ooo_op; + u32 num_ooo_ops; u8 hw_sq_size; if (info->sq_size > IRDMA_CQP_SW_SQSIZE_2048 || @@ -3151,17 +3237,43 @@ int irdma_sc_cqp_init(struct irdma_sc_cqp *cqp, cqp->rocev2_rto_policy = info->rocev2_rto_policy; cqp->protocol_used = info->protocol_used; memcpy(&cqp->dcqcn_params, &info->dcqcn_params, sizeof(cqp->dcqcn_params)); + if (cqp->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + cqp->ooisc_blksize = info->ooisc_blksize; + cqp->rrsp_blksize = info->rrsp_blksize; + cqp->q1_blksize = info->q1_blksize; + cqp->xmit_blksize = info->xmit_blksize; + cqp->blksizes_valid = info->blksizes_valid; + cqp->ts_shift = info->ts_shift; + cqp->ts_override = info->ts_override; + cqp->en_fine_grained_timers = info->en_fine_grained_timers; + cqp->pe_en_vf_cnt = info->pe_en_vf_cnt; + cqp->ooo_op_array = info->ooo_op_array; + /* initialize the OOO lists */ + INIT_LIST_HEAD(&cqp->ooo_avail); + INIT_LIST_HEAD(&cqp->ooo_pnd); + if (cqp->ooo_op_array) { + /* Populate avail list entries */ + for (num_ooo_ops = 0, ooo_op = info->ooo_op_array; + num_ooo_ops < cqp->sq_size; + num_ooo_ops++, ooo_op++) + list_add(&ooo_op->list_entry, &cqp->ooo_avail); + } + } info->dev->cqp = cqp; IRDMA_RING_INIT(cqp->sq_ring, cqp->sq_size); + cqp->last_def_cmpl_ticket = 0; + cqp->sw_def_cmpl_ticket = 0; cqp->requested_ops = 0; atomic64_set(&cqp->completed_ops, 0); /* for the cqp commands backlog. 
*/ INIT_LIST_HEAD(&cqp->dev->cqp_cmd_head); writel(0, cqp->dev->hw_regs[IRDMA_CQPTAIL]); - writel(0, cqp->dev->hw_regs[IRDMA_CQPDB]); - writel(0, cqp->dev->hw_regs[IRDMA_CCQPSTATUS]); + if (cqp->dev->hw_attrs.uk_attrs.hw_rev <= IRDMA_GEN_2) { + writel(0, cqp->dev->hw_regs[IRDMA_CQPDB]); + writel(0, cqp->dev->hw_regs[IRDMA_CCQPSTATUS]); + } ibdev_dbg(to_ibdev(cqp->dev), "WQE: sq_size[%04d] hw_sq_size[%04d] sq_base[%p] sq_pa[%pK] cqp[%p] polarity[x%04x]\n", @@ -3193,6 +3305,7 @@ int irdma_sc_cqp_create(struct irdma_sc_cqp *cqp, u16 *maj_err, u16 *min_err) return -ENOMEM; spin_lock_init(&cqp->dev->cqp_lock); + spin_lock_init(&cqp->ooo_list_lock); temp = FIELD_PREP(IRDMA_CQPHC_SQSIZE, cqp->hw_sq_size) | FIELD_PREP(IRDMA_CQPHC_SVER, cqp->struct_ver) | @@ -3204,12 +3317,29 @@ int irdma_sc_cqp_create(struct irdma_sc_cqp *cqp, u16 *maj_err, u16 *min_err) FIELD_PREP(IRDMA_CQPHC_PROTOCOL_USED, cqp->protocol_used); } + if (hw_rev >= IRDMA_GEN_3) + temp |= FIELD_PREP(IRDMA_CQPHC_EN_FINE_GRAINED_TIMERS, + cqp->en_fine_grained_timers); set_64bit_val(cqp->host_ctx, 0, temp); set_64bit_val(cqp->host_ctx, 8, cqp->sq_pa); temp = FIELD_PREP(IRDMA_CQPHC_ENABLED_VFS, cqp->ena_vf_count) | FIELD_PREP(IRDMA_CQPHC_HMC_PROFILE, cqp->hmc_profile); + + if (hw_rev >= IRDMA_GEN_3) + temp |= FIELD_PREP(IRDMA_CQPHC_OOISC_BLKSIZE, + cqp->ooisc_blksize) | + FIELD_PREP(IRDMA_CQPHC_RRSP_BLKSIZE, + cqp->rrsp_blksize) | + FIELD_PREP(IRDMA_CQPHC_Q1_BLKSIZE, cqp->q1_blksize) | + FIELD_PREP(IRDMA_CQPHC_XMIT_BLKSIZE, + cqp->xmit_blksize) | + FIELD_PREP(IRDMA_CQPHC_BLKSIZES_VALID, + cqp->blksizes_valid) | + FIELD_PREP(IRDMA_CQPHC_TIMESTAMP_OVERRIDE, + cqp->ts_override) | + FIELD_PREP(IRDMA_CQPHC_TS_SHIFT, cqp->ts_shift); set_64bit_val(cqp->host_ctx, 16, temp); set_64bit_val(cqp->host_ctx, 24, (uintptr_t)cqp); temp = FIELD_PREP(IRDMA_CQPHC_HW_MAJVER, cqp->hw_maj_ver) | @@ -3370,6 +3500,87 @@ void irdma_sc_ccq_arm(struct irdma_sc_cq *ccq) writel(ccq->cq_uk.cq_id, ccq->dev->cq_arm_db); } +/** + * irdma_sc_process_def_cmpl - process deferred or pending completion + * @cqp: CQP sc struct + * @info: CQP CQE info + * @wqe_idx: CQP WQE descriptor index + * @def_info: deferred op ticket value or out-of-order completion id + * @def_cmpl: true for deferred completion, false for pending (RCA) + */ +static void irdma_sc_process_def_cmpl(struct irdma_sc_cqp *cqp, + struct irdma_ccq_cqe_info *info, + u32 wqe_idx, u32 def_info, bool def_cmpl) +{ + struct irdma_ooo_cqp_op *ooo_op; + unsigned long flags; + + /* Deferred and out-of-order completions share the same list of pending + * completions. Since the list can be also accessed from AE handler, + * it must be protected by a lock. + */ + spin_lock_irqsave(&cqp->ooo_list_lock, flags); + + /* For deferred completions bump up SW completion ticket value. */ + if (def_cmpl) { + cqp->last_def_cmpl_ticket = def_info; + cqp->sw_def_cmpl_ticket++; + } + if (!list_empty(&cqp->ooo_avail)) { + ooo_op = (struct irdma_ooo_cqp_op *) + list_entry(cqp->ooo_avail.next, + struct irdma_ooo_cqp_op, list_entry); + + list_del(&ooo_op->list_entry); + ooo_op->scratch = info->scratch; + ooo_op->def_info = def_info; + ooo_op->sw_def_info = cqp->sw_def_cmpl_ticket; + ooo_op->deferred = def_cmpl; + ooo_op->wqe_idx = wqe_idx; + /* Pending completions must be chronologically ordered, + * so adding at the end of list. 
+ */ + list_add_tail(&ooo_op->list_entry, &cqp->ooo_pnd); + } + spin_unlock_irqrestore(&cqp->ooo_list_lock, flags); + + info->pending = true; +} + +/** + * irdma_sc_process_ooo_cmpl - process out-of-order (final) completion + * @cqp: CQP sc struct + * @info: CQP CQE info + * @def_info: out-of-order completion id + */ +static void irdma_sc_process_ooo_cmpl(struct irdma_sc_cqp *cqp, + struct irdma_ccq_cqe_info *info, + u32 def_info) +{ + struct irdma_ooo_cqp_op *ooo_op_tmp; + struct irdma_ooo_cqp_op *ooo_op; + unsigned long flags; + + info->scratch = 0; + + spin_lock_irqsave(&cqp->ooo_list_lock, flags); + list_for_each_entry_safe(ooo_op, ooo_op_tmp, &cqp->ooo_pnd, + list_entry) { + if (!ooo_op->deferred && ooo_op->def_info == def_info) { + list_del(&ooo_op->list_entry); + info->scratch = ooo_op->scratch; + list_add(&ooo_op->list_entry, &cqp->ooo_avail); + break; + } + } + spin_unlock_irqrestore(&cqp->ooo_list_lock, flags); + + if (!info->scratch) + ibdev_dbg(to_ibdev(cqp->dev), + "CQP: DEBUG_FW_OOO out-of-order completion with unknown def_info = 0x%x\n", + def_info); +} + /** * irdma_sc_ccq_get_cqe_info - get ccq's cq entry * @ccq: ccq sc struct @@ -3378,6 +3589,10 @@ void irdma_sc_ccq_arm(struct irdma_sc_cq *ccq) int irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, struct irdma_ccq_cqe_info *info) { + u32 def_info; + bool def_cmpl = false; + bool pend_cmpl = false; + bool ooo_final_cmpl = false; u64 qp_ctx, temp, temp1; __le64 *cqe; struct irdma_sc_cqp *cqp; @@ -3385,6 +3600,7 @@ int irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, u32 error; u8 polarity; int ret_code = 0; + unsigned long flags; if (ccq->cq_uk.avoid_mem_cflct) cqe = IRDMA_GET_CURRENT_EXTENDED_CQ_ELEM(&ccq->cq_uk); @@ -3416,6 +3632,25 @@ int irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, get_64bit_val(cqe, 16, &temp1); info->op_ret_val = (u32)FIELD_GET(IRDMA_CCQ_OPRETVAL, temp1); + if (cqp->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + def_cmpl = info->maj_err_code == IRDMA_CQPSQ_MAJ_NO_ERROR && + info->min_err_code == IRDMA_CQPSQ_MIN_DEF_CMPL; + def_info = (u32)FIELD_GET(IRDMA_CCQ_DEFINFO, temp1); + + pend_cmpl = info->maj_err_code == IRDMA_CQPSQ_MAJ_NO_ERROR && + info->min_err_code == IRDMA_CQPSQ_MIN_OOO_CMPL; + + ooo_final_cmpl = (bool)FIELD_GET(IRDMA_OOO_CMPL, temp); + + if (def_cmpl || pend_cmpl || ooo_final_cmpl) { + if (ooo_final_cmpl) + irdma_sc_process_ooo_cmpl(cqp, info, def_info); + else + irdma_sc_process_def_cmpl(cqp, info, wqe_idx, + def_info, def_cmpl); + } + } + get_64bit_val(cqp->sq_base[wqe_idx].elem, 24, &temp1); info->op_code = (u8)FIELD_GET(IRDMA_CQPSQ_OPCODE, temp1); info->cqp = cqp; @@ -3432,7 +3667,16 @@ int irdma_sc_ccq_get_cqe_info(struct irdma_sc_cq *ccq, dma_wmb(); /* make sure shadow area is updated before moving tail */ - IRDMA_RING_MOVE_TAIL(cqp->sq_ring); + spin_lock_irqsave(&cqp->dev->cqp_lock, flags); + if (!ooo_final_cmpl) + IRDMA_RING_MOVE_TAIL(cqp->sq_ring); + spin_unlock_irqrestore(&cqp->dev->cqp_lock, flags); + + /* Do not increment completed_ops counter on pending or deferred + * completions. 
+ */ + if (pend_cmpl || def_cmpl) + return ret_code; atomic64_inc(&cqp->completed_ops); return ret_code; @@ -4118,6 +4362,10 @@ int irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, info->compl_ctx = compl_ctx << 1; ae_src = IRDMA_AE_SOURCE_RSVD; break; + case IRDMA_AE_CQP_DEFERRED_COMPLETE: + info->def_info = info->wqe_idx; + ae_src = IRDMA_AE_SOURCE_RSVD; + break; case IRDMA_AE_ROCE_EMPTY_MCG: case IRDMA_AE_ROCE_BAD_MC_IP_ADDR: case IRDMA_AE_ROCE_BAD_MC_QPID: diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index fe75737554b2..5e4d62cb551e 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -367,6 +367,7 @@ enum irdma_cqp_op_type { #define IRDMA_AE_LCE_FUNCTION_CATASTROPHIC 0x0701 #define IRDMA_AE_LCE_CQ_CATASTROPHIC 0x0702 #define IRDMA_AE_QP_SUSPEND_COMPLETE 0x0900 +#define IRDMA_AE_CQP_DEFERRED_COMPLETE 0x0901 #define FLD_LS_64(dev, val, field) \ (((u64)(val) << (dev)->hw_shifts[field ## _S]) & (dev)->hw_masks[field ## _M]) @@ -465,6 +466,16 @@ enum irdma_cqp_op_type { #define IRDMA_CQPHC_SVER GENMASK_ULL(31, 24) #define IRDMA_CQPHC_SQBASE GENMASK_ULL(63, 9) +#define IRDMA_CQPHC_TIMESTAMP_OVERRIDE BIT_ULL(5) +#define IRDMA_CQPHC_TS_SHIFT GENMASK_ULL(12, 8) +#define IRDMA_CQPHC_EN_FINE_GRAINED_TIMERS BIT_ULL(0) + +#define IRDMA_CQPHC_OOISC_BLKSIZE GENMASK_ULL(63, 60) +#define IRDMA_CQPHC_RRSP_BLKSIZE GENMASK_ULL(59, 56) +#define IRDMA_CQPHC_Q1_BLKSIZE GENMASK_ULL(55, 52) +#define IRDMA_CQPHC_XMIT_BLKSIZE GENMASK_ULL(51, 48) +#define IRDMA_CQPHC_BLKSIZES_VALID BIT_ULL(4) + #define IRDMA_CQPHC_QPCTX GENMASK_ULL(63, 0) #define IRDMA_QP_DBSA_HW_SQ_TAIL GENMASK_ULL(14, 0) #define IRDMA_CQ_DBSA_CQEIDX GENMASK_ULL(19, 0) @@ -478,6 +489,8 @@ enum irdma_cqp_op_type { #define IRDMA_CCQ_OPRETVAL GENMASK_ULL(31, 0) +#define IRDMA_CCQ_DEFINFO GENMASK_ULL(63, 32) + #define IRDMA_CQ_MINERR GENMASK_ULL(15, 0) #define IRDMA_CQ_MAJERR GENMASK_ULL(31, 16) #define IRDMA_CQ_WQEIDX GENMASK_ULL(46, 32) @@ -715,6 +728,8 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_MIN_STAG_INVALID 0x0001 #define IRDMA_CQPSQ_MIN_SUSPEND_PND 0x0005 +#define IRDMA_CQPSQ_MIN_DEF_CMPL 0x0006 +#define IRDMA_CQPSQ_MIN_OOO_CMPL 0x0007 #define IRDMA_CQPSQ_MAJ_NO_ERROR 0x0000 #define IRDMA_CQPSQ_MAJ_OBJCACHE_ERROR 0xF000 diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c index 288131466a19..55b10a8b6fd3 100644 --- a/drivers/infiniband/hw/irdma/hw.c +++ b/drivers/infiniband/hw/irdma/hw.c @@ -207,6 +207,51 @@ static void irdma_set_flush_fields(struct irdma_sc_qp *qp, } } +/** + * irdma_complete_cqp_request - perform post-completion cleanup + * @cqp: device CQP + * @cqp_request: CQP request + * + * Mark CQP request as done, wake up waiting thread or invoke + * callback function and release/free CQP request. 
+ */ +static void irdma_complete_cqp_request(struct irdma_cqp *cqp, + struct irdma_cqp_request *cqp_request) +{ + if (cqp_request->waiting) { + WRITE_ONCE(cqp_request->request_done, true); + wake_up(&cqp_request->waitq); + } else if (cqp_request->callback_fcn) { + cqp_request->callback_fcn(cqp_request); + } + irdma_put_cqp_request(cqp, cqp_request); +} + +/** + * irdma_process_ae_def_cmpl - handle IRDMA_AE_CQP_DEFERRED_COMPLETE event + * @rf: RDMA PCI function + * @info: AEQ entry info + */ +static void irdma_process_ae_def_cmpl(struct irdma_pci_f *rf, + struct irdma_aeqe_info *info) +{ + u32 sw_def_info; + u64 scratch; + + irdma_cqp_ce_handler(rf, &rf->ccq.sc_cq); + + irdma_sc_cqp_def_cmpl_ae_handler(&rf->sc_dev, info, true, + &scratch, &sw_def_info); + while (scratch) { + struct irdma_cqp_request *cqp_request = + (struct irdma_cqp_request *)(uintptr_t)scratch; + + irdma_complete_cqp_request(&rf->cqp, cqp_request); + irdma_sc_cqp_def_cmpl_ae_handler(&rf->sc_dev, info, false, + &scratch, &sw_def_info); + } +} + /** * irdma_process_aeq - handle aeq events * @rf: RDMA PCI function @@ -269,7 +314,8 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) spin_unlock_irqrestore(&iwqp->lock, flags); ctx_info = &iwqp->ctx_info; } else { - if (info->ae_id != IRDMA_AE_CQ_OPERATION_ERROR) + if (info->ae_id != IRDMA_AE_CQ_OPERATION_ERROR && + info->ae_id != IRDMA_AE_CQP_DEFERRED_COMPLETE) continue; } @@ -364,6 +410,12 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) } irdma_cq_rem_ref(&iwcq->ibcq); break; + case IRDMA_AE_CQP_DEFERRED_COMPLETE: + /* Remove completed CQP requests from pending list + * and notify about those CQP ops completion. + */ + irdma_process_ae_def_cmpl(rf, info); + break; case IRDMA_AE_RESET_NOT_SENT: case IRDMA_AE_LLP_DOUBT_REACHABILITY: case IRDMA_AE_RESOURCE_EXHAUSTION: @@ -602,6 +654,8 @@ static void irdma_destroy_cqp(struct irdma_pci_f *rf) dma_free_coherent(dev->hw->device, cqp->sq.size, cqp->sq.va, cqp->sq.pa); cqp->sq.va = NULL; + kfree(cqp->oop_op_array); + cqp->oop_op_array = NULL; kfree(cqp->scratch_array); cqp->scratch_array = NULL; kfree(cqp->cqp_requests); @@ -945,6 +999,13 @@ static int irdma_create_cqp(struct irdma_pci_f *rf) goto err_scratch; } + cqp->oop_op_array = kcalloc(sqsize, sizeof(*cqp->oop_op_array), + GFP_KERNEL); + if (!cqp->oop_op_array) { + status = -ENOMEM; + goto err_oop; + } + cqp_init_info.ooo_op_array = cqp->oop_op_array; dev->cqp = &cqp->sc_cqp; dev->cqp->dev = dev; cqp->sq.size = ALIGN(sizeof(struct irdma_cqp_sq_wqe) * sqsize, @@ -981,6 +1042,10 @@ static int irdma_create_cqp(struct irdma_pci_f *rf) case IRDMA_GEN_2: cqp_init_info.hw_maj_ver = IRDMA_CQPHC_HW_MAJVER_GEN_2; break; + case IRDMA_GEN_3: + cqp_init_info.hw_maj_ver = IRDMA_CQPHC_HW_MAJVER_GEN_3; + cqp_init_info.ts_override = 1; + break; } status = irdma_sc_cqp_init(dev->cqp, &cqp_init_info); if (status) { @@ -1015,6 +1080,9 @@ static int irdma_create_cqp(struct irdma_pci_f *rf) cqp->sq.va, cqp->sq.pa); cqp->sq.va = NULL; err_sq: + kfree(cqp->oop_op_array); + cqp->oop_op_array = NULL; +err_oop: kfree(cqp->scratch_array); cqp->scratch_array = NULL; err_scratch: @@ -2106,15 +2174,16 @@ void irdma_cqp_ce_handler(struct irdma_pci_f *rf, struct irdma_sc_cq *cq) cqp_request->compl_info.op_ret_val = info.op_ret_val; cqp_request->compl_info.error = info.error; - if (cqp_request->waiting) { - WRITE_ONCE(cqp_request->request_done, true); - wake_up(&cqp_request->waitq); - irdma_put_cqp_request(&rf->cqp, cqp_request); - } else { - if (cqp_request->callback_fcn) - 
cqp_request->callback_fcn(cqp_request); - irdma_put_cqp_request(&rf->cqp, cqp_request); - } + /* + * If this is deferred or pending completion, then mark + * CQP request as pending to not block the CQ, but don't + * release CQP request, as it is still on the OOO list. + */ + if (info.pending) + cqp_request->pending = true; + else + irdma_complete_cqp_request(&rf->cqp, + cqp_request); } cqe_count++; diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index a7f3d197a390..5d1371891c4c 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -167,6 +167,7 @@ struct irdma_cqp_request { bool request_done; /* READ/WRITE_ONCE macros operate on it */ bool waiting:1; bool dynamic:1; + bool pending:1; }; struct irdma_cqp { @@ -179,6 +180,7 @@ struct irdma_cqp { struct irdma_dma_mem host_ctx; u64 *scratch_array; struct irdma_cqp_request *cqp_requests; + struct irdma_ooo_cqp_op *oop_op_array; struct list_head cqp_avail_reqs; struct list_head cqp_pending_reqs; }; diff --git a/drivers/infiniband/hw/irdma/protos.h b/drivers/infiniband/hw/irdma/protos.h index d7c8ea948bcd..fac823a1ac1e 100644 --- a/drivers/infiniband/hw/irdma/protos.h +++ b/drivers/infiniband/hw/irdma/protos.h @@ -10,6 +10,7 @@ #define ALL_TC2PFC 0xff #define CQP_COMPL_WAIT_TIME_MS 10 #define CQP_TIMEOUT_THRESHOLD 500 +#define CQP_DEF_CMPL_TIMEOUT_THRESHOLD 2500 /* init operations */ int irdma_sc_dev_init(enum irdma_vers ver, struct irdma_sc_dev *dev, diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index cfcb5d938d76..2b93a70432be 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -262,12 +262,22 @@ struct irdma_cqp_init_info { __le64 *host_ctx; u64 *scratch_array; u32 sq_size; + struct irdma_ooo_cqp_op *ooo_op_array; + u32 pe_en_vf_cnt; u16 hw_maj_ver; u16 hw_min_ver; u8 struct_ver; u8 hmc_profile; u8 ena_vf_count; u8 ceqs_per_vf; + u8 ooisc_blksize; + u8 rrsp_blksize; + u8 q1_blksize; + u8 xmit_blksize; + u8 ts_override; + u8 ts_shift; + u8 en_fine_grained_timers; + u8 blksizes_valid; bool en_datacenter_tcp:1; bool disable_packed:1; bool rocev2_rto_policy:1; @@ -392,7 +402,21 @@ struct irdma_cqp_quanta { __le64 elem[IRDMA_CQP_WQE_SIZE]; }; +struct irdma_ooo_cqp_op { + struct list_head list_entry; + u64 scratch; + u32 def_info; + u32 sw_def_info; + u32 wqe_idx; + bool deferred:1; +}; + struct irdma_sc_cqp { + spinlock_t ooo_list_lock; /* protects list of pending completions */ + struct list_head ooo_avail; + struct list_head ooo_pnd; + u32 last_def_cmpl_ticket; + u32 sw_def_cmpl_ticket; u32 size; u64 sq_pa; u64 host_ctx_pa; @@ -408,8 +432,10 @@ struct irdma_sc_cqp { u64 *scratch_array; u64 requested_ops; atomic64_t completed_ops; + struct irdma_ooo_cqp_op *ooo_op_array; u32 cqp_id; u32 sq_size; + u32 pe_en_vf_cnt; u32 hw_sq_size; u16 hw_maj_ver; u16 hw_min_ver; @@ -419,6 +445,14 @@ struct irdma_sc_cqp { u8 ena_vf_count; u8 timeout_count; u8 ceqs_per_vf; + u8 ooisc_blksize; + u8 rrsp_blksize; + u8 q1_blksize; + u8 xmit_blksize; + u8 ts_override; + u8 ts_shift; + u8 en_fine_grained_timers; + u8 blksizes_valid; bool en_datacenter_tcp:1; bool disable_packed:1; bool rocev2_rto_policy:1; @@ -723,7 +757,8 @@ struct irdma_ccq_cqe_info { u16 maj_err_code; u16 min_err_code; u8 op_code; - bool error; + bool error:1; + bool pending:1; }; struct irdma_dcb_app_info { @@ -998,6 +1033,7 @@ struct irdma_qp_host_ctx_info { struct irdma_aeqe_info { u64 compl_ctx; u32 qp_cq_id; + u32 def_info; /* only valid for 
DEF_CMPL */ u16 ae_id; u16 wqe_idx; u8 tcp_state; @@ -1242,6 +1278,11 @@ void irdma_sc_pd_init(struct irdma_sc_dev *dev, struct irdma_sc_pd *pd, u32 pd_i void irdma_cfg_aeq(struct irdma_sc_dev *dev, u32 idx, bool enable); void irdma_check_cqp_progress(struct irdma_cqp_timeout *cqp_timeout, struct irdma_sc_dev *dev); +void irdma_sc_cqp_def_cmpl_ae_handler(struct irdma_sc_dev *dev, + struct irdma_aeqe_info *info, + bool first, u64 *scratch, + u32 *sw_def_info); +u64 irdma_sc_cqp_cleanup_handler(struct irdma_sc_dev *dev); int irdma_sc_cqp_create(struct irdma_sc_cqp *cqp, u16 *maj_err, u16 *min_err); int irdma_sc_cqp_destroy(struct irdma_sc_cqp *cqp); int irdma_sc_cqp_init(struct irdma_sc_cqp *cqp, diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c index 0422787592d8..e940d32c9dbb 100644 --- a/drivers/infiniband/hw/irdma/utils.c +++ b/drivers/infiniband/hw/irdma/utils.c @@ -484,6 +484,7 @@ void irdma_free_cqp_request(struct irdma_cqp *cqp, WRITE_ONCE(cqp_request->request_done, false); cqp_request->callback_fcn = NULL; cqp_request->waiting = false; + cqp_request->pending = false; spin_lock_irqsave(&cqp->req_lock, flags); list_add_tail(&cqp_request->list, &cqp->cqp_avail_reqs); @@ -523,6 +524,22 @@ irdma_free_pending_cqp_request(struct irdma_cqp *cqp, irdma_put_cqp_request(cqp, cqp_request); } +/** + * irdma_cleanup_deferred_cqp_ops - clean-up cqp with no completions + * @dev: sc_dev + * @cqp: cqp + */ +static void irdma_cleanup_deferred_cqp_ops(struct irdma_sc_dev *dev, + struct irdma_cqp *cqp) +{ + u64 scratch; + + /* process all CQP requests with deferred/pending completions */ + while ((scratch = irdma_sc_cqp_cleanup_handler(dev))) + irdma_free_pending_cqp_request(cqp, (struct irdma_cqp_request *) + (uintptr_t)scratch); +} + /** * irdma_cleanup_pending_cqp_op - clean-up cqp with no * completions @@ -536,6 +553,8 @@ void irdma_cleanup_pending_cqp_op(struct irdma_pci_f *rf) struct cqp_cmds_info *pcmdinfo = NULL; u32 i, pending_work, wqe_idx; + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + irdma_cleanup_deferred_cqp_ops(dev, cqp); pending_work = IRDMA_RING_USED_QUANTA(cqp->sc_cqp.sq_ring); wqe_idx = IRDMA_RING_CURRENT_TAIL(cqp->sc_cqp.sq_ring); for (i = 0; i < pending_work; i++) { @@ -555,6 +574,26 @@ void irdma_cleanup_pending_cqp_op(struct irdma_pci_f *rf) } } +static int irdma_get_timeout_threshold(struct irdma_sc_dev *dev) +{ + u16 time_s = dev->vc_caps.cqp_timeout_s; + + if (!time_s) + return CQP_TIMEOUT_THRESHOLD; + + return time_s * 1000 / dev->hw_attrs.max_cqp_compl_wait_time_ms; +} + +static int irdma_get_def_timeout_threshold(struct irdma_sc_dev *dev) +{ + u16 time_s = dev->vc_caps.cqp_def_timeout_s; + + if (!time_s) + return CQP_DEF_CMPL_TIMEOUT_THRESHOLD; + + return time_s * 1000 / dev->hw_attrs.max_cqp_compl_wait_time_ms; +} + /** * irdma_wait_event - wait for completion * @rf: RDMA PCI function @@ -564,6 +603,7 @@ static int irdma_wait_event(struct irdma_pci_f *rf, struct irdma_cqp_request *cqp_request) { struct irdma_cqp_timeout cqp_timeout = {}; + int timeout_threshold = irdma_get_timeout_threshold(&rf->sc_dev); bool cqp_error = false; int err_code = 0; @@ -575,9 +615,17 @@ static int irdma_wait_event(struct irdma_pci_f *rf, msecs_to_jiffies(CQP_COMPL_WAIT_TIME_MS))) break; + if (cqp_request->pending) + /* There was a deferred or pending completion + * received for this CQP request, so we need + * to wait longer than usual. 
+ */ + timeout_threshold = + irdma_get_def_timeout_threshold(&rf->sc_dev); + irdma_check_cqp_progress(&cqp_timeout, &rf->sc_dev); - if (cqp_timeout.count < CQP_TIMEOUT_THRESHOLD) + if (cqp_timeout.count < timeout_threshold) continue; if (!rf->reset) { From patchwork Sat Aug 24 03:19:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776204 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A82DC282F7; Sat, 24 Aug 2024 03:20:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469653; cv=none; b=rZ+Xye64VVJGTNMn07kRsc3nk3HeeosofGKM3LaY4Z67kCFV6S5UXm5URUcPzmd9EgWm1iLpB4nmoP+c8hbVsDQ9AoaUGGmy8VnTAQQ1tUX1hJ/KZn41Q3EzvIJmf4TiLGduA9Giepv5pJeNBo3MPYIZAXzfrME81Zk5uZvMx38= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469653; c=relaxed/simple; bh=V9DnkIMd9gSUSMTFb5U1rZGWLqTGGhdOFTlnEIOuakY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=fEi56fJ/LKqzztPPRPyBEFosqYvqxDkCiKYkYY+rys83fGGDQVrVcIrfhUx/130nKJ766MRK5NWPlDdcU+DfIK46USOsOdEPEq9MMaVhtc6V4mnknsXE2Zr4iXz6WwdGtoR8NmEgQnA+kW3wpfVa8i/wSdWoonu4A/+jSwIBI30= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=NYd5JbVM; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NYd5JbVM" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469650; x=1756005650; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=V9DnkIMd9gSUSMTFb5U1rZGWLqTGGhdOFTlnEIOuakY=; b=NYd5JbVM30lByMlRlHe7EgI5T9weekQoEV6fH6+zkshlSplqi6ZRQBvv rzcyU520auktmsz2Mea4OsNMUzexTp3eYdxHwQTCnEermJUtNHPg1AJjt 7vT693KXOaHuHkSmj2XXD8i1i7vzAULVdamZha294Y5qe3X49QkaeJmYA 67+l34jP1okMuSgdtcUQRpIDl7GlRol7J+vyY0AyhgsZzZWSHjOQW5ilb sYGdzMw4oVfZn+P7nE1g1T5G/Kg/bxU+iGiIq3y/IGGFRgKoFnpTRju9F E7vha5RTFxUvhKKxGQrI8CDACkraqBFJ6LuJr7ircg0vpFAshBNu8kybR Q==; X-CSE-ConnectionGUID: XunZ0WMLSe+0OCWpcpzd3g== X-CSE-MsgGUID: VT9DirqjTwa1PxYf6ROIHA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187807" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187807" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:46 -0700 X-CSE-ConnectionGUID: dwBH3tSiSc2d3R3aRejQ2w== X-CSE-MsgGUID: S2XABfI2Td6S/MW740e3/g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492115" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:45 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz 
Saleem , Tatyana Nikolova Subject: [RFC v2 14/25] RDMA/irdma: Add GEN3 support for AEQ and CEQ Date: Fri, 23 Aug 2024 22:19:13 -0500 Message-Id: <20240824031924.421-15-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Shiraz Saleem Extend support for GEN3 devices by programming the necessary hardware IRQ registers and the updated descriptor fields for the Asynchronous Event Queue (AEQ) and Completion Event Queue (CEQ). Introduce a RDMA virtual channel operation with the Control Plane (CP) to associate interrupt vectors appropriately with AEQ and CEQ. Add new Asynchronous Event (AE) definitions specific to GEN3. Additionally, refactor the AEQ and CEQ setup into the irdma_ctrl_init_hw device control initialization routine. This completes the PCI device level initialization for RDMA in the core driver. Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 76 ++++++++++--- drivers/infiniband/hw/irdma/defs.h | 29 ++++- drivers/infiniband/hw/irdma/hw.c | 130 +++++++++++++---------- drivers/infiniband/hw/irdma/ig3rdma_hw.c | 45 ++++++++ drivers/infiniband/hw/irdma/irdma.h | 11 +- drivers/infiniband/hw/irdma/main.h | 6 +- drivers/infiniband/hw/irdma/type.h | 11 +- drivers/infiniband/hw/irdma/virtchnl.c | 84 +++++++++++++++ drivers/infiniband/hw/irdma/virtchnl.h | 19 ++++ 9 files changed, 338 insertions(+), 73 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index e524b61e4759..5a5d47cf1854 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -2562,6 +2562,9 @@ static int irdma_sc_cq_create(struct irdma_sc_cq *cq, u64 scratch, FIELD_PREP(IRDMA_CQPSQ_CQ_LPBLSIZE, cq->pbl_chunk_size) | FIELD_PREP(IRDMA_CQPSQ_CQ_CHKOVERFLOW, check_overflow) | FIELD_PREP(IRDMA_CQPSQ_CQ_VIRTMAP, cq->virtual_map) | + FIELD_PREP(IRDMA_CQPSQ_CQ_CQID_HIGH, cq->cq_uk.cq_id >> 22) | + FIELD_PREP(IRDMA_CQPSQ_CQ_CEQID_HIGH, + (cq->ceq_id_valid ? cq->ceq_id : 0) >> 10) | FIELD_PREP(IRDMA_CQPSQ_CQ_ENCEQEMASK, cq->ceqe_mask) | FIELD_PREP(IRDMA_CQPSQ_CQ_CEQIDVALID, cq->ceq_id_valid) | FIELD_PREP(IRDMA_CQPSQ_TPHEN, cq->tph_en) | @@ -3924,7 +3927,7 @@ int irdma_sc_ceq_init(struct irdma_sc_ceq *ceq, ceq->pbl_list = (ceq->virtual_map ? info->pbl_list : NULL); ceq->tph_en = info->tph_en; ceq->tph_val = info->tph_val; - ceq->vsi = info->vsi; + ceq->vsi_idx = info->vsi_idx; ceq->polarity = 1; IRDMA_RING_INIT(ceq->ceq_ring, ceq->elem_cnt); ceq->dev->ceq[info->ceq_id] = ceq; @@ -3957,13 +3960,16 @@ static int irdma_sc_ceq_create(struct irdma_sc_ceq *ceq, u64 scratch, (ceq->virtual_map ? 
ceq->first_pm_pbl_idx : 0)); set_64bit_val(wqe, 56, FIELD_PREP(IRDMA_CQPSQ_TPHVAL, ceq->tph_val) | - FIELD_PREP(IRDMA_CQPSQ_VSIIDX, ceq->vsi->vsi_idx)); + FIELD_PREP(IRDMA_CQPSQ_PASID, ceq->pasid) | + FIELD_PREP(IRDMA_CQPSQ_VSIIDX, ceq->vsi_idx)); hdr = FIELD_PREP(IRDMA_CQPSQ_CEQ_CEQID, ceq->ceq_id) | + FIELD_PREP(IRDMA_CQPSQ_CEQ_CEQID_HIGH, ceq->ceq_id >> 10) | FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_CREATE_CEQ) | FIELD_PREP(IRDMA_CQPSQ_CEQ_LPBLSIZE, ceq->pbl_chunk_size) | FIELD_PREP(IRDMA_CQPSQ_CEQ_VMAP, ceq->virtual_map) | FIELD_PREP(IRDMA_CQPSQ_CEQ_ITRNOEXPIRE, ceq->itr_no_expire) | FIELD_PREP(IRDMA_CQPSQ_TPHEN, ceq->tph_en) | + FIELD_PREP(IRDMA_CQPSQ_PASID_VALID, ceq->pasid_valid) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ @@ -4018,7 +4024,7 @@ int irdma_sc_cceq_create(struct irdma_sc_ceq *ceq, u64 scratch) int ret_code; struct irdma_sc_dev *dev = ceq->dev; - dev->ccq->vsi = ceq->vsi; + dev->ccq->vsi_idx = ceq->vsi_idx; if (ceq->reg_cq) { ret_code = irdma_sc_add_cq_ctx(ceq, ceq->dev->ccq); if (ret_code) @@ -4051,11 +4057,14 @@ int irdma_sc_ceq_destroy(struct irdma_sc_ceq *ceq, u64 scratch, bool post_sq) set_64bit_val(wqe, 16, ceq->elem_cnt); set_64bit_val(wqe, 48, ceq->first_pm_pbl_idx); + set_64bit_val(wqe, 56, + FIELD_PREP(IRDMA_CQPSQ_PASID, ceq->pasid)); hdr = ceq->ceq_id | FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_DESTROY_CEQ) | FIELD_PREP(IRDMA_CQPSQ_CEQ_LPBLSIZE, ceq->pbl_chunk_size) | FIELD_PREP(IRDMA_CQPSQ_CEQ_VMAP, ceq->virtual_map) | FIELD_PREP(IRDMA_CQPSQ_TPHEN, ceq->tph_en) | + FIELD_PREP(IRDMA_CQPSQ_PASID_VALID, ceq->pasid_valid) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ @@ -4219,10 +4228,13 @@ static int irdma_sc_aeq_create(struct irdma_sc_aeq *aeq, u64 scratch, (aeq->virtual_map ? 0 : aeq->aeq_elem_pa)); set_64bit_val(wqe, 48, (aeq->virtual_map ? 
aeq->first_pm_pbl_idx : 0)); + set_64bit_val(wqe, 56, + FIELD_PREP(IRDMA_CQPSQ_PASID, aeq->pasid)); hdr = FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_CREATE_AEQ) | FIELD_PREP(IRDMA_CQPSQ_AEQ_LPBLSIZE, aeq->pbl_chunk_size) | FIELD_PREP(IRDMA_CQPSQ_AEQ_VMAP, aeq->virtual_map) | + FIELD_PREP(IRDMA_CQPSQ_PASID_VALID, aeq->pasid_valid) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ @@ -4251,7 +4263,8 @@ static int irdma_sc_aeq_destroy(struct irdma_sc_aeq *aeq, u64 scratch, u64 hdr; dev = aeq->dev; - writel(0, dev->hw_regs[IRDMA_PFINT_AEQCTL]); + if (dev->privileged) + writel(0, dev->hw_regs[IRDMA_PFINT_AEQCTL]); cqp = dev->cqp; wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); @@ -4259,9 +4272,12 @@ static int irdma_sc_aeq_destroy(struct irdma_sc_aeq *aeq, u64 scratch, return -ENOMEM; set_64bit_val(wqe, 16, aeq->elem_cnt); set_64bit_val(wqe, 48, aeq->first_pm_pbl_idx); + set_64bit_val(wqe, 56, + FIELD_PREP(IRDMA_CQPSQ_PASID, aeq->pasid)); hdr = FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_DESTROY_AEQ) | FIELD_PREP(IRDMA_CQPSQ_AEQ_LPBLSIZE, aeq->pbl_chunk_size) | FIELD_PREP(IRDMA_CQPSQ_AEQ_VMAP, aeq->virtual_map) | + FIELD_PREP(IRDMA_CQPSQ_PASID_VALID, aeq->pasid_valid) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ @@ -4302,18 +4318,39 @@ int irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, print_hex_dump_debug("WQE: AEQ_ENTRY WQE", DUMP_PREFIX_OFFSET, 16, 8, aeqe, 16, false); - ae_src = (u8)FIELD_GET(IRDMA_AEQE_AESRC, temp); - info->wqe_idx = (u16)FIELD_GET(IRDMA_AEQE_WQDESCIDX, temp); - info->qp_cq_id = (u32)FIELD_GET(IRDMA_AEQE_QPCQID_LOW, temp) | + if (aeq->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + ae_src = (u8)FIELD_GET(IRDMA_AEQE_AESRC_GEN_3, temp); + info->wqe_idx = (u16)FIELD_GET(IRDMA_AEQE_WQDESCIDX_GEN_3, + temp); + info->qp_cq_id = (u32)FIELD_GET(IRDMA_AEQE_QPCQID_GEN_3, temp); + info->ae_id = (u16)FIELD_GET(IRDMA_AEQE_AECODE_GEN_3, temp); + info->tcp_state = (u8)FIELD_GET(IRDMA_AEQE_TCPSTATE_GEN_3, compl_ctx); + info->iwarp_state = (u8)FIELD_GET(IRDMA_AEQE_IWSTATE_GEN_3, temp); + info->q2_data_written = (u8)FIELD_GET(IRDMA_AEQE_Q2DATA_GEN_3, compl_ctx); + info->aeqe_overflow = (bool)FIELD_GET(IRDMA_AEQE_OVERFLOW_GEN_3, temp); + info->compl_ctx = FIELD_GET(IRDMA_AEQE_CMPL_CTXT, compl_ctx); + compl_ctx = FIELD_GET(IRDMA_AEQE_CMPL_CTXT, compl_ctx) << IRDMA_AEQE_CMPL_CTXT_S; + } else { + ae_src = (u8)FIELD_GET(IRDMA_AEQE_AESRC, temp); + info->wqe_idx = (u16)FIELD_GET(IRDMA_AEQE_WQDESCIDX, temp); + info->qp_cq_id = (u32)FIELD_GET(IRDMA_AEQE_QPCQID_LOW, temp) | ((u32)FIELD_GET(IRDMA_AEQE_QPCQID_HI, temp) << 18); - info->ae_id = (u16)FIELD_GET(IRDMA_AEQE_AECODE, temp); - info->tcp_state = (u8)FIELD_GET(IRDMA_AEQE_TCPSTATE, temp); - info->iwarp_state = (u8)FIELD_GET(IRDMA_AEQE_IWSTATE, temp); - info->q2_data_written = (u8)FIELD_GET(IRDMA_AEQE_Q2DATA, temp); - info->aeqe_overflow = (bool)FIELD_GET(IRDMA_AEQE_OVERFLOW, temp); + info->ae_id = (u16)FIELD_GET(IRDMA_AEQE_AECODE, temp); + info->tcp_state = (u8)FIELD_GET(IRDMA_AEQE_TCPSTATE, temp); + info->iwarp_state = (u8)FIELD_GET(IRDMA_AEQE_IWSTATE, temp); + info->q2_data_written = (u8)FIELD_GET(IRDMA_AEQE_Q2DATA, temp); + info->aeqe_overflow = (bool)FIELD_GET(IRDMA_AEQE_OVERFLOW, + temp); + } info->ae_src = ae_src; switch (info->ae_id) { + case IRDMA_AE_SRQ_LIMIT: + info->srq = true; + /* [63:6] from CMPL_CTXT, [5:0] from WQDESCIDX. 
*/ + info->compl_ctx = compl_ctx | info->wqe_idx; + ae_src = IRDMA_AE_SOURCE_RSVD; + break; case IRDMA_AE_PRIV_OPERATION_DENIED: case IRDMA_AE_AMP_INVALIDATE_TYPE1_MW: case IRDMA_AE_AMP_MWBIND_ZERO_BASED_TYPE1_MW: @@ -4346,6 +4383,10 @@ int irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR: case IRDMA_AE_LLP_SEGMENT_TOO_SMALL: case IRDMA_AE_LLP_TOO_MANY_RETRIES: + case IRDMA_AE_LLP_TOO_MANY_RNRS: + case IRDMA_AE_REMOTE_QP_CATASTROPHIC: + case IRDMA_AE_LOCAL_QP_CATASTROPHIC: + case IRDMA_AE_RCE_QP_CATASTROPHIC: case IRDMA_AE_LLP_DOUBT_REACHABILITY: case IRDMA_AE_LLP_CONNECTION_ESTABLISHED: case IRDMA_AE_RESET_SENT: @@ -4391,6 +4432,7 @@ int irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, info->qp = true; info->rq = true; info->compl_ctx = compl_ctx; + info->err_rq_idx_valid = true; break; case IRDMA_AE_SOURCE_CQ: case IRDMA_AE_SOURCE_CQ_0110: @@ -4406,8 +4448,18 @@ int irdma_sc_get_next_aeqe(struct irdma_sc_aeq *aeq, info->compl_ctx = compl_ctx; break; case IRDMA_AE_SOURCE_IN_RR_WR: + info->qp = true; + if (aeq->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + info->err_rq_idx_valid = true; + info->compl_ctx = compl_ctx; + info->in_rdrsp_wr = true; + break; case IRDMA_AE_SOURCE_IN_RR_WR_1011: info->qp = true; + if (aeq->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + info->sq = true; + info->err_rq_idx_valid = true; + } info->compl_ctx = compl_ctx; info->in_rdrsp_wr = true; break; diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 5e4d62cb551e..5829c72cd328 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -319,13 +319,18 @@ enum irdma_cqp_op_type { #define IRDMA_AE_STAG_ZERO_INVALID 0x0206 #define IRDMA_AE_IB_RREQ_AND_Q1_FULL 0x0207 #define IRDMA_AE_IB_INVALID_REQUEST 0x0208 +#define IRDMA_AE_SRQ_LIMIT 0x0209 #define IRDMA_AE_WQE_UNEXPECTED_OPCODE 0x020a #define IRDMA_AE_WQE_INVALID_PARAMETER 0x020b #define IRDMA_AE_WQE_INVALID_FRAG_DATA 0x020c #define IRDMA_AE_IB_REMOTE_ACCESS_ERROR 0x020d #define IRDMA_AE_IB_REMOTE_OP_ERROR 0x020e +#define IRDMA_AE_SRQ_CATASTROPHIC_ERROR 0x020f #define IRDMA_AE_WQE_LSMM_TOO_LONG 0x0220 +#define IRDMA_AE_ATOMIC_ALIGNMENT 0x0221 +#define IRDMA_AE_ATOMIC_MASK 0x0222 #define IRDMA_AE_INVALID_REQUEST 0x0223 +#define IRDMA_AE_PCIE_ATOMIC_DISABLE 0x0224 #define IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN 0x0301 #define IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER 0x0303 #define IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION 0x0304 @@ -366,8 +371,12 @@ enum irdma_cqp_op_type { #define IRDMA_AE_LCE_QP_CATASTROPHIC 0x0700 #define IRDMA_AE_LCE_FUNCTION_CATASTROPHIC 0x0701 #define IRDMA_AE_LCE_CQ_CATASTROPHIC 0x0702 +#define IRDMA_AE_REMOTE_QP_CATASTROPHIC 0x0703 +#define IRDMA_AE_LOCAL_QP_CATASTROPHIC 0x0704 +#define IRDMA_AE_RCE_QP_CATASTROPHIC 0x0705 #define IRDMA_AE_QP_SUSPEND_COMPLETE 0x0900 #define IRDMA_AE_CQP_DEFERRED_COMPLETE 0x0901 +#define IRDMA_AE_ADAPTER_CATASTROPHIC 0x0B0B #define FLD_LS_64(dev, val, field) \ (((u64)(val) << (dev)->hw_shifts[field ## _S]) & (dev)->hw_masks[field ## _M]) @@ -538,6 +547,17 @@ enum irdma_cqp_op_type { #define IRDMA_AEQE_Q2DATA GENMASK_ULL(62, 61) #define IRDMA_AEQE_VALID BIT_ULL(63) +#define IRDMA_AEQE_Q2DATA_GEN_3 GENMASK_ULL(5, 4) +#define IRDMA_AEQE_TCPSTATE_GEN_3 GENMASK_ULL(3, 0) +#define IRDMA_AEQE_QPCQID_GEN_3 GENMASK_ULL(24, 0) +#define IRDMA_AEQE_AECODE_GEN_3 GENMASK_ULL(61, 50) +#define IRDMA_AEQE_OVERFLOW_GEN_3 BIT_ULL(62) +#define IRDMA_AEQE_WQDESCIDX_GEN_3 GENMASK_ULL(49, 32) +#define 
IRDMA_AEQE_IWSTATE_GEN_3 GENMASK_ULL(31, 29) +#define IRDMA_AEQE_AESRC_GEN_3 GENMASK_ULL(28, 25) +#define IRDMA_AEQE_CMPL_CTXT_S 6 +#define IRDMA_AEQE_CMPL_CTXT GENMASK_ULL(63, 6) + #define IRDMA_UDA_QPSQ_NEXT_HDR GENMASK_ULL(23, 16) #define IRDMA_UDA_QPSQ_OPCODE GENMASK_ULL(37, 32) #define IRDMA_UDA_QPSQ_L4LEN GENMASK_ULL(45, 42) @@ -560,11 +580,14 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_WQEVALID BIT_ULL(63) #define IRDMA_CQPSQ_TPHVAL GENMASK_ULL(7, 0) -#define IRDMA_CQPSQ_VSIIDX GENMASK_ULL(17, 8) +#define IRDMA_CQPSQ_VSIIDX GENMASK_ULL(23, 8) #define IRDMA_CQPSQ_TPHEN BIT_ULL(60) #define IRDMA_CQPSQ_PBUFADDR IRDMA_CQPHC_QPCTX +#define IRDMA_CQPSQ_PASID GENMASK_ULL(51, 32) +#define IRDMA_CQPSQ_PASID_VALID BIT_ULL(62) + /* Create/Modify/Destroy QP */ #define IRDMA_CQPSQ_QP_NEWMSS GENMASK_ULL(45, 32) @@ -600,6 +623,8 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_CQ_CQCTX GENMASK_ULL(62, 0) #define IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD GENMASK(17, 0) +#define IRDMA_CQPSQ_CQ_CQID_HIGH GENMASK_ULL(52, 50) +#define IRDMA_CQPSQ_CQ_CEQID_HIGH GENMASK_ULL(59, 54) #define IRDMA_CQPSQ_CQ_OP GENMASK_ULL(37, 32) #define IRDMA_CQPSQ_CQ_CQRESIZE BIT_ULL(43) #define IRDMA_CQPSQ_CQ_LPBLSIZE GENMASK_ULL(45, 44) @@ -681,6 +706,8 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_CEQ_CEQSIZE GENMASK_ULL(21, 0) #define IRDMA_CQPSQ_CEQ_CEQID GENMASK_ULL(9, 0) +#define IRDMA_CQPSQ_CEQ_CEQID_HIGH GENMASK_ULL(15, 10) + #define IRDMA_CQPSQ_CEQ_LPBLSIZE IRDMA_CQPSQ_CQ_LPBLSIZE #define IRDMA_CQPSQ_CEQ_VMAP BIT_ULL(47) #define IRDMA_CQPSQ_CEQ_ITRNOEXPIRE BIT_ULL(46) diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c index 55b10a8b6fd3..f01ec21edd37 100644 --- a/drivers/infiniband/hw/irdma/hw.c +++ b/drivers/infiniband/hw/irdma/hw.c @@ -282,6 +282,13 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) if (ret) break; + if (info->aeqe_overflow) { + ibdev_err(&iwdev->ibdev, "AEQ has overflowed\n"); + rf->reset = true; + rf->gen_ops.request_reset(rf); + return; + } + aeqcnt++; ibdev_dbg(&iwdev->ibdev, "AEQ: ae_id = 0x%x bool qp=%d qp_id = %d tcp_state=%d iwarp_state=%d ae_src=%d\n", @@ -442,6 +449,9 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) case IRDMA_AE_LCE_FUNCTION_CATASTROPHIC: case IRDMA_AE_LLP_TOO_MANY_RNRS: case IRDMA_AE_LCE_CQ_CATASTROPHIC: + case IRDMA_AE_REMOTE_QP_CATASTROPHIC: + case IRDMA_AE_LOCAL_QP_CATASTROPHIC: + case IRDMA_AE_RCE_QP_CATASTROPHIC: case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG: default: ibdev_err(&iwdev->ibdev, "abnormal ae_id = 0x%x bool qp=%d qp_id = %d, ae_src=%d\n", @@ -688,7 +698,9 @@ static void irdma_destroy_aeq(struct irdma_pci_f *rf) int status = -EBUSY; if (!rf->msix_shared) { - rf->sc_dev.irq_ops->irdma_cfg_aeq(&rf->sc_dev, rf->iw_msixtbl->idx, false); + if (rf->sc_dev.privileged) + rf->sc_dev.irq_ops->irdma_cfg_aeq(&rf->sc_dev, + rf->iw_msixtbl->idx, false); irdma_destroy_irq(rf, rf->iw_msixtbl, rf); } if (rf->reset) @@ -754,9 +766,10 @@ static void irdma_del_ceq_0(struct irdma_pci_f *rf) if (rf->msix_shared) { msix_vec = &rf->iw_msixtbl[0]; - rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, - msix_vec->ceq_id, - msix_vec->idx, false); + if (rf->sc_dev.privileged) + rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, + msix_vec->ceq_id, + msix_vec->idx, false); irdma_destroy_irq(rf, msix_vec, rf); } else { msix_vec = &rf->iw_msixtbl[1]; @@ -787,8 +800,10 @@ static void irdma_del_ceqs(struct irdma_pci_f *rf) msix_vec = &rf->iw_msixtbl[2]; for (i = 1; i < rf->ceqs_count; i++, msix_vec++, iwceq++) { - 
rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, msix_vec->ceq_id, - msix_vec->idx, false); + if (rf->sc_dev.privileged) + rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, + msix_vec->ceq_id, + msix_vec->idx, false); irdma_destroy_irq(rf, msix_vec, iwceq); irdma_cqp_ceq_cmd(&rf->sc_dev, &iwceq->sc_ceq, IRDMA_OP_CEQ_DESTROY); @@ -1211,9 +1226,13 @@ static int irdma_cfg_ceq_vector(struct irdma_pci_f *rf, struct irdma_ceq *iwceq, } msix_vec->ceq_id = ceq_id; - rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, ceq_id, msix_vec->idx, true); - - return 0; + if (rf->sc_dev.privileged) + rf->sc_dev.irq_ops->irdma_cfg_ceq(&rf->sc_dev, ceq_id, + msix_vec->idx, true); + else + status = irdma_vchnl_req_ceq_vec_map(&rf->sc_dev, ceq_id, + msix_vec->idx); + return status; } /** @@ -1226,7 +1245,7 @@ static int irdma_cfg_ceq_vector(struct irdma_pci_f *rf, struct irdma_ceq *iwceq, static int irdma_cfg_aeq_vector(struct irdma_pci_f *rf) { struct irdma_msix_vector *msix_vec = rf->iw_msixtbl; - u32 ret = 0; + int ret = 0; if (!rf->msix_shared) { snprintf(msix_vec->name, sizeof(msix_vec->name) - 1, @@ -1237,12 +1256,16 @@ static int irdma_cfg_aeq_vector(struct irdma_pci_f *rf) } if (ret) { ibdev_dbg(&rf->iwdev->ibdev, "ERR: aeq irq config fail\n"); - return -EINVAL; + return ret; } - rf->sc_dev.irq_ops->irdma_cfg_aeq(&rf->sc_dev, msix_vec->idx, true); + if (rf->sc_dev.privileged) + rf->sc_dev.irq_ops->irdma_cfg_aeq(&rf->sc_dev, msix_vec->idx, + true); + else + ret = irdma_vchnl_req_aeq_vec_map(&rf->sc_dev, msix_vec->idx); - return 0; + return ret; } /** @@ -1250,13 +1273,13 @@ static int irdma_cfg_aeq_vector(struct irdma_pci_f *rf) * @rf: RDMA PCI function * @iwceq: pointer to the ceq resources to be created * @ceq_id: the id number of the iwceq - * @vsi: SC vsi struct + * @vsi_idx: vsi idx * * Return 0, if the ceq and the resources associated with it * are successfully created, otherwise return error */ static int irdma_create_ceq(struct irdma_pci_f *rf, struct irdma_ceq *iwceq, - u32 ceq_id, struct irdma_sc_vsi *vsi) + u32 ceq_id, u16 vsi_idx) { int status; struct irdma_ceq_init_info info = {}; @@ -1280,7 +1303,7 @@ static int irdma_create_ceq(struct irdma_pci_f *rf, struct irdma_ceq *iwceq, info.elem_cnt = ceq_size; iwceq->sc_ceq.ceq_id = ceq_id; info.dev = dev; - info.vsi = vsi; + info.vsi_idx = vsi_idx; status = irdma_sc_ceq_init(&iwceq->sc_ceq, &info); if (!status) { if (dev->ceq_valid) @@ -1323,7 +1346,7 @@ static int irdma_setup_ceq_0(struct irdma_pci_f *rf) } iwceq = &rf->ceqlist[0]; - status = irdma_create_ceq(rf, iwceq, 0, &rf->default_vsi); + status = irdma_create_ceq(rf, iwceq, 0, rf->default_vsi.vsi_idx); if (status) { ibdev_dbg(&rf->iwdev->ibdev, "ERR: create ceq status = %d\n", status); @@ -1358,13 +1381,13 @@ static int irdma_setup_ceq_0(struct irdma_pci_f *rf) /** * irdma_setup_ceqs - manage the device ceq's and their interrupt resources * @rf: RDMA PCI function - * @vsi: VSI structure for this CEQ + * @vsi_idx: vsi_idx for this CEQ * * Allocate a list for all device completion event queues * Create the ceq's and configure their msix interrupt vectors * Return 0, if ceqs are successfully set up, otherwise return error */ -static int irdma_setup_ceqs(struct irdma_pci_f *rf, struct irdma_sc_vsi *vsi) +static int irdma_setup_ceqs(struct irdma_pci_f *rf, u16 vsi_idx) { u32 i; u32 ceq_id; @@ -1377,7 +1400,7 @@ static int irdma_setup_ceqs(struct irdma_pci_f *rf, struct irdma_sc_vsi *vsi) i = (rf->msix_shared) ? 
1 : 2; for (ceq_id = 1; i < num_ceqs; i++, ceq_id++) { iwceq = &rf->ceqlist[ceq_id]; - status = irdma_create_ceq(rf, iwceq, ceq_id, vsi); + status = irdma_create_ceq(rf, iwceq, ceq_id, vsi_idx); if (status) { ibdev_dbg(&rf->iwdev->ibdev, "ERR: create ceq status = %d\n", status); @@ -1458,7 +1481,10 @@ static int irdma_create_aeq(struct irdma_pci_f *rf) aeq_size = multiplier * hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt + hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt; aeq_size = min(aeq_size, dev->hw_attrs.max_hw_aeq_size); - + /* GEN_3 does not support virtual AEQ. Cap at max Kernel alloc size */ + if (rf->rdma_ver == IRDMA_GEN_3) + aeq_size = min(aeq_size, (u32)((PAGE_SIZE << MAX_PAGE_ORDER) / + sizeof(struct irdma_sc_aeqe))); aeq->mem.size = ALIGN(sizeof(struct irdma_sc_aeqe) * aeq_size, IRDMA_AEQ_ALIGNMENT); aeq->mem.va = dma_alloc_coherent(dev->hw->device, aeq->mem.size, @@ -1466,6 +1492,8 @@ static int irdma_create_aeq(struct irdma_pci_f *rf) GFP_KERNEL | __GFP_NOWARN); if (aeq->mem.va) goto skip_virt_aeq; + else if (rf->rdma_ver == IRDMA_GEN_3) + return -ENOMEM; /* physically mapped aeq failed. setup virtual aeq */ status = irdma_create_virt_aeq(rf, aeq_size); @@ -1739,9 +1767,6 @@ void irdma_rt_deinit_hw(struct irdma_device *iwdev) irdma_del_local_mac_entry(iwdev->rf, (u8)iwdev->mac_ip_table_idx); fallthrough; - case AEQ_CREATED: - case PBLE_CHUNK_MEM: - case CEQS_CREATED: case IEQ_CREATED: if (!iwdev->roce_mode) irdma_puda_dele_rsrc(&iwdev->vsi, IRDMA_PUDA_RSRC_TYPE_IEQ, @@ -1824,13 +1849,17 @@ void irdma_ctrl_deinit_hw(struct irdma_pci_f *rf) enum init_completion_state state = rf->init_state; rf->init_state = INVALID_STATE; - if (rf->rsrc_created) { + + switch (state) { + case AEQ_CREATED: irdma_destroy_aeq(rf); + fallthrough; + case PBLE_CHUNK_MEM: irdma_destroy_pble_prm(rf->pble_rsrc); + fallthrough; + case CEQS_CREATED: irdma_del_ceqs(rf); - rf->rsrc_created = false; - } - switch (state) { + fallthrough; case CEQ0_CREATED: irdma_del_ceq_0(rf); fallthrough; @@ -1909,32 +1938,6 @@ int irdma_rt_init_hw(struct irdma_device *iwdev, break; iwdev->init_state = IEQ_CREATED; } - if (!rf->rsrc_created) { - status = irdma_setup_ceqs(rf, &iwdev->vsi); - if (status) - break; - - iwdev->init_state = CEQS_CREATED; - - status = irdma_hmc_init_pble(&rf->sc_dev, - rf->pble_rsrc); - if (status) { - irdma_del_ceqs(rf); - break; - } - - iwdev->init_state = PBLE_CHUNK_MEM; - - status = irdma_setup_aeq(rf); - if (status) { - irdma_destroy_pble_prm(rf->pble_rsrc); - irdma_del_ceqs(rf); - break; - } - iwdev->init_state = AEQ_CREATED; - rf->rsrc_created = true; - } - if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_1) irdma_alloc_set_mac(iwdev); irdma_add_ip(iwdev); @@ -2016,6 +2019,25 @@ int irdma_ctrl_init_hw(struct irdma_pci_f *rf) } INIT_WORK(&rf->cqp_cmpl_work, cqp_compl_worker); irdma_sc_ccq_arm(dev->ccq); + + status = irdma_setup_ceqs(rf, rf->iwdev ? 
rf->iwdev->vsi_num : 0); + if (status) + break; + + rf->init_state = CEQS_CREATED; + + status = irdma_hmc_init_pble(&rf->sc_dev, + rf->pble_rsrc); + if (status) + break; + + rf->init_state = PBLE_CHUNK_MEM; + + status = irdma_setup_aeq(rf); + if (status) + break; + rf->init_state = AEQ_CREATED; + return 0; } while (0); diff --git a/drivers/infiniband/hw/irdma/ig3rdma_hw.c b/drivers/infiniband/hw/irdma/ig3rdma_hw.c index 83ef6af82a8f..1d582c50e4d2 100644 --- a/drivers/infiniband/hw/irdma/ig3rdma_hw.c +++ b/drivers/infiniband/hw/irdma/ig3rdma_hw.c @@ -5,8 +5,53 @@ #include "protos.h" #include "ig3rdma_hw.h" +/** + * ig3rdma_ena_irq - Enable interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void ig3rdma_ena_irq(struct irdma_sc_dev *dev, u32 idx) +{ + u32 val; + u32 int_stride = 1; /* one u32 per register */ + + if (dev->is_pf) + int_stride = 0x400; + else + idx--; /* VFs use DYN_CTL_N */ + + val = FIELD_PREP(IRDMA_GLINT_DYN_CTL_INTENA, 1) | + FIELD_PREP(IRDMA_GLINT_DYN_CTL_CLEARPBA, 1); + + writel(val, dev->hw_regs[IRDMA_GLINT_DYN_CTL] + (idx * int_stride)); +} + +/** + * ig3rdma_disable_irq - Disable interrupt + * @dev: pointer to the device structure + * @idx: vector index + */ +static void ig3rdma_disable_irq(struct irdma_sc_dev *dev, u32 idx) +{ + u32 int_stride = 1; /* one u32 per register */ + + if (dev->is_pf) + int_stride = 0x400; + else + idx--; /* VFs use DYN_CTL_N */ + + writel(0, dev->hw_regs[IRDMA_GLINT_DYN_CTL] + (idx * int_stride)); +} + +static const struct irdma_irq_ops ig3rdma_irq_ops = { + .irdma_dis_irq = ig3rdma_disable_irq, + .irdma_en_irq = ig3rdma_ena_irq, +}; + void ig3rdma_init_hw(struct irdma_sc_dev *dev) { + dev->irq_ops = &ig3rdma_irq_ops; + dev->hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_3; dev->hw_attrs.uk_attrs.max_hw_wq_frags = IG3RDMA_MAX_WQ_FRAGMENT_COUNT; dev->hw_attrs.uk_attrs.max_hw_read_sges = IG3RDMA_MAX_SGE_RD; diff --git a/drivers/infiniband/hw/irdma/irdma.h b/drivers/infiniband/hw/irdma/irdma.h index 4dc6bf5b2e97..0544cbad4a48 100644 --- a/drivers/infiniband/hw/irdma/irdma.h +++ b/drivers/infiniband/hw/irdma/irdma.h @@ -32,7 +32,16 @@ #define IRDMA_PFHMC_SDDATALOW_PMSDDATALOW GENMASK(31, 12) #define IRDMA_PFHMC_SDCMD_PMSDWR BIT(31) -#define IRDMA_INVALID_CQ_IDX 0xffffffff +#define IRDMA_INVALID_CQ_IDX 0xffffffff +#define IRDMA_Q_INVALID_IDX 0xffff + +enum irdma_dyn_idx_t { + IRDMA_IDX_ITR0 = 0, + IRDMA_IDX_ITR1 = 1, + IRDMA_IDX_ITR2 = 2, + IRDMA_IDX_NOITR = 3, +}; + enum irdma_registers { IRDMA_CQPTAIL, IRDMA_CQPDB, diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index 5d1371891c4c..17169338045a 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -127,12 +127,12 @@ enum init_completion_state { HMC_OBJS_CREATED, HW_RSRC_INITIALIZED, CCQ_CREATED, - CEQ0_CREATED, /* Last state of probe */ - ILQ_CREATED, - IEQ_CREATED, + CEQ0_CREATED, CEQS_CREATED, PBLE_CHUNK_MEM, AEQ_CREATED, + ILQ_CREATED, + IEQ_CREATED, /* Last state of probe */ IP_ADDR_REGISTERED, /* Last state of open */ }; diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 2b93a70432be..0faf9cf80fa6 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -472,6 +472,8 @@ struct irdma_sc_aeq { u32 msix_idx; u8 polarity; bool virtual_map:1; + bool pasid_valid:1; + u32 pasid; }; struct irdma_sc_ceq { @@ -487,13 +489,15 @@ struct irdma_sc_ceq { u8 tph_val; u32 first_pm_pbl_idx; u8 polarity; - struct irdma_sc_vsi *vsi; + 
u16 vsi_idx; struct irdma_sc_cq **reg_cq; u32 reg_cq_size; spinlock_t req_cq_lock; /* protect access to reg_cq array */ bool virtual_map:1; bool tph_en:1; bool itr_no_expire:1; + bool pasid_valid:1; + u32 pasid; }; struct irdma_sc_cq { @@ -501,6 +505,7 @@ struct irdma_sc_cq { u64 cq_pa; u64 shadow_area_pa; struct irdma_sc_dev *dev; + u16 vsi_idx; struct irdma_sc_vsi *vsi; void *pbl_list; void *back_cq; @@ -834,8 +839,8 @@ struct irdma_ceq_init_info { bool itr_no_expire:1; u8 pbl_chunk_size; u8 tph_val; + u16 vsi_idx; u32 first_pm_pbl_idx; - struct irdma_sc_vsi *vsi; struct irdma_sc_cq **reg_cq; u32 reg_cq_idx; }; @@ -1042,9 +1047,11 @@ struct irdma_aeqe_info { bool cq:1; bool sq:1; bool rq:1; + bool srq:1; bool in_rdrsp_wr:1; bool out_rdrsp:1; bool aeqe_overflow:1; + bool err_rq_idx_valid:1; u8 q2_data_written; u8 ae_src; }; diff --git a/drivers/infiniband/hw/irdma/virtchnl.c b/drivers/infiniband/hw/irdma/virtchnl.c index fcb8ef2dd28b..fc669b5a6b77 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.c +++ b/drivers/infiniband/hw/irdma/virtchnl.c @@ -108,6 +108,8 @@ static int irdma_vchnl_req_verify_resp(struct irdma_vchnl_req *vchnl_req, return -EBADMSG; break; case IRDMA_VCHNL_OP_GET_REG_LAYOUT: + case IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP: + case IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP: break; default: return -EOPNOTSUPP; @@ -313,6 +315,88 @@ int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev) return 0; } +/** + * irdma_vchnl_req_aeq_vec_map - Map AEQ to vector on this function + * @dev: RDMA device pointer + * @v_idx: vector index + */ +int irdma_vchnl_req_aeq_vec_map(struct irdma_sc_dev *dev, u32 v_idx) +{ + struct irdma_vchnl_req_init_info info = {}; + struct irdma_vchnl_qvlist_info *qvl; + struct irdma_vchnl_qv_info *qv; + u16 qvl_size, num_vectors = 1; + int ret; + + if (!dev->vchnl_up) + return -EBUSY; + + qvl_size = struct_size(qvl, qv_info, num_vectors); + + qvl = kzalloc(qvl_size, GFP_KERNEL); + if (!qvl) + return -ENOMEM; + + qvl->num_vectors = 1; + qv = qvl->qv_info; + + qv->ceq_idx = IRDMA_Q_INVALID_IDX; + qv->v_idx = v_idx; + qv->itr_idx = IRDMA_IDX_ITR0; + + info.op_code = IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP; + info.op_ver = IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP_V0; + info.req_parm = qvl; + info.req_parm_len = qvl_size; + + ret = irdma_vchnl_req_send_sync(dev, &info); + kfree(qvl); + + return ret; +} + +/** + * irdma_vchnl_req_ceq_vec_map - Map CEQ to vector on this function + * @dev: RDMA device pointer + * @ceq_id: CEQ index + * @v_idx: vector index + */ +int irdma_vchnl_req_ceq_vec_map(struct irdma_sc_dev *dev, u16 ceq_id, u32 v_idx) +{ + struct irdma_vchnl_req_init_info info = {}; + struct irdma_vchnl_qvlist_info *qvl; + struct irdma_vchnl_qv_info *qv; + u16 qvl_size, num_vectors = 1; + int ret; + + if (!dev->vchnl_up) + return -EBUSY; + + qvl_size = struct_size(qvl, qv_info, num_vectors); + + qvl = kzalloc(qvl_size, GFP_KERNEL); + if (!qvl) + return -ENOMEM; + + qvl->num_vectors = num_vectors; + qv = qvl->qv_info; + + qv->aeq_idx = IRDMA_Q_INVALID_IDX; + qv->ceq_idx = ceq_id; + qv->v_idx = v_idx; + qv->itr_idx = IRDMA_IDX_ITR0; + + info.op_code = IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP; + info.op_ver = IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP_V0; + info.req_parm = qvl; + info.req_parm_len = qvl_size; + + ret = irdma_vchnl_req_send_sync(dev, &info); + kfree(qvl); + + return ret; +} + /** * irdma_vchnl_req_get_ver - Request Channel version * @dev: RDMA device pointer diff --git a/drivers/infiniband/hw/irdma/virtchnl.h b/drivers/infiniband/hw/irdma/virtchnl.h index 20526c0b4285..3af725587754 100644 --- 
a/drivers/infiniband/hw/irdma/virtchnl.h +++ b/drivers/infiniband/hw/irdma/virtchnl.h @@ -15,6 +15,8 @@ #define IRDMA_VCHNL_OP_GET_HMC_FCN_V2 2 #define IRDMA_VCHNL_OP_PUT_HMC_FCN_V0 0 #define IRDMA_VCHNL_OP_GET_REG_LAYOUT_V0 0 +#define IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP_V0 0 +#define IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP_V0 0 #define IRDMA_VCHNL_OP_GET_RDMA_CAPS_V0 0 #define IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE 1 @@ -53,6 +55,8 @@ enum irdma_vchnl_ops { IRDMA_VCHNL_OP_PUT_HMC_FCN = 2, IRDMA_VCHNL_OP_GET_REG_LAYOUT = 11, IRDMA_VCHNL_OP_GET_RDMA_CAPS = 13, + IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP = 14, + IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP = 15, }; struct irdma_vchnl_req_hmc_info { @@ -65,6 +69,18 @@ struct irdma_vchnl_resp_hmc_info { u16 qs_handle[IRDMA_MAX_USER_PRIORITY]; } __packed; +struct irdma_vchnl_qv_info { + u32 v_idx; + u16 ceq_idx; + u16 aeq_idx; + u8 itr_idx; +}; + +struct irdma_vchnl_qvlist_info { + u32 num_vectors; + struct irdma_vchnl_qv_info qv_info[]; +}; + struct irdma_vchnl_op_buf { u16 op_code; u16 op_ver; @@ -137,4 +153,7 @@ int irdma_vchnl_req_get_caps(struct irdma_sc_dev *dev); int irdma_vchnl_req_get_resp(struct irdma_sc_dev *dev, struct irdma_vchnl_req *vc_req); int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev); +int irdma_vchnl_req_aeq_vec_map(struct irdma_sc_dev *dev, u32 v_idx); +int irdma_vchnl_req_ceq_vec_map(struct irdma_sc_dev *dev, u16 ceq_id, + u32 v_idx); #endif /* IRDMA_VIRTCHNL_H */ From patchwork Sat Aug 24 03:19:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776203 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0820A626CB; Sat, 24 Aug 2024 03:20:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469652; cv=none; b=czo6qKmmTH4SHoxSM2FAhZOzQVAHRI0pD7UUQkSh8YHRHjL7nSWZGgWLTTc/KpW/wumUf0ejuuDDhtd52RmFovtYXAkJvliZ0Slg7k7GS38ycn6erQdFhIHzjdpeH8VCxCY1eMrZ4YS54Qj1/3WzbuAkAEN1l7dHi65duSJ+D+0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469652; c=relaxed/simple; bh=VuK0HKMUf1REPDxLstnaNfetM5RNl+vR7MOn8AbboTY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=IcV30KZiBDsX2UvA1IHYwhwwa29TK8sl485IO12e49HplcOwoGfZcoZtD/UBVyjs++vkx1fvU1i+4UAKIf8y0F7nC1CMW2X4qlao5QH6Iect6stqfmONMCNFGQeNLM9iDsFIRhrWMwDXni141ladjm2JSRIKSWn/tmZRnQRP4P4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=AdXbE9qr; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="AdXbE9qr" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469651; x=1756005651; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=VuK0HKMUf1REPDxLstnaNfetM5RNl+vR7MOn8AbboTY=; 
b=AdXbE9qrUDf5TUuls/Gfo337NVskPm3I++/QHn4yNyEw6yduMaSyJ5n9 1cQj3DsZWLZAjVcymjxPzyxPch3EhtPy3prFE1SW+hoHJ6zoIZYZP2Atc 6nfCpY9zBNqKeeCrRMeziTd5tcJYnDZKQL0zY79q5XcVa/CUkOOji2NUw P9wDPPvcmIZOlCYXyNcf+sfjb2M6MuBOOp28lkWujuTYlPftGcYXZGWTC 970Nm2qhFMj1hiU9LrO8TE+Yj7lOwykbo4bZkxKMFKagSHF1prYeXwnRx rDu2zHqoWKYIB0dh04OuqfcYvO9pOeGfo2f/K6aIuJUXf/lmGMqwK6PnH w==; X-CSE-ConnectionGUID: EyPbBKFEQbyy9NHHeGWBBQ== X-CSE-MsgGUID: UOuMZS7ZQN+D0s9ktzyvMA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187810" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187810" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:47 -0700 X-CSE-ConnectionGUID: Gunp0TbHRXKPQbDInM4xMw== X-CSE-MsgGUID: kYEpDc4BROC+uACkorHX1w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492118" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:46 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Krzysztof Czurylo , Tatyana Nikolova Subject: [RFC v2 15/25] RDMA/irdma: Add GEN3 HW statistics support Date: Fri, 23 Aug 2024 22:19:14 -0500 Message-Id: <20240824031924.421-16-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Krzysztof Czurylo Plug into the unified HW statistics framework by adding a hardware statistics map array for GEN3, defining the HW-specific width and location for each counter in the statistics buffer. Signed-off-by: Krzysztof Czurylo Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 33 +++++-- drivers/infiniband/hw/irdma/defs.h | 2 +- drivers/infiniband/hw/irdma/ig3rdma_hw.c | 63 +++++++++++++ drivers/infiniband/hw/irdma/type.h | 19 +++- drivers/infiniband/hw/irdma/verbs.c | 110 +++++++++++++---------- 5 files changed, 166 insertions(+), 61 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 5a5d47cf1854..88eb7a088ee7 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -1964,7 +1964,8 @@ int irdma_vsi_stats_init(struct irdma_sc_vsi *vsi, (void *)((uintptr_t)stats_buff_mem->va + IRDMA_GATHER_STATS_BUF_SIZE); - irdma_hw_stats_start_timer(vsi); + if (vsi->dev->hw_attrs.uk_attrs.hw_rev < IRDMA_GEN_3) + irdma_hw_stats_start_timer(vsi); /* when stat allocation is not required default to fcn_id. 
*/ vsi->stats_idx = info->fcn_id; @@ -2009,7 +2010,9 @@ void irdma_vsi_stats_free(struct irdma_sc_vsi *vsi) if (!vsi->pestat) return; - irdma_hw_stats_stop_timer(vsi); + + if (dev->hw_attrs.uk_attrs.hw_rev < IRDMA_GEN_3) + irdma_hw_stats_stop_timer(vsi); dma_free_coherent(vsi->pestat->hw->device, vsi->pestat->gather_info.stats_buff_mem.size, vsi->pestat->gather_info.stats_buff_mem.va, @@ -5935,14 +5938,26 @@ void irdma_cfg_aeq(struct irdma_sc_dev *dev, u32 idx, bool enable) */ void sc_vsi_update_stats(struct irdma_sc_vsi *vsi) { - struct irdma_gather_stats *gather_stats; - struct irdma_gather_stats *last_gather_stats; + struct irdma_dev_hw_stats *hw_stats = &vsi->pestat->hw_stats; + struct irdma_gather_stats *gather_stats = + vsi->pestat->gather_info.gather_stats_va; + struct irdma_gather_stats *last_gather_stats = + vsi->pestat->gather_info.last_gather_stats_va; + const struct irdma_hw_stat_map *map = vsi->dev->hw_stats_map; + u16 max_stat_idx = vsi->dev->hw_attrs.max_stat_idx; + u16 i; + + if (vsi->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + for (i = 0; i < max_stat_idx; i++) { + u16 idx = map[i].byteoff / sizeof(u64); + + hw_stats->stats_val[i] = gather_stats->val[idx]; + } + return; + } - gather_stats = vsi->pestat->gather_info.gather_stats_va; - last_gather_stats = vsi->pestat->gather_info.last_gather_stats_va; - irdma_update_stats(&vsi->pestat->hw_stats, gather_stats, - last_gather_stats, vsi->dev->hw_stats_map, - vsi->dev->hw_attrs.max_stat_idx); + irdma_update_stats(hw_stats, gather_stats, last_gather_stats, + map, max_stat_idx); } /** diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 5829c72cd328..492529ada042 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -415,7 +415,7 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_STATS_USE_INST BIT_ULL(61) #define IRDMA_CQPSQ_STATS_OP GENMASK_ULL(37, 32) #define IRDMA_CQPSQ_STATS_INST_INDEX GENMASK_ULL(6, 0) -#define IRDMA_CQPSQ_STATS_HMC_FCN_INDEX GENMASK_ULL(5, 0) +#define IRDMA_CQPSQ_STATS_HMC_FCN_INDEX GENMASK_ULL(15, 0) #define IRDMA_CQPSQ_WS_WQEVALID BIT_ULL(63) #define IRDMA_CQPSQ_WS_NODEOP GENMASK_ULL(53, 52) #define IRDMA_SD_MAX GENMASK_ULL(15, 0) diff --git a/drivers/infiniband/hw/irdma/ig3rdma_hw.c b/drivers/infiniband/hw/irdma/ig3rdma_hw.c index 1d582c50e4d2..2a3d7144c771 100644 --- a/drivers/infiniband/hw/irdma/ig3rdma_hw.c +++ b/drivers/infiniband/hw/irdma/ig3rdma_hw.c @@ -48,9 +48,70 @@ static const struct irdma_irq_ops ig3rdma_irq_ops = { .irdma_en_irq = ig3rdma_ena_irq, }; +static const struct irdma_hw_stat_map ig3rdma_hw_stat_map[] = { + [IRDMA_HW_STAT_INDEX_RXVLANERR] = { 0, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXOCTS] = { 8, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXPKTS] = { 16, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXDISCARD] = { 24, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXTRUNC] = { 32, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXFRAGS] = { 40, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXMCOCTS] = { 48, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4RXMCPKTS] = { 56, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXOCTS] = { 64, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXPKTS] = { 72, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXDISCARD] = { 80, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXTRUNC] = { 88, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXFRAGS] = { 96, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXMCOCTS] = { 104, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6RXMCPKTS] = { 112, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4TXOCTS] = { 120, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4TXPKTS] = { 128, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4TXFRAGS] = { 136, 0, 
0 }, + [IRDMA_HW_STAT_INDEX_IP4TXMCOCTS] = { 144, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4TXMCPKTS] = { 152, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6TXOCTS] = { 160, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6TXPKTS] = { 168, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6TXFRAGS] = { 176, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6TXMCOCTS] = { 184, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6TXMCPKTS] = { 192, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP4TXNOROUTE] = { 200, 0, 0 }, + [IRDMA_HW_STAT_INDEX_IP6TXNOROUTE] = { 208, 0, 0 }, + [IRDMA_HW_STAT_INDEX_TCPRTXSEG] = { 216, 0, 0 }, + [IRDMA_HW_STAT_INDEX_TCPRXOPTERR] = { 224, 0, 0 }, + [IRDMA_HW_STAT_INDEX_TCPRXPROTOERR] = { 232, 0, 0 }, + [IRDMA_HW_STAT_INDEX_TCPTXSEG] = { 240, 0, 0 }, + [IRDMA_HW_STAT_INDEX_TCPRXSEGS] = { 248, 0, 0 }, + [IRDMA_HW_STAT_INDEX_UDPRXPKTS] = { 256, 0, 0 }, + [IRDMA_HW_STAT_INDEX_UDPTXPKTS] = { 264, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMARXWRS] = { 272, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMARXRDS] = { 280, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMARXSNDS] = { 288, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMATXWRS] = { 296, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMATXRDS] = { 304, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMATXSNDS] = { 312, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMAVBND] = { 320, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMAVINV] = { 328, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RXNPECNMARKEDPKTS] = { 336, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RXRPCNPHANDLED] = { 344, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RXRPCNPIGNORED] = { 352, 0, 0 }, + [IRDMA_HW_STAT_INDEX_TXNPCNPSENT] = { 360, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RNR_SENT] = { 368, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RNR_RCVD] = { 376, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMAORDLMTCNT] = { 384, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMAIRDLMTCNT] = { 392, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMARXATS] = { 408, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RDMATXATS] = { 416, 0, 0 }, + [IRDMA_HW_STAT_INDEX_NAKSEQERR] = { 424, 0, 0 }, + [IRDMA_HW_STAT_INDEX_NAKSEQERR_IMPLIED] = { 432, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RTO] = { 440, 0, 0 }, + [IRDMA_HW_STAT_INDEX_RXOOOPKTS] = { 448, 0, 0 }, + [IRDMA_HW_STAT_INDEX_ICRCERR] = { 456, 0, 0 }, +}; + void ig3rdma_init_hw(struct irdma_sc_dev *dev) { dev->irq_ops = &ig3rdma_irq_ops; + dev->hw_stats_map = ig3rdma_hw_stat_map; dev->hw_attrs.uk_attrs.hw_rev = IRDMA_GEN_3; dev->hw_attrs.uk_attrs.max_hw_wq_frags = IG3RDMA_MAX_WQ_FRAGMENT_COUNT; @@ -70,6 +131,8 @@ void ig3rdma_init_hw(struct irdma_sc_dev *dev) dev->hw_attrs.page_size_cap = SZ_4K | SZ_2M | SZ_1G; dev->hw_attrs.max_hw_ird = IG3RDMA_MAX_IRD_SIZE; dev->hw_attrs.max_hw_ord = IG3RDMA_MAX_ORD_SIZE; + dev->hw_attrs.max_stat_inst = IG3RDMA_MAX_STATS_COUNT; + dev->hw_attrs.max_stat_idx = IRDMA_HW_STAT_INDEX_MAX_GEN_3; dev->hw_attrs.uk_attrs.min_hw_wq_size = IG3RDMA_MIN_WQ_SIZE; dev->hw_attrs.uk_attrs.max_hw_srq_quanta = IRDMA_SRQ_MAX_QUANTA; dev->hw_attrs.uk_attrs.max_hw_inline = IG3RDMA_MAX_INLINE_DATA_SIZE; diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 0faf9cf80fa6..17fc72636bb7 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -156,6 +156,21 @@ enum irdma_hw_stats_index { IRDMA_HW_STAT_INDEX_RXRPCNPIGNORED = 44, IRDMA_HW_STAT_INDEX_TXNPCNPSENT = 45, IRDMA_HW_STAT_INDEX_MAX_GEN_2 = 46, + + /* gen3 */ + IRDMA_HW_STAT_INDEX_RNR_SENT = 46, + IRDMA_HW_STAT_INDEX_RNR_RCVD = 47, + IRDMA_HW_STAT_INDEX_RDMAORDLMTCNT = 48, + IRDMA_HW_STAT_INDEX_RDMAIRDLMTCNT = 49, + IRDMA_HW_STAT_INDEX_RDMARXATS = 50, + IRDMA_HW_STAT_INDEX_RDMATXATS = 51, + IRDMA_HW_STAT_INDEX_NAKSEQERR = 52, + IRDMA_HW_STAT_INDEX_NAKSEQERR_IMPLIED = 53, + IRDMA_HW_STAT_INDEX_RTO = 54, 
+ IRDMA_HW_STAT_INDEX_RXOOOPKTS = 55, + IRDMA_HW_STAT_INDEX_ICRCERR = 56, + + IRDMA_HW_STAT_INDEX_MAX_GEN_3 = 57, }; enum irdma_feature_type { @@ -569,7 +584,7 @@ struct irdma_sc_qp { struct irdma_stats_inst_info { bool use_hmc_fcn_index; u8 hmc_fn_id; - u8 stats_idx; + u16 stats_idx; }; struct irdma_up_info { @@ -1027,7 +1042,7 @@ struct irdma_qp_host_ctx_info { u32 send_cq_num; u32 rcv_cq_num; u32 rem_endpoint_idx; - u8 stats_idx; + u16 stats_idx; bool srq_valid:1; bool tcp_info_valid:1; bool iwarp_info_valid:1; diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 65466c1c72c5..569eb5c5f78e 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -3914,40 +3914,7 @@ static int irdma_req_notify_cq(struct ib_cq *ibcq, return ret; } -static int irdma_roce_port_immutable(struct ib_device *ibdev, u32 port_num, - struct ib_port_immutable *immutable) -{ - struct ib_port_attr attr; - int err; - - immutable->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE_UDP_ENCAP; - err = ib_query_port(ibdev, port_num, &attr); - if (err) - return err; - - immutable->max_mad_size = IB_MGMT_MAD_SIZE; - immutable->pkey_tbl_len = attr.pkey_tbl_len; - immutable->gid_tbl_len = attr.gid_tbl_len; - - return 0; -} - -static int irdma_iw_port_immutable(struct ib_device *ibdev, u32 port_num, - struct ib_port_immutable *immutable) -{ - struct ib_port_attr attr; - int err; - - immutable->core_cap_flags = RDMA_CORE_PORT_IWARP; - err = ib_query_port(ibdev, port_num, &attr); - if (err) - return err; - immutable->gid_tbl_len = attr.gid_tbl_len; - - return 0; -} - -static const struct rdma_stat_desc irdma_hw_stat_names[] = { +static const struct rdma_stat_desc irdma_hw_stat_descs[] = { /* gen1 - 32-bit */ [IRDMA_HW_STAT_INDEX_IP4RXDISCARD].name = "ip4InDiscards", [IRDMA_HW_STAT_INDEX_IP4RXTRUNC].name = "ip4InTruncatedPkts", @@ -3955,9 +3922,6 @@ static const struct rdma_stat_desc irdma_hw_stat_names[] = { [IRDMA_HW_STAT_INDEX_IP6RXDISCARD].name = "ip6InDiscards", [IRDMA_HW_STAT_INDEX_IP6RXTRUNC].name = "ip6InTruncatedPkts", [IRDMA_HW_STAT_INDEX_IP6TXNOROUTE].name = "ip6OutNoRoutes", - [IRDMA_HW_STAT_INDEX_TCPRTXSEG].name = "tcpRetransSegs", - [IRDMA_HW_STAT_INDEX_TCPRXOPTERR].name = "tcpInOptErrors", - [IRDMA_HW_STAT_INDEX_TCPRXPROTOERR].name = "tcpInProtoErrors", [IRDMA_HW_STAT_INDEX_RXVLANERR].name = "rxVlanErrors", /* gen1 - 64-bit */ [IRDMA_HW_STAT_INDEX_IP4RXOCTS].name = "ip4InOctets", @@ -3976,16 +3940,14 @@ static const struct rdma_stat_desc irdma_hw_stat_names[] = { [IRDMA_HW_STAT_INDEX_IP6TXPKTS].name = "ip6OutPkts", [IRDMA_HW_STAT_INDEX_IP6TXFRAGS].name = "ip6OutSegRqd", [IRDMA_HW_STAT_INDEX_IP6TXMCPKTS].name = "ip6OutMcastPkts", - [IRDMA_HW_STAT_INDEX_TCPRXSEGS].name = "tcpInSegs", - [IRDMA_HW_STAT_INDEX_TCPTXSEG].name = "tcpOutSegs", - [IRDMA_HW_STAT_INDEX_RDMARXRDS].name = "iwInRdmaReads", - [IRDMA_HW_STAT_INDEX_RDMARXSNDS].name = "iwInRdmaSends", - [IRDMA_HW_STAT_INDEX_RDMARXWRS].name = "iwInRdmaWrites", - [IRDMA_HW_STAT_INDEX_RDMATXRDS].name = "iwOutRdmaReads", - [IRDMA_HW_STAT_INDEX_RDMATXSNDS].name = "iwOutRdmaSends", - [IRDMA_HW_STAT_INDEX_RDMATXWRS].name = "iwOutRdmaWrites", - [IRDMA_HW_STAT_INDEX_RDMAVBND].name = "iwRdmaBnd", - [IRDMA_HW_STAT_INDEX_RDMAVINV].name = "iwRdmaInv", + [IRDMA_HW_STAT_INDEX_RDMARXRDS].name = "InRdmaReads", + [IRDMA_HW_STAT_INDEX_RDMARXSNDS].name = "InRdmaSends", + [IRDMA_HW_STAT_INDEX_RDMARXWRS].name = "InRdmaWrites", + [IRDMA_HW_STAT_INDEX_RDMATXRDS].name = "OutRdmaReads", + [IRDMA_HW_STAT_INDEX_RDMATXSNDS].name 
= "OutRdmaSends", + [IRDMA_HW_STAT_INDEX_RDMATXWRS].name = "OutRdmaWrites", + [IRDMA_HW_STAT_INDEX_RDMAVBND].name = "RdmaBnd", + [IRDMA_HW_STAT_INDEX_RDMAVINV].name = "RdmaInv", /* gen2 - 32-bit */ [IRDMA_HW_STAT_INDEX_RXRPCNPHANDLED].name = "cnpHandled", @@ -3999,9 +3961,59 @@ static const struct rdma_stat_desc irdma_hw_stat_names[] = { [IRDMA_HW_STAT_INDEX_UDPRXPKTS].name = "RxUDP", [IRDMA_HW_STAT_INDEX_UDPTXPKTS].name = "TxUDP", [IRDMA_HW_STAT_INDEX_RXNPECNMARKEDPKTS].name = "RxECNMrkd", - + [IRDMA_HW_STAT_INDEX_TCPRTXSEG].name = "RetransSegs", + [IRDMA_HW_STAT_INDEX_TCPRXOPTERR].name = "InOptErrors", + [IRDMA_HW_STAT_INDEX_TCPRXPROTOERR].name = "InProtoErrors", + [IRDMA_HW_STAT_INDEX_TCPRXSEGS].name = "InSegs", + [IRDMA_HW_STAT_INDEX_TCPTXSEG].name = "OutSegs", + + /* gen3 */ + [IRDMA_HW_STAT_INDEX_RNR_SENT].name = "RNR sent", + [IRDMA_HW_STAT_INDEX_RNR_RCVD].name = "RNR received", + [IRDMA_HW_STAT_INDEX_RDMAORDLMTCNT].name = "ord limit count", + [IRDMA_HW_STAT_INDEX_RDMAIRDLMTCNT].name = "ird limit count", + [IRDMA_HW_STAT_INDEX_RDMARXATS].name = "Rx ATS", + [IRDMA_HW_STAT_INDEX_RDMATXATS].name = "Tx ATS", + [IRDMA_HW_STAT_INDEX_NAKSEQERR].name = "Nak Sequence Error", + [IRDMA_HW_STAT_INDEX_NAKSEQERR_IMPLIED].name = "Nak Sequence Error Implied", + [IRDMA_HW_STAT_INDEX_RTO].name = "RTO", + [IRDMA_HW_STAT_INDEX_RXOOOPKTS].name = "Rcvd Out of order packets", + [IRDMA_HW_STAT_INDEX_ICRCERR].name = "CRC errors", }; +static int irdma_roce_port_immutable(struct ib_device *ibdev, u32 port_num, + struct ib_port_immutable *immutable) +{ + struct ib_port_attr attr; + int err; + + immutable->core_cap_flags = RDMA_CORE_PORT_IBA_ROCE_UDP_ENCAP; + err = ib_query_port(ibdev, port_num, &attr); + if (err) + return err; + + immutable->max_mad_size = IB_MGMT_MAD_SIZE; + immutable->pkey_tbl_len = attr.pkey_tbl_len; + immutable->gid_tbl_len = attr.gid_tbl_len; + + return 0; +} + +static int irdma_iw_port_immutable(struct ib_device *ibdev, u32 port_num, + struct ib_port_immutable *immutable) +{ + struct ib_port_attr attr; + int err; + + immutable->core_cap_flags = RDMA_CORE_PORT_IWARP; + err = ib_query_port(ibdev, port_num, &attr); + if (err) + return err; + immutable->gid_tbl_len = attr.gid_tbl_len; + + return 0; +} + static void irdma_get_dev_fw_str(struct ib_device *dev, char *str) { struct irdma_device *iwdev = to_iwdev(dev); @@ -4025,7 +4037,7 @@ static struct rdma_hw_stats *irdma_alloc_hw_port_stats(struct ib_device *ibdev, int num_counters = dev->hw_attrs.max_stat_idx; unsigned long lifespan = RDMA_HW_STATS_DEFAULT_LIFESPAN; - return rdma_alloc_hw_stats_struct(irdma_hw_stat_names, num_counters, + return rdma_alloc_hw_stats_struct(irdma_hw_stat_descs, num_counters, lifespan); } From patchwork Sat Aug 24 03:19:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776205 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 41ED974059; Sat, 24 Aug 2024 03:20:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469654; cv=none; b=SD2bE3qe5VjQc1buwVA5k7A/bS0v/51hsiOelDF9equFBJiWshledLiW81VdJkn2HYhU0edKcre6HfPuhz9nhQbL61TqXZTszbeUlEQcvuRs6uVsdWIlNCUia1snnF/7f+dqKHjMDvOhlbCDW363lgKf/O0T2vFMnHSWcklterc= 
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469654; c=relaxed/simple; bh=SZI9bCUMrLq6xBEgo/S2VvoAQWwgJ6sNapy+N/5CxNM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=mktjQDYTAsBwGmUE54gPkgBPOPjsjYt/jM+aNKsZNXp2fF2yg4+V+yGQOcOe2Y07RlCga+JirrMUSLIyRScF7oI7EpRaXRQKOQe5HksWToj1TqDN6w/58slKSBpNsrC+dzP2TWMgmj0A8RnPJAv3dEk+w9KyyThqeg/slTJfQe0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=NJ+IfLrv; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NJ+IfLrv" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469652; x=1756005652; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=SZI9bCUMrLq6xBEgo/S2VvoAQWwgJ6sNapy+N/5CxNM=; b=NJ+IfLrvLNnPr6CK7A0geUGuvzMgVjJefQcKIQ4eWUI3TmyADykQ1bNq bvF1Aj0U0hzN/INT8vFaMEi8ryvDN9RGv7EZHH5En4nASC+zW+icV3jGW ZPXatGvpo2Rst8iw30P9ouPUZ6lVNlAxln2KPgVqtakeMoSnPIUn6HAgm zMy3CL0z5s7/pZ9Ab5t7iMg/npepBNhsmc7IVberWyuS1mywX6tv3hjiE ZS2590sRJjhJPrR/SE9iI2MaVOb8CIWulrARrwqVitkfFb8pbgmcXaWHx pivCdLzp00lqZvvJJ+Fild1xvnaz2Y+B0r2hliFDG1JsdkWm195G0d2wF g==; X-CSE-ConnectionGUID: YFx1u4B4RtKPnnZ7FEp+Pg== X-CSE-MsgGUID: r0ZHpk6eRASo3nB8VOhJdA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187813" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187813" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:47 -0700 X-CSE-ConnectionGUID: 8IgKpnnRSP6C9Bt1O0Xqqg== X-CSE-MsgGUID: utJgaN1IRdmnVhqrWUd4cw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492121" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:47 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Mustafa Ismail , Tatyana Nikolova Subject: [RFC v2 16/25] RDMA/irdma: Introduce GEN3 vPort driver support Date: Fri, 23 Aug 2024 22:19:15 -0500 Message-Id: <20240824031924.421-17-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mustafa Ismail In the IPU model, a function can host one or more logical network endpoints called vPorts. Each vPort may be associated with either a physical or an internal communication port, and can be RDMA capable. A vPort features a netdev and, if RDMA capable, must have an associated ib_dev. This change introduces a GEN3 auxiliary vPort driver responsible for registering a verbs device for every RDMA-capable vPort. Additionally, the UAPI is updated to prevent the binding of GEN3 devices to older user-space providers. 
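
As a reading aid, the per-vPort probe sequence added by this patch reduces to the condensed sketch below. Field setup, QoS/MTU plumbing and error unwinding are trimmed, and all structures and helpers referenced here are the ones introduced or used in the diff that follows, so treat this as an outline rather than the literal implementation:

static int ig3rdma_vport_probe(struct auxiliary_device *aux_dev,
			       const struct auxiliary_device_id *id)
{
	struct idc_rdma_vport_auxiliary_dev *idc_adev =
		container_of(aux_dev, struct idc_rdma_vport_auxiliary_dev, adev);
	/* The GEN3 core driver has already probed the PCI function; its
	 * irdma_pci_f is shared by every vPort ib_dev on that function.
	 */
	struct irdma_pci_f *rf =
		auxiliary_get_drvdata(idc_adev->vdev_info->core_adev);
	struct irdma_l2params l2params = {};
	struct irdma_device *iwdev;
	int err;

	iwdev = ib_alloc_device(irdma_device, ibdev);
	if (!iwdev)
		return -ENOMEM;

	iwdev->is_vport = true;				/* one ib_dev per RDMA-capable vPort */
	iwdev->rf = rf;
	iwdev->vport_id = idc_adev->vdev_info->vport_id;
	iwdev->netdev = idc_adev->vdev_info->netdev;	/* the vPort's netdev */
	iwdev->roce_mode = true;			/* vPort devices come up in RoCE mode */

	l2params.mtu = iwdev->netdev->mtu;
	err = irdma_rt_init_hw(iwdev, &l2params);	/* per-vsi runtime init */
	if (!err)
		err = irdma_ib_register_device(iwdev);	/* register the verbs device */
	if (!err)
		auxiliary_set_drvdata(aux_dev, iwdev);
	/* on error the full version unwinds via irdma_rt_deinit_hw()/ib_dealloc_device() */
	return err;
}

Binding happens through the auxiliary bus: the id table matches "idpf.8086.rdma.vdev", so the idpf core driver is expected to create one such auxiliary device for each RDMA-capable vPort it exposes.
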
Signed-off-by: Mustafa Ismail Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ig3rdma_if.c | 110 ++++++++++++++++++++++- drivers/infiniband/hw/irdma/main.c | 12 +++ drivers/infiniband/hw/irdma/main.h | 3 + drivers/infiniband/hw/irdma/verbs.c | 12 ++- include/uapi/rdma/irdma-abi.h | 1 + 5 files changed, 135 insertions(+), 3 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ig3rdma_if.c b/drivers/infiniband/hw/irdma/ig3rdma_if.c index 70b1ed3723a4..1e2e41d32fa4 100644 --- a/drivers/infiniband/hw/irdma/ig3rdma_if.c +++ b/drivers/infiniband/hw/irdma/ig3rdma_if.c @@ -14,6 +14,23 @@ static void ig3rdma_idc_core_event_handler(struct idc_rdma_core_dev_info *cdev_i } } +static void ig3rdma_idc_vport_event_handler(struct idc_rdma_vport_dev_info *cdev_info, + struct idc_rdma_event *event) +{ + struct irdma_device *iwdev = auxiliary_get_drvdata(cdev_info->adev); + struct irdma_l2params l2params = {}; + + if (*event->type & BIT(IDC_RDMA_EVENT_AFTER_MTU_CHANGE)) { + ibdev_dbg(&iwdev->ibdev, "CLNT: new MTU = %d\n", iwdev->netdev->mtu); + if (iwdev->vsi.mtu != iwdev->netdev->mtu) { + l2params.mtu = iwdev->netdev->mtu; + l2params.mtu_changed = true; + irdma_log_invalid_mtu(l2params.mtu, &iwdev->rf->sc_dev); + irdma_change_l2params(&iwdev->vsi, &l2params); + } + } +} + static int ig3rdma_cfg_regions(struct irdma_hw *hw, struct idc_rdma_core_dev_info *cdev_info) { @@ -168,4 +185,95 @@ struct idc_rdma_core_auxiliary_drv ig3rdma_core_auxiliary_drv = { .remove = ig3rdma_core_remove, }, .event_handler = ig3rdma_idc_core_event_handler, -}; \ No newline at end of file +}; + +static int ig3rdma_vport_probe(struct auxiliary_device *aux_dev, + const struct auxiliary_device_id *id) +{ + struct idc_rdma_vport_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_vport_auxiliary_dev, adev); + struct auxiliary_device *aux_core_dev = idc_adev->vdev_info->core_adev; + struct irdma_pci_f *rf = auxiliary_get_drvdata(aux_core_dev); + struct iidc_rdma_qos_params qos_info = {}; + struct irdma_l2params l2params = {}; + struct irdma_device *iwdev; + int err; + + if (!rf) { + WARN_ON_ONCE(1); + return -ENOMEM; + } + iwdev = ib_alloc_device(irdma_device, ibdev); + /* Fill iwdev info */ + iwdev->is_vport = true; + iwdev->rf = rf; + iwdev->vport_id = idc_adev->vdev_info->vport_id; + iwdev->netdev = idc_adev->vdev_info->netdev; + iwdev->init_state = INITIAL_STATE; + iwdev->roce_cwnd = IRDMA_ROCE_CWND_DEFAULT; + iwdev->roce_ackcreds = IRDMA_ROCE_ACKCREDS_DEFAULT; + iwdev->rcv_wnd = IRDMA_CM_DEFAULT_RCV_WND_SCALED; + iwdev->rcv_wscale = IRDMA_CM_DEFAULT_RCV_WND_SCALE; + iwdev->roce_mode = true; + iwdev->push_mode = true; + + l2params.mtu = iwdev->netdev->mtu; + irdma_fill_qos_info(&l2params, &qos_info); + + err = irdma_rt_init_hw(iwdev, &l2params); + if (err) + goto err_rt_init; + + err = irdma_ib_register_device(iwdev); + if (err) + goto err_ibreg; + + auxiliary_set_drvdata(aux_dev, iwdev); + + ibdev_dbg(&iwdev->ibdev, + "INIT: Gen[%d] vport[%d] probe success. 
dev_name = %s, core_dev_name = %s, netdev=%s\n", + rf->rdma_ver, idc_adev->vdev_info->vport_id, + dev_name(&aux_dev->dev), + dev_name(&idc_adev->vdev_info->core_adev->dev), + netdev_name(idc_adev->vdev_info->netdev)); + + return 0; +err_ibreg: + irdma_rt_deinit_hw(iwdev); +err_rt_init: + ib_dealloc_device(&iwdev->ibdev); + + return err; +} + +static void ig3rdma_vport_remove(struct auxiliary_device *aux_dev) +{ + struct idc_rdma_vport_auxiliary_dev *idc_adev = + container_of(aux_dev, struct idc_rdma_vport_auxiliary_dev, adev); + struct irdma_device *iwdev = auxiliary_get_drvdata(aux_dev); + + ibdev_dbg(&iwdev->ibdev, + "INIT: Gen[%d] dev_name = %s, core_dev_name = %s, netdev=%s\n", + iwdev->rf->rdma_ver, dev_name(&aux_dev->dev), + dev_name(&idc_adev->vdev_info->core_adev->dev), + netdev_name(idc_adev->vdev_info->netdev)); + + irdma_ib_unregister_device(iwdev); +} + +static const struct auxiliary_device_id ig3rdma_vport_auxiliary_id_table[] = { + {.name = "idpf.8086.rdma.vdev", }, + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, ig3rdma_vport_auxiliary_id_table); + +struct idc_rdma_vport_auxiliary_drv ig3rdma_vport_auxiliary_drv = { + .adrv = { + .name = "vdev", + .id_table = ig3rdma_vport_auxiliary_id_table, + .probe = ig3rdma_vport_probe, + .remove = ig3rdma_vport_remove, + }, + .event_handler = ig3rdma_idc_vport_event_handler, +}; diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c index e9524de1c10f..4b07b0719557 100644 --- a/drivers/infiniband/hw/irdma/main.c +++ b/drivers/infiniband/hw/irdma/main.c @@ -129,6 +129,17 @@ static int __init irdma_init_module(void) return ret; } + + ret = auxiliary_driver_register(&ig3rdma_vport_auxiliary_drv.adrv); + if (ret) { + auxiliary_driver_unregister(&ig3rdma_core_auxiliary_drv.adrv); + auxiliary_driver_unregister(&icrdma_core_auxiliary_drv.adrv); + auxiliary_driver_unregister(&i40iw_auxiliary_drv); + pr_err("Failed ig3rdma vport auxiliary_driver_register() ret=%d\n", + ret); + + return ret; + } irdma_register_notifiers(); return 0; @@ -168,6 +179,7 @@ static void __exit irdma_exit_module(void) auxiliary_driver_unregister(&icrdma_core_auxiliary_drv.adrv); auxiliary_driver_unregister(&i40iw_auxiliary_drv); auxiliary_driver_unregister(&ig3rdma_core_auxiliary_drv.adrv); + auxiliary_driver_unregister(&ig3rdma_vport_auxiliary_drv.adrv); } module_init(irdma_init_module); diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index 17169338045a..1dab2ffba5e5 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -56,6 +56,7 @@ extern struct auxiliary_driver i40iw_auxiliary_drv; extern struct idc_rdma_core_auxiliary_drv ig3rdma_core_auxiliary_drv; +extern struct idc_rdma_vport_auxiliary_drv ig3rdma_vport_auxiliary_drv; extern struct idc_rdma_core_auxiliary_drv icrdma_core_auxiliary_drv; #define IRDMA_FW_VER_DEFAULT 2 @@ -353,12 +354,14 @@ struct irdma_device { u32 rcv_wnd; u16 mac_ip_table_idx; u16 vsi_num; + u16 vport_id; u8 rcv_wscale; u8 iw_status; bool roce_mode:1; bool roce_dcqcn_en:1; bool dcb_vlan_mode:1; bool iw_ooo:1; + bool is_vport:1; enum init_completion_state init_state; wait_queue_head_t suspend_wq; diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 569eb5c5f78e..d1bce0b155f1 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -292,6 +292,10 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx, ucontext->iwdev = iwdev; ucontext->abi_ver = req.userspace_ver; + 
if (!(req.comp_mask & IRDMA_SUPPORT_WQE_FORMAT_V2) && + uk_attrs->hw_rev >= IRDMA_GEN_3) + return -EOPNOTSUPP; + if (req.comp_mask & IRDMA_ALLOC_UCTX_USE_RAW_ATTR) ucontext->use_raw_attrs = true; @@ -4882,6 +4886,10 @@ void irdma_ib_dealloc_device(struct ib_device *ibdev) struct irdma_device *iwdev = to_iwdev(ibdev); irdma_rt_deinit_hw(iwdev); - irdma_ctrl_deinit_hw(iwdev->rf); - kfree(iwdev->rf); + if (!iwdev->is_vport) { + irdma_ctrl_deinit_hw(iwdev->rf); + if (iwdev->rf->vchnl_wq) + destroy_workqueue(iwdev->rf->vchnl_wq); + kfree(iwdev->rf); + } } diff --git a/include/uapi/rdma/irdma-abi.h b/include/uapi/rdma/irdma-abi.h index bb18f15489e3..4e42054cca33 100644 --- a/include/uapi/rdma/irdma-abi.h +++ b/include/uapi/rdma/irdma-abi.h @@ -25,6 +25,7 @@ enum irdma_memreg_type { enum { IRDMA_ALLOC_UCTX_USE_RAW_ATTR = 1 << 0, IRDMA_ALLOC_UCTX_MIN_HW_WQ_SIZE = 1 << 1, + IRDMA_SUPPORT_WQE_FORMAT_V2 = 1 << 3, }; struct irdma_alloc_ucontext_req { From patchwork Sat Aug 24 03:19:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776206 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 144CD770E4; Sat, 24 Aug 2024 03:20:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469654; cv=none; b=IQhn+hZKZBkRdzMRs77F7n2TUedVvWeKnLF4kcXzUs9XgMw03TOiH4xEhrdRAJzIrDvTAiiuzTXjxIq9hKPafLqAHj819xolquDh0H3lu+gU6nRcQNJz6y1eJy7dAuXob0K9gyWD1Pr1ERO23PHmeJU5WifD4x3w6KCLsgi/QoE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469654; c=relaxed/simple; bh=R/oiJewUhwL8HA44QlFd/COJ/0OmEU9yyepfw2SG1Ug=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=tktnQXeFhD824kNbE0LmQFuYdd671o2nDzClwYTPzRmCkYbRs15XhPGc/QE47gkEOcWzwM+wQsqkap8I0blC7Q9kKJn8qTSDv1JpVhu7lw2Ia2ABqU09kog2XeVEeffw493FpLI0ntRsh/eZVdRMMUcw0x5kZ9b7m4ow4Av/HXI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=mNBnf4ba; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="mNBnf4ba" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469653; x=1756005653; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=R/oiJewUhwL8HA44QlFd/COJ/0OmEU9yyepfw2SG1Ug=; b=mNBnf4bafdExxsD+p6S/lnRwpN/IapPGg0sZ/dmhmPkmcLWrhYg1N56D Nm4L0qcpdLfS5ZqRBm0FUgdJotM41BDZJoG4JtXKhkADEtFhdL6BkYR4/ iGjswxNNfKqJ/bMqHlpnjbne+RoMpsa0RGFdD+A98sa+KWW+szmMLy0YT +OhetmN9SaWMf7sLrVgLeJYwvSI6zXeDsIRSenh+iXxTtH5245QrkJ/U7 ohr4erRHHqXgtICHm35aa7O+c1h0TYG3GMf/sKcAqaoiZWZJWD+fVYNC7 faojV9Dk7E1S46nsp00PLPk+JhOhVev6uhZPn+1bIELjhQtR5twB/W/Rb g==; X-CSE-ConnectionGUID: 7XhPuZJBTwSpu57xkoc/1w== X-CSE-MsgGUID: T2DqIM+WTPO3OlHcqNHQ7A== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; 
a="13187816" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187816" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:48 -0700 X-CSE-ConnectionGUID: WxeUEmgATsS8UDis4aPelA== X-CSE-MsgGUID: AmXrILgySWaFoTZFpKLy7A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492125" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:47 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz Saleem , Tatyana Nikolova Subject: [RFC v2 17/25] RDMA/irdma: Add GEN3 virtual QP1 support Date: Fri, 23 Aug 2024 22:19:16 -0500 Message-Id: <20240824031924.421-18-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Shiraz Saleem Add a new RDMA virtual channel op during QP1 creation that allow the Control Plane (CP) to virtualize a regular QP as QP1 on non-default RDMA capable vPorts. Additionally, the CP will return the Qsets to use on the ib_device of the vPort. Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 10 ++- drivers/infiniband/hw/irdma/main.h | 1 + drivers/infiniband/hw/irdma/utils.c | 30 ++++++++- drivers/infiniband/hw/irdma/verbs.c | 84 ++++++++++++++++++++------ drivers/infiniband/hw/irdma/virtchnl.c | 52 ++++++++++++++++ drivers/infiniband/hw/irdma/virtchnl.h | 19 ++++++ 6 files changed, 174 insertions(+), 22 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 88eb7a088ee7..4f05d0e57114 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -74,6 +74,14 @@ static void irdma_set_qos_info(struct irdma_sc_vsi *vsi, { u8 i; + if (vsi->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + vsi->qos[i].qs_handle = vsi->dev->qos[i].qs_handle; + vsi->qos[i].valid = true; + } + + return; + } vsi->qos_rel_bw = l2p->vsi_rel_bw; vsi->qos_prio_type = l2p->vsi_prio_type; vsi->dscp_mode = l2p->dscp_mode; @@ -1873,7 +1881,7 @@ void irdma_sc_vsi_init(struct irdma_sc_vsi *vsi, mutex_init(&vsi->qos[i].qos_mutex); INIT_LIST_HEAD(&vsi->qos[i].qplist); } - if (vsi->register_qset) { + if (vsi->dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_2) { vsi->dev->ws_add = irdma_ws_add; vsi->dev->ws_remove = irdma_ws_remove; vsi->dev->ws_reset = irdma_ws_reset; diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index 1dab2ffba5e5..f0196aafe59b 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -260,6 +260,7 @@ struct irdma_pci_f { bool reset:1; bool rsrc_created:1; bool msix_shared:1; + bool hwqp1_rsvd:1; u8 rsrc_profile; u8 *hmc_info_mem; u8 *mem_rsrc; diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c index e940d32c9dbb..894ced3a8989 100644 --- a/drivers/infiniband/hw/irdma/utils.c +++ b/drivers/infiniband/hw/irdma/utils.c @@ -1184,6 +1184,26 @@ static void irdma_dealloc_push_page(struct irdma_pci_f *rf, irdma_put_cqp_request(&rf->cqp, cqp_request); } +static void 
irdma_free_gsi_qp_rsrc(struct irdma_qp *iwqp, u32 qp_num) +{ + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_pci_f *rf = iwdev->rf; + unsigned long flags; + + if (rf->sc_dev.hw_attrs.uk_attrs.hw_rev < IRDMA_GEN_3) + return; + + irdma_vchnl_req_del_vport(&rf->sc_dev, iwdev->vport_id, qp_num); + + if (qp_num == 1) { + spin_lock_irqsave(&rf->rsrc_lock, flags); + rf->hwqp1_rsvd = false; + spin_unlock_irqrestore(&rf->rsrc_lock, flags); + } else if (qp_num > 2) { + irdma_free_rsrc(rf, rf->allocated_qps, qp_num); + } +} + /** * irdma_free_qp_rsrc - free up memory resources for qp * @iwqp: qp ptr (user or kernel) @@ -1192,7 +1212,7 @@ void irdma_free_qp_rsrc(struct irdma_qp *iwqp) { struct irdma_device *iwdev = iwqp->iwdev; struct irdma_pci_f *rf = iwdev->rf; - u32 qp_num = iwqp->ibqp.qp_num; + u32 qp_num = iwqp->sc_qp.qp_uk.qp_id; irdma_ieq_cleanup_qp(iwdev->vsi.ieq, &iwqp->sc_qp); irdma_dealloc_push_page(rf, &iwqp->sc_qp); @@ -1202,8 +1222,12 @@ void irdma_free_qp_rsrc(struct irdma_qp *iwqp) iwqp->sc_qp.user_pri); } - if (qp_num > 2) - irdma_free_rsrc(rf, rf->allocated_qps, qp_num); + if (iwqp->ibqp.qp_type == IB_QPT_GSI) { + irdma_free_gsi_qp_rsrc(iwqp, qp_num); + } else { + if (qp_num > 2) + irdma_free_rsrc(rf, rf->allocated_qps, qp_num); + } dma_free_coherent(rf->sc_dev.hw->device, iwqp->q2_ctx_mem.size, iwqp->q2_ctx_mem.va, iwqp->q2_ctx_mem.pa); iwqp->q2_ctx_mem.va = NULL; diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index d1bce0b155f1..149adf0dad56 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -545,6 +545,9 @@ static int irdma_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp); irdma_remove_push_mmap_entries(iwqp); + + if (iwqp->sc_qp.qp_uk.qp_id == 1) + iwdev->rf->hwqp1_rsvd = false; irdma_free_qp_rsrc(iwqp); return 0; @@ -723,6 +726,7 @@ static int irdma_setup_kmode_qp(struct irdma_device *iwdev, info->rq_pa + (ukinfo->rq_depth * IRDMA_QP_WQE_MIN_SIZE); ukinfo->sq_size = ukinfo->sq_depth >> ukinfo->sq_shift; ukinfo->rq_size = ukinfo->rq_depth >> ukinfo->rq_shift; + ukinfo->qp_id = info->qp_uk_init_info.qp_id; iwqp->max_send_wr = (ukinfo->sq_depth - IRDMA_SQ_RSVD) >> ukinfo->sq_shift; iwqp->max_recv_wr = (ukinfo->rq_depth - IRDMA_RQ_RSVD) >> ukinfo->rq_shift; @@ -779,6 +783,8 @@ static void irdma_roce_fill_and_set_qpctx_info(struct irdma_qp *iwqp, roce_info = &iwqp->roce_info; ether_addr_copy(roce_info->mac_addr, iwdev->netdev->dev_addr); + if (iwqp->ibqp.qp_type == IB_QPT_GSI && iwqp->ibqp.qp_num != 1) + roce_info->is_qp1 = true; roce_info->rd_en = true; roce_info->wr_rdresp_en = true; roce_info->bind_en = true; @@ -868,6 +874,47 @@ static void irdma_flush_worker(struct work_struct *work) irdma_generate_flush_completions(iwqp); } +static int irdma_setup_gsi_qp_rsrc(struct irdma_qp *iwqp, u32 *qp_num) +{ + struct irdma_device *iwdev = iwqp->iwdev; + struct irdma_pci_f *rf = iwdev->rf; + unsigned long flags; + int ret; + + if (rf->rdma_ver <= IRDMA_GEN_2) { + *qp_num = 1; + return 0; + } + + spin_lock_irqsave(&rf->rsrc_lock, flags); + if (!rf->hwqp1_rsvd) { + *qp_num = 1; + rf->hwqp1_rsvd = true; + spin_unlock_irqrestore(&rf->rsrc_lock, flags); + } else { + spin_unlock_irqrestore(&rf->rsrc_lock, flags); + ret = irdma_alloc_rsrc(rf, rf->allocated_qps, rf->max_qp, + qp_num, &rf->next_qp); + if (ret) + return ret; + } + + ret = irdma_vchnl_req_add_vport(&rf->sc_dev, iwdev->vport_id, *qp_num, + (&iwdev->vsi)->qos); + if (ret) { + if 
(*qp_num != 1) { + irdma_free_rsrc(rf, rf->allocated_qps, *qp_num); + } else { + spin_lock_irqsave(&rf->rsrc_lock, flags); + rf->hwqp1_rsvd = false; + spin_unlock_irqrestore(&rf->rsrc_lock, flags); + } + return ret; + } + + return 0; +} + /** * irdma_create_qp - create qp * @ibqp: ptr of qp @@ -929,16 +976,20 @@ static int irdma_create_qp(struct ib_qp *ibqp, init_info.host_ctx = (__le64 *)(init_info.q2 + IRDMA_Q2_BUF_SIZE); init_info.host_ctx_pa = init_info.q2_pa + IRDMA_Q2_BUF_SIZE; - if (init_attr->qp_type == IB_QPT_GSI) - qp_num = 1; - else + if (init_attr->qp_type == IB_QPT_GSI) { + err_code = irdma_setup_gsi_qp_rsrc(iwqp, &qp_num); + if (err_code) + goto error; + iwqp->ibqp.qp_num = 1; + } else { err_code = irdma_alloc_rsrc(rf, rf->allocated_qps, rf->max_qp, &qp_num, &rf->next_qp); - if (err_code) - goto error; + if (err_code) + goto error; + iwqp->ibqp.qp_num = qp_num; + } iwqp->iwpd = iwpd; - iwqp->ibqp.qp_num = qp_num; qp = &iwqp->sc_qp; iwqp->iwscq = to_iwcq(init_attr->send_cq); iwqp->iwrcq = to_iwcq(init_attr->recv_cq); @@ -998,10 +1049,17 @@ static int irdma_create_qp(struct ib_qp *ibqp, ctx_info->send_cq_num = iwqp->iwscq->sc_cq.cq_uk.cq_id; ctx_info->rcv_cq_num = iwqp->iwrcq->sc_cq.cq_uk.cq_id; - if (rdma_protocol_roce(&iwdev->ibdev, 1)) + if (rdma_protocol_roce(&iwdev->ibdev, 1)) { + if (dev->ws_add(&iwdev->vsi, 0)) { + irdma_cqp_qp_destroy_cmd(&rf->sc_dev, &iwqp->sc_qp); + err_code = -EINVAL; + goto error; + } + irdma_qp_add_qos(&iwqp->sc_qp); irdma_roce_fill_and_set_qpctx_info(iwqp, ctx_info); - else + } else { irdma_iw_fill_and_set_qpctx_info(iwqp, ctx_info); + } err_code = irdma_cqp_create_qp_cmd(iwqp); if (err_code) @@ -1013,16 +1071,6 @@ static int irdma_create_qp(struct ib_qp *ibqp, iwqp->sig_all = init_attr->sq_sig_type == IB_SIGNAL_ALL_WR; rf->qp_table[qp_num] = iwqp; - if (rdma_protocol_roce(&iwdev->ibdev, 1)) { - if (dev->ws_add(&iwdev->vsi, 0)) { - irdma_cqp_qp_destroy_cmd(&rf->sc_dev, &iwqp->sc_qp); - err_code = -EINVAL; - goto error; - } - - irdma_qp_add_qos(&iwqp->sc_qp); - } - if (udata) { /* GEN_1 legacy support with libi40iw does not have expanded uresp struct */ if (udata->outlen < sizeof(uresp)) { diff --git a/drivers/infiniband/hw/irdma/virtchnl.c b/drivers/infiniband/hw/irdma/virtchnl.c index fc669b5a6b77..9f39cd69d85d 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.c +++ b/drivers/infiniband/hw/irdma/virtchnl.c @@ -110,6 +110,8 @@ static int irdma_vchnl_req_verify_resp(struct irdma_vchnl_req *vchnl_req, case IRDMA_VCHNL_OP_GET_REG_LAYOUT: case IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP: case IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP: + case IRDMA_VCHNL_OP_ADD_VPORT: + case IRDMA_VCHNL_OP_DEL_VPORT: break; default: return -EOPNOTSUPP; @@ -315,6 +317,56 @@ int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev) return 0; } +int irdma_vchnl_req_add_vport(struct irdma_sc_dev *dev, u16 vport_id, + u32 qp1_id, struct irdma_qos *qos) +{ + struct irdma_vchnl_resp_vport_info resp_vport = { 0 }; + struct irdma_vchnl_req_vport_info req_vport = { 0 }; + struct irdma_vchnl_req_init_info info = { 0 }; + int ret, i; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_ADD_VPORT; + info.op_ver = IRDMA_VCHNL_OP_ADD_VPORT_V0; + req_vport.vport_id = vport_id; + req_vport.qp1_id = qp1_id; + info.req_parm_len = sizeof(req_vport); + info.req_parm = &req_vport; + info.resp_parm = &resp_vport; + info.resp_parm_len = sizeof(resp_vport); + + ret = irdma_vchnl_req_send_sync(dev, &info); + if (ret) + return ret; + + for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) { + 
qos[i].qs_handle = resp_vport.qs_handle[i]; + qos[i].valid = true; + } + + return 0; +} + +int irdma_vchnl_req_del_vport(struct irdma_sc_dev *dev, u16 vport_id, u32 qp1_id) +{ + struct irdma_vchnl_req_init_info info = { 0 }; + struct irdma_vchnl_req_vport_info req_vport = { 0 }; + + if (!dev->vchnl_up) + return -EBUSY; + + info.op_code = IRDMA_VCHNL_OP_DEL_VPORT; + info.op_ver = IRDMA_VCHNL_OP_DEL_VPORT_V0; + req_vport.vport_id = vport_id; + req_vport.qp1_id = qp1_id; + info.req_parm_len = sizeof(req_vport); + info.req_parm = &req_vport; + + return irdma_vchnl_req_send_sync(dev, &info); +} + /** * irdma_vchnl_req_aeq_vec_map - Map AEQ to vector on this function * @dev: RDMA device pointer diff --git a/drivers/infiniband/hw/irdma/virtchnl.h b/drivers/infiniband/hw/irdma/virtchnl.h index 3af725587754..23e66bc2aa44 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.h +++ b/drivers/infiniband/hw/irdma/virtchnl.h @@ -17,6 +17,8 @@ #define IRDMA_VCHNL_OP_GET_REG_LAYOUT_V0 0 #define IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP_V0 0 #define IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP_V0 0 +#define IRDMA_VCHNL_OP_ADD_VPORT_V0 0 +#define IRDMA_VCHNL_OP_DEL_VPORT_V0 0 #define IRDMA_VCHNL_OP_GET_RDMA_CAPS_V0 0 #define IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE 1 @@ -57,6 +59,8 @@ enum irdma_vchnl_ops { IRDMA_VCHNL_OP_GET_RDMA_CAPS = 13, IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP = 14, IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP = 15, + IRDMA_VCHNL_OP_ADD_VPORT = 16, + IRDMA_VCHNL_OP_DEL_VPORT = 17, }; struct irdma_vchnl_req_hmc_info { @@ -81,6 +85,15 @@ struct irdma_vchnl_qvlist_info { struct irdma_vchnl_qv_info qv_info[]; }; +struct irdma_vchnl_req_vport_info { + u16 vport_id; + u32 qp1_id; +}; + +struct irdma_vchnl_resp_vport_info { + u16 qs_handle[IRDMA_MAX_USER_PRIORITY]; +}; + struct irdma_vchnl_op_buf { u16 op_code; u16 op_ver; @@ -141,6 +154,8 @@ struct irdma_vchnl_req_init_info { u16 op_ver; } __packed; +struct irdma_qos; + int irdma_sc_vchnl_init(struct irdma_sc_dev *dev, struct irdma_vchnl_init_info *info); int irdma_vchnl_send_sync(struct irdma_sc_dev *dev, u8 *msg, u16 len, @@ -156,4 +171,8 @@ int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev); int irdma_vchnl_req_aeq_vec_map(struct irdma_sc_dev *dev, u32 v_idx); int irdma_vchnl_req_ceq_vec_map(struct irdma_sc_dev *dev, u16 ceq_id, u32 v_idx); +int irdma_vchnl_req_add_vport(struct irdma_sc_dev *dev, u16 vport_id, + u32 qp1_id, struct irdma_qos *qos); +int irdma_vchnl_req_del_vport(struct irdma_sc_dev *dev, u16 vport_id, + u32 qp1_id); #endif /* IRDMA_VIRTCHNL_H */ From patchwork Sat Aug 24 03:19:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776207 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3653578C75; Sat, 24 Aug 2024 03:20:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469655; cv=none; b=hhsYolpml7WiIGFM0bieBu2VChcKn9UTVu5+/j4MtdB1CLx2NmJaYS1jl/kPLEv1k1eaB1PtQ4FE9Vs1FSE20GbqYjLOzNiRxSpSpvgyrIssmggFzWKCHtlj0+44XhPFiQP6mcxODkKUSLX523C9LqDdI5GbcCBzWor9+FMT3p8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469655; c=relaxed/simple; bh=r7iA+dq7mAWZUbqTW3wbAN0SLoTvSmvLL/p02z8daT0=; 
From: Tatyana Nikolova
To: jgg@nvidia.com, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz Saleem , Tatyana Nikolova
Subject: [RFC v2 18/25] RDMA/irdma: Extend QP context programming for GEN3
Date: Fri, 23 Aug 2024 22:19:17 -0500
Message-Id: <20240824031924.421-19-tatyana.e.nikolova@intel.com>
In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com>
References: <20240824031924.421-1-tatyana.e.nikolova@intel.com>

From: Shiraz Saleem

Extend the QP context structure with support for new fields specific to GEN3 hardware capabilities.
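Among the new GEN3 context fields programmed in the diff below is an encoded inbound RDMA read (IRD) depth. As a rough, standalone illustration of that encoding rule (double the requested IRD, round up to a power of two, map 4..4096 to codes 0..10, and fall back to an IRD of 4 for anything unrepresentable), here is a small user-space sketch. It mirrors irdma_sc_get_encoded_ird_size_gen_3() from the patch but is not kernel code, and the helper names are made up.

#include <stdio.h>

/* Round x up to the next power of two (x >= 1). */
static unsigned int roundup_pow_of_two(unsigned int x)
{
	unsigned int r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

/* Mirror of the GEN3 IRD encoding: 4 -> 0, 8 -> 1, ..., 4096 -> 10.
 * Unrepresentable values fall back to code 0 (an IRD of 4), like the
 * default case of the kernel switch statement.
 */
static unsigned int encode_ird_gen3(unsigned int ird_size)
{
	unsigned int rounded = ird_size ? roundup_pow_of_two(2 * ird_size) : 4;
	unsigned int code = 0;

	if (rounded < 4 || rounded > 4096)
		return 0;

	while ((4u << code) < rounded)
		code++;
	return code;
}

int main(void)
{
	printf("ird=100  -> code %u\n", encode_ird_gen3(100));  /* 200 -> 256 -> 6 */
	printf("ird=0    -> code %u\n", encode_ird_gen3(0));    /* default 4 -> 0 */
	printf("ird=2048 -> code %u\n", encode_ird_gen3(2048)); /* 4096 -> 10 */
	return 0;
}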
Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 184 +++++++++++++++++++++++++++- drivers/infiniband/hw/irdma/defs.h | 24 +++- drivers/infiniband/hw/irdma/type.h | 4 + drivers/infiniband/hw/irdma/uda_d.h | 5 +- drivers/infiniband/hw/irdma/verbs.c | 5 + 5 files changed, 215 insertions(+), 7 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 4f05d0e57114..3205385088cc 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -637,13 +637,14 @@ static u8 irdma_sc_get_encoded_ird_size(u16 ird_size) } /** - * irdma_sc_qp_setctx_roce - set qp's context + * irdma_sc_qp_setctx_roce_gen_2 - set qp's context * @qp: sc qp * @qp_ctx: context ptr * @info: ctx info */ -void irdma_sc_qp_setctx_roce(struct irdma_sc_qp *qp, __le64 *qp_ctx, - struct irdma_qp_host_ctx_info *info) +static void irdma_sc_qp_setctx_roce_gen_2(struct irdma_sc_qp *qp, + __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info) { struct irdma_roce_offload_info *roce_info; struct irdma_udp_offload_info *udp; @@ -761,6 +762,183 @@ void irdma_sc_qp_setctx_roce(struct irdma_sc_qp *qp, __le64 *qp_ctx, 8, qp_ctx, IRDMA_QP_CTX_SIZE, false); } +/** + * irdma_sc_get_encoded_ird_size_gen_3 - get encoded IRD size for GEN 3 + * @ird_size: IRD size + * The ird from the connection is rounded to a supported HW setting and then encoded + * for ird_size field of qp_ctx. Consumers are expected to provide valid ird size based + * on hardware attributes. IRD size defaults to a value of 4 in case of invalid input. + */ +static u8 irdma_sc_get_encoded_ird_size_gen_3(u16 ird_size) +{ + switch (ird_size ? + roundup_pow_of_two(2 * ird_size) : 4) { + case 4096: + return IRDMA_IRD_HW_SIZE_4096_GEN3; + case 2048: + return IRDMA_IRD_HW_SIZE_2048_GEN3; + case 1024: + return IRDMA_IRD_HW_SIZE_1024_GEN3; + case 512: + return IRDMA_IRD_HW_SIZE_512_GEN3; + case 256: + return IRDMA_IRD_HW_SIZE_256_GEN3; + case 128: + return IRDMA_IRD_HW_SIZE_128_GEN3; + case 64: + return IRDMA_IRD_HW_SIZE_64_GEN3; + case 32: + return IRDMA_IRD_HW_SIZE_32_GEN3; + case 16: + return IRDMA_IRD_HW_SIZE_16_GEN3; + case 8: + return IRDMA_IRD_HW_SIZE_8_GEN3; + case 4: + default: + break; + } + + return IRDMA_IRD_HW_SIZE_4_GEN3; +} + +/** + * irdma_sc_qp_setctx_roce_gen_3 - set qp's context + * @qp: sc qp + * @qp_ctx: context ptr + * @info: ctx info + */ +static void irdma_sc_qp_setctx_roce_gen_3(struct irdma_sc_qp *qp, + __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info) +{ + struct irdma_roce_offload_info *roce_info = info->roce_info; + struct irdma_udp_offload_info *udp = info->udp_info; + u64 qw0, qw3, qw7 = 0, qw8 = 0; + u8 push_mode_en; + u32 push_idx; + + qp->user_pri = info->user_pri; + if (qp->push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX) { + push_mode_en = 0; + push_idx = 0; + } else { + push_mode_en = 1; + push_idx = qp->push_idx; + } + + qw0 = FIELD_PREP(IRDMAQPC_RQWQESIZE, qp->qp_uk.rq_wqe_size) | + FIELD_PREP(IRDMAQPC_RCVTPHEN, qp->rcv_tph_en) | + FIELD_PREP(IRDMAQPC_XMITTPHEN, qp->xmit_tph_en) | + FIELD_PREP(IRDMAQPC_RQTPHEN, qp->rq_tph_en) | + FIELD_PREP(IRDMAQPC_SQTPHEN, qp->sq_tph_en) | + FIELD_PREP(IRDMAQPC_PPIDX, push_idx) | + FIELD_PREP(IRDMAQPC_PMENA, push_mode_en) | + FIELD_PREP(IRDMAQPC_DC_TCP_EN, roce_info->dctcp_en) | + FIELD_PREP(IRDMAQPC_ISQP1, roce_info->is_qp1) | + FIELD_PREP(IRDMAQPC_ROCE_TVER, roce_info->roce_tver) | + FIELD_PREP(IRDMAQPC_IPV4, udp->ipv4) | + FIELD_PREP(IRDMAQPC_INSERTVLANTAG, udp->insert_vlan_tag); + 
set_64bit_val(qp_ctx, 0, qw0); + set_64bit_val(qp_ctx, 8, qp->sq_pa); + set_64bit_val(qp_ctx, 16, qp->rq_pa); + qw3 = FIELD_PREP(IRDMAQPC_RQSIZE, qp->hw_rq_size) | + FIELD_PREP(IRDMAQPC_SQSIZE, qp->hw_sq_size) | + FIELD_PREP(IRDMAQPC_TTL, udp->ttl) | + FIELD_PREP(IRDMAQPC_TOS, udp->tos) | + FIELD_PREP(IRDMAQPC_SRCPORTNUM, udp->src_port) | + FIELD_PREP(IRDMAQPC_DESTPORTNUM, udp->dst_port); + set_64bit_val(qp_ctx, 24, qw3); + set_64bit_val(qp_ctx, 32, + FIELD_PREP(IRDMAQPC_DESTIPADDR2, udp->dest_ip_addr[2]) | + FIELD_PREP(IRDMAQPC_DESTIPADDR3, udp->dest_ip_addr[3])); + set_64bit_val(qp_ctx, 40, + FIELD_PREP(IRDMAQPC_DESTIPADDR0, udp->dest_ip_addr[0]) | + FIELD_PREP(IRDMAQPC_DESTIPADDR1, udp->dest_ip_addr[1])); + set_64bit_val(qp_ctx, 48, + FIELD_PREP(IRDMAQPC_SNDMSS, udp->snd_mss) | + FIELD_PREP(IRDMAQPC_VLANTAG, udp->vlan_tag) | + FIELD_PREP(IRDMAQPC_ARPIDX, udp->arp_idx)); + qw7 = FIELD_PREP(IRDMAQPC_PKEY, roce_info->p_key) | + FIELD_PREP(IRDMAQPC_ACKCREDITS, roce_info->ack_credits) | + FIELD_PREP(IRDMAQPC_FLOWLABEL, udp->flow_label); + set_64bit_val(qp_ctx, 56, qw7); + qw8 = FIELD_PREP(IRDMAQPC_QKEY, roce_info->qkey) | + FIELD_PREP(IRDMAQPC_DESTQP, roce_info->dest_qp); + set_64bit_val(qp_ctx, 64, qw8); + set_64bit_val(qp_ctx, 80, + FIELD_PREP(IRDMAQPC_PSNNXT, udp->psn_nxt) | + FIELD_PREP(IRDMAQPC_LSN, udp->lsn)); + set_64bit_val(qp_ctx, 88, + FIELD_PREP(IRDMAQPC_EPSN, udp->epsn)); + set_64bit_val(qp_ctx, 96, + FIELD_PREP(IRDMAQPC_PSNMAX, udp->psn_max) | + FIELD_PREP(IRDMAQPC_PSNUNA, udp->psn_una)); + set_64bit_val(qp_ctx, 112, + FIELD_PREP(IRDMAQPC_CWNDROCE, udp->cwnd)); + set_64bit_val(qp_ctx, 128, + FIELD_PREP(IRDMAQPC_MINRNR_TIMER, udp->min_rnr_timer) | + FIELD_PREP(IRDMAQPC_RNRNAK_THRESH, udp->rnr_nak_thresh) | + FIELD_PREP(IRDMAQPC_REXMIT_THRESH, udp->rexmit_thresh) | + FIELD_PREP(IRDMAQPC_RNRNAK_TMR, udp->rnr_nak_tmr) | + FIELD_PREP(IRDMAQPC_RTOMIN, roce_info->rtomin)); + set_64bit_val(qp_ctx, 136, + FIELD_PREP(IRDMAQPC_TXCQNUM, info->send_cq_num) | + FIELD_PREP(IRDMAQPC_RXCQNUM, info->rcv_cq_num)); + set_64bit_val(qp_ctx, 152, + FIELD_PREP(IRDMAQPC_MACADDRESS, + ether_addr_to_u64(roce_info->mac_addr)) | + FIELD_PREP(IRDMAQPC_LOCALACKTIMEOUT, + roce_info->local_ack_timeout)); + set_64bit_val(qp_ctx, 160, + FIELD_PREP(IRDMAQPC_ORDSIZE_GEN3, roce_info->ord_size) | + FIELD_PREP(IRDMAQPC_IRDSIZE_GEN3, + irdma_sc_get_encoded_ird_size_gen_3(roce_info->ird_size)) | + FIELD_PREP(IRDMAQPC_WRRDRSPOK, roce_info->wr_rdresp_en) | + FIELD_PREP(IRDMAQPC_RDOK, roce_info->rd_en) | + FIELD_PREP(IRDMAQPC_USESTATSINSTANCE, + info->stats_idx_valid) | + FIELD_PREP(IRDMAQPC_BINDEN, roce_info->bind_en) | + FIELD_PREP(IRDMAQPC_FASTREGEN, roce_info->fast_reg_en) | + FIELD_PREP(IRDMAQPC_DCQCNENABLE, roce_info->dcqcn_en) | + FIELD_PREP(IRDMAQPC_RCVNOICRC, roce_info->rcv_no_icrc) | + FIELD_PREP(IRDMAQPC_FW_CC_ENABLE, + roce_info->fw_cc_enable) | + FIELD_PREP(IRDMAQPC_UDPRIVCQENABLE, + roce_info->udprivcq_en) | + FIELD_PREP(IRDMAQPC_PRIVEN, roce_info->priv_mode_en) | + FIELD_PREP(IRDMAQPC_TIMELYENABLE, roce_info->timely_en)); + set_64bit_val(qp_ctx, 168, + FIELD_PREP(IRDMAQPC_QPCOMPCTX, info->qp_compl_ctx)); + set_64bit_val(qp_ctx, 176, + FIELD_PREP(IRDMAQPC_SQTPHVAL, qp->sq_tph_val) | + FIELD_PREP(IRDMAQPC_RQTPHVAL, qp->rq_tph_val) | + FIELD_PREP(IRDMAQPC_QSHANDLE, qp->qs_handle)); + set_64bit_val(qp_ctx, 184, + FIELD_PREP(IRDMAQPC_LOCAL_IPADDR3, udp->local_ipaddr[3]) | + FIELD_PREP(IRDMAQPC_LOCAL_IPADDR2, udp->local_ipaddr[2])); + set_64bit_val(qp_ctx, 192, + FIELD_PREP(IRDMAQPC_LOCAL_IPADDR1, 
udp->local_ipaddr[1]) | + FIELD_PREP(IRDMAQPC_LOCAL_IPADDR0, udp->local_ipaddr[0])); + set_64bit_val(qp_ctx, 200, + FIELD_PREP(IRDMAQPC_THIGH, roce_info->t_high) | + FIELD_PREP(IRDMAQPC_TLOW, roce_info->t_low)); + set_64bit_val(qp_ctx, 208, roce_info->pd_id | + FIELD_PREP(IRDMAQPC_STAT_INDEX_GEN3, info->stats_idx) | + FIELD_PREP(IRDMAQPC_PKT_LIMIT, qp->pkt_limit)); + + print_hex_dump_debug("WQE: QP_HOST ROCE CTX WQE", DUMP_PREFIX_OFFSET, + 16, 8, qp_ctx, IRDMA_QP_CTX_SIZE, false); +} + +void irdma_sc_qp_setctx_roce(struct irdma_sc_qp *qp, __le64 *qp_ctx, + struct irdma_qp_host_ctx_info *info) +{ + if (qp->dev->hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_2) + irdma_sc_qp_setctx_roce_gen_2(qp, qp_ctx, info); + else + irdma_sc_qp_setctx_roce_gen_3(qp, qp_ctx, info); +} + /* irdma_sc_alloc_local_mac_entry - allocate a mac entry * @cqp: struct for cqp hw * @scratch: u64 saved to be used during cqp completion diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 492529ada042..b5484906ca9e 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -14,6 +14,18 @@ #define IRDMA_PE_DB_SIZE_4M 1 #define IRDMA_PE_DB_SIZE_8M 2 +#define IRDMA_IRD_HW_SIZE_4_GEN3 0 +#define IRDMA_IRD_HW_SIZE_8_GEN3 1 +#define IRDMA_IRD_HW_SIZE_16_GEN3 2 +#define IRDMA_IRD_HW_SIZE_32_GEN3 3 +#define IRDMA_IRD_HW_SIZE_64_GEN3 4 +#define IRDMA_IRD_HW_SIZE_128_GEN3 5 +#define IRDMA_IRD_HW_SIZE_256_GEN3 6 +#define IRDMA_IRD_HW_SIZE_512_GEN3 7 +#define IRDMA_IRD_HW_SIZE_1024_GEN3 8 +#define IRDMA_IRD_HW_SIZE_2048_GEN3 9 +#define IRDMA_IRD_HW_SIZE_4096_GEN3 10 + #define IRDMA_IRD_HW_SIZE_4 0 #define IRDMA_IRD_HW_SIZE_16 1 #define IRDMA_IRD_HW_SIZE_64 2 @@ -843,7 +855,8 @@ enum irdma_cqp_op_type { #define IRDMAQPC_CWNDROCE GENMASK_ULL(55, 32) #define IRDMAQPC_SNDWL1 GENMASK_ULL(31, 0) #define IRDMAQPC_SNDWL2 GENMASK_ULL(63, 32) -#define IRDMAQPC_ERR_RQ_IDX GENMASK_ULL(45, 32) +#define IRDMAQPC_MINRNR_TIMER GENMASK_ULL(4, 0) +#define IRDMAQPC_ERR_RQ_IDX GENMASK_ULL(46, 32) #define IRDMAQPC_RTOMIN GENMASK_ULL(63, 57) #define IRDMAQPC_MAXSNDWND GENMASK_ULL(31, 0) #define IRDMAQPC_REXMIT_THRESH GENMASK_ULL(53, 48) @@ -856,8 +869,17 @@ enum irdma_cqp_op_type { #define IRDMAQPC_MACADDRESS GENMASK_ULL(63, 16) #define IRDMAQPC_ORDSIZE GENMASK_ULL(7, 0) +#define IRDMAQPC_LOCALACKTIMEOUT GENMASK_ULL(12, 8) +#define IRDMAQPC_RNRNAK_TMR GENMASK_ULL(4, 0) +#define IRDMAQPC_ORDSIZE_GEN3 GENMASK_ULL(10, 0) +#define IRDMAQPC_REMOTE_ATOMIC_EN BIT_ULL(18) +#define IRDMAQPC_STAT_INDEX_GEN3 GENMASK_ULL(47, 32) +#define IRDMAQPC_PKT_LIMIT GENMASK_ULL(55, 48) + #define IRDMAQPC_IRDSIZE GENMASK_ULL(18, 16) +#define IRDMAQPC_IRDSIZE_GEN3 GENMASK_ULL(17, 14) + #define IRDMAQPC_UDPRIVCQENABLE BIT_ULL(19) #define IRDMAQPC_WRRDRSPOK BIT_ULL(20) #define IRDMAQPC_RDOK BIT_ULL(21) diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 17fc72636bb7..243210466d88 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -574,6 +574,7 @@ struct irdma_sc_qp { bool flush_rq:1; bool sq_flush_code:1; bool rq_flush_code:1; + u32 pkt_limit; enum irdma_flush_opcode flush_code; enum irdma_qp_event_type event_type; u8 term_flags; @@ -915,6 +916,8 @@ struct irdma_udp_offload_info { u32 cwnd; u8 rexmit_thresh; u8 rnr_nak_thresh; + u8 rnr_nak_tmr; + u8 min_rnr_timer; }; struct irdma_roce_offload_info { @@ -941,6 +944,7 @@ struct irdma_roce_offload_info { bool dctcp_en:1; bool fw_cc_enable:1; bool use_stats_inst:1; + u8 local_ack_timeout; u16 
t_high; u16 t_low; u8 last_byte_sent; diff --git a/drivers/infiniband/hw/irdma/uda_d.h b/drivers/infiniband/hw/irdma/uda_d.h index 5a9e6eabf032..4fb4daa20722 100644 --- a/drivers/infiniband/hw/irdma/uda_d.h +++ b/drivers/infiniband/hw/irdma/uda_d.h @@ -78,8 +78,7 @@ #define IRDMA_UDAQPC_IPID GENMASK_ULL(47, 32) #define IRDMA_UDAQPC_SNDMSS GENMASK_ULL(29, 16) #define IRDMA_UDAQPC_VLANTAG GENMASK_ULL(15, 0) - -#define IRDMA_UDA_CQPSQ_MAV_PDINDEXHI GENMASK_ULL(21, 20) +#define IRDMA_UDA_CQPSQ_MAV_PDINDEXHI GENMASK_ULL(27, 20) #define IRDMA_UDA_CQPSQ_MAV_PDINDEXLO GENMASK_ULL(63, 48) #define IRDMA_UDA_CQPSQ_MAV_SRCMACADDRINDEX GENMASK_ULL(29, 24) #define IRDMA_UDA_CQPSQ_MAV_ARPINDEX GENMASK_ULL(63, 48) @@ -94,7 +93,7 @@ #define IRDMA_UDA_CQPSQ_MAV_OPCODE GENMASK_ULL(37, 32) #define IRDMA_UDA_CQPSQ_MAV_DOLOOPBACKK BIT_ULL(62) #define IRDMA_UDA_CQPSQ_MAV_IPV4VALID BIT_ULL(59) -#define IRDMA_UDA_CQPSQ_MAV_AVIDX GENMASK_ULL(16, 0) +#define IRDMA_UDA_CQPSQ_MAV_AVIDX GENMASK_ULL(23, 0) #define IRDMA_UDA_CQPSQ_MAV_INSERTVLANTAG BIT_ULL(60) #define IRDMA_UDA_MGCTX_VFFLAG BIT_ULL(29) #define IRDMA_UDA_MGCTX_DESTPORT GENMASK_ULL(47, 32) diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 149adf0dad56..950d7570a5c6 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -1162,6 +1162,7 @@ static int irdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, attr->pkey_index = iwqp->roce_info.p_key; attr->retry_cnt = iwqp->udp_info.rexmit_thresh; attr->rnr_retry = iwqp->udp_info.rnr_nak_thresh; + attr->min_rnr_timer = iwqp->udp_info.min_rnr_timer; attr->max_rd_atomic = iwqp->roce_info.ord_size; attr->max_dest_rd_atomic = iwqp->roce_info.ird_size; } @@ -1294,6 +1295,10 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr, if (attr_mask & IB_QP_RNR_RETRY) udp_info->rnr_nak_thresh = attr->rnr_retry; + if (attr_mask & IB_QP_MIN_RNR_TIMER && + dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + udp_info->min_rnr_timer = attr->min_rnr_timer; + if (attr_mask & IB_QP_RETRY_CNT) udp_info->rexmit_thresh = attr->retry_cnt; From patchwork Sat Aug 24 03:19:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776208 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2C1BA7DA74; Sat, 24 Aug 2024 03:20:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469655; cv=none; b=sgdB3s+BUJqVTjg7EROVyom5WtpvO8UD7uuxi4F6mmvVYxEw2/kmQaWWIOVkqrV0/Slk98JmBc0SsPLpjJSl+8Yfv73cbJJrkyw5phwAJX2DUl2p2eVBPCBwz0SKGI56xy1zRKZI6otma5o4yBifo2CX/dBcON05E5eLKTYbKZ8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469655; c=relaxed/simple; bh=sAC3wKZvcX/7SmHj6X/vFBbBlJmpi65c71bKjUJIXys=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=smKEJxfcS9eT02A85zqg2YgyLIvaleVkWIKfmaMsEzidAFuEYh4A/tm0wjtqtg0IhwhqhX8KuoO734Mt8PdLni7QjSTnVToPwARC1Rp7pM4pTolJLagdOZs+s8yUL5CyRRY6aKXKsKS9W5rqjq6zqxFotChEp62MvSxUw+MnFik= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) 
header.d=intel.com header.i=@intel.com header.b=AMjqhJnT; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="AMjqhJnT" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469654; x=1756005654; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sAC3wKZvcX/7SmHj6X/vFBbBlJmpi65c71bKjUJIXys=; b=AMjqhJnTfjNU5LiH7bDftcnrEk6+U8ek5STH/f0uOYhwYQ5obaSE7yJv 8HZVMdZpByYjYEbqMbfyqtTagJ/UlCfI7VYTX12Y3cuGRLywc7ZdhFrK+ d8/TdPFrI77TeaEt3pqELuUs+TrpqluliL7AMfKd+9O+2lj+2ap09MLab wVhb7cbEDHRxcmvoPz25U3ghKoXevDHT8zxCl9EuiYXMCumQBsdc0S8cQ 8eGqgqEZil3KJ5uYgbXyxYbTDe4qLymyvkeuwr0o2UTm8BXf/PKab9uqE Ar58XmQUK8IT+hFOyfV4LbkF+I52JSqguzZv0EkXo80hz8W+I8nO6Slaa A==; X-CSE-ConnectionGUID: RYtKCEH2RrixHBbTOxNLrg== X-CSE-MsgGUID: /yXgd5fpTWKB9OSNTnHPLQ== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187822" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187822" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:49 -0700 X-CSE-ConnectionGUID: YqAOFIT0RdC6nQE+ps2bYg== X-CSE-MsgGUID: oR/gG0G1T5KwjmQhgpI5PQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492132" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:48 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz Saleem , Tatyana Nikolova Subject: [RFC v2 19/25] RDMA/irdma: Support 64-byte CQEs and GEN3 CQE opcode decoding Date: Fri, 23 Aug 2024 22:19:18 -0500 Message-Id: <20240824031924.421-20-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Shiraz Saleem Introduce support for 64-byte CQEs in GEN3 devices. Additionally, implement GEN3-specific CQE opcode decoding. Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/verbs.c | 19 +++++++++++++++---- drivers/infiniband/hw/irdma/verbs.h | 13 +++++++++++++ 2 files changed, 28 insertions(+), 4 deletions(-) diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 950d7570a5c6..4c7fbdce4433 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -2115,6 +2115,7 @@ static int irdma_create_cq(struct ib_cq *ibcq, unsigned long flags; int err_code; int entries = attr->cqe; + bool cqe_64byte_ena; err_code = cq_validate_flags(attr->flags, dev->hw_attrs.uk_attrs.hw_rev); if (err_code) @@ -2138,6 +2139,9 @@ static int irdma_create_cq(struct ib_cq *ibcq, info.dev = dev; ukinfo->cq_size = max(entries, 4); ukinfo->cq_id = cq_num; + cqe_64byte_ena = dev->hw_attrs.uk_attrs.feature_flags & IRDMA_FEATURE_64_BYTE_CQE ? 
+ true : false; + ukinfo->avoid_mem_cflct = cqe_64byte_ena; iwcq->ibcq.cqe = info.cq_uk_init_info.cq_size; if (attr->comp_vector < rf->ceqs_count) info.ceq_id = attr->comp_vector; @@ -2213,11 +2217,14 @@ static int irdma_create_cq(struct ib_cq *ibcq, } entries++; - if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) + if (!cqe_64byte_ena && dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2) entries *= 2; ukinfo->cq_size = entries; - rsize = info.cq_uk_init_info.cq_size * sizeof(struct irdma_cqe); + if (cqe_64byte_ena) + rsize = info.cq_uk_init_info.cq_size * sizeof(struct irdma_extended_cqe); + else + rsize = info.cq_uk_init_info.cq_size * sizeof(struct irdma_cqe); iwcq->kmem.size = ALIGN(round_up(rsize, 256), 256); iwcq->kmem.va = dma_alloc_coherent(dev->hw->device, iwcq->kmem.size, @@ -3775,8 +3782,12 @@ static void irdma_process_cqe(struct ib_wc *entry, if (cq_poll_info->q_type == IRDMA_CQE_QTYPE_SQ) { set_ib_wc_op_sq(cq_poll_info, entry); } else { - set_ib_wc_op_rq(cq_poll_info, entry, - qp->qp_uk.qp_caps & IRDMA_SEND_WITH_IMM); + if (qp->dev->hw_attrs.uk_attrs.hw_rev <= IRDMA_GEN_2) + set_ib_wc_op_rq(cq_poll_info, entry, + qp->qp_uk.qp_caps & IRDMA_SEND_WITH_IMM ? + true : false); + else + set_ib_wc_op_rq_gen_3(cq_poll_info, entry); if (qp->qp_uk.qp_type != IRDMA_QP_TYPE_ROCE_UD && cq_poll_info->stag_invalid_set) { entry->ex.invalidate_rkey = cq_poll_info->inv_stag; diff --git a/drivers/infiniband/hw/irdma/verbs.h b/drivers/infiniband/hw/irdma/verbs.h index cfa140b36395..fcb163c45252 100644 --- a/drivers/infiniband/hw/irdma/verbs.h +++ b/drivers/infiniband/hw/irdma/verbs.h @@ -267,6 +267,19 @@ static inline void set_ib_wc_op_sq(struct irdma_cq_poll_info *cq_poll_info, } } +static inline void set_ib_wc_op_rq_gen_3(struct irdma_cq_poll_info *info, + struct ib_wc *entry) +{ + switch (info->op_type) { + case IRDMA_OP_TYPE_RDMA_WRITE: + case IRDMA_OP_TYPE_RDMA_WRITE_SOL: + entry->opcode = IB_WC_RECV_RDMA_WITH_IMM; + break; + default: + entry->opcode = IB_WC_RECV; + } +} + static inline void set_ib_wc_op_rq(struct irdma_cq_poll_info *cq_poll_info, struct ib_wc *entry, bool send_imm_support) { From patchwork Sat Aug 24 03:19:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776211 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1492E7580C; Sat, 24 Aug 2024 03:20:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469658; cv=none; b=ICbk6SHKlkvrK2sYh0wuOSjyqTWLZfLRCUKDo4EJgqU+jiXs4NYpb3hNDDC8Pn+rJrp6E5FtCASRlOFZLdLtNHJZ2s8SAy2NN+SuJDQCzBNNOopRavjD/VGLSCWCHC0il35qnSuyHpdUlZ23zMDb3MpoWjrz3jBg49nUNS+EcU4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469658; c=relaxed/simple; bh=vzji9XlcnDtiHeLmIqLTu/Bmre5lu8n88iFp3Vy56Ys=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=MpTmeMH/Au5m+/WitnrUUBjV04rDgEjgSxrlohMxvu8X9A4Qh0iLXV9xc+FYvvzMfq3RpJ/cqzjRnNnnUEtT+FHFhhFVDR6lBnjpDF5AdjYm3QAppNYx6VjpQV8OwzrDEM6+55PgXEK6LEzewXttIbNoTnGxLnXjitO3UvV+CU8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) 
header.d=intel.com header.i=@intel.com header.b=hduXTJSg; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="hduXTJSg" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469655; x=1756005655; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vzji9XlcnDtiHeLmIqLTu/Bmre5lu8n88iFp3Vy56Ys=; b=hduXTJSgQAsfkX2v2suLGSFBkK0iue4LBx46beZZp6+bZirw8iiLS7nX KlEUeqEOedMDju6ESnEMLahZJJs1w6mL4Crbqh8twr/NmfUzalLu1ci/V HwMFdDKfvIRkBBhd01ZCUShpQnZP/olPV+gwKVjjQAQNPVMPYGABpaDFv A5BCKMoixlB9FrG19Pidmla7lLOV5yWQ9Wis0g/An1PhCPCqXXFoHirjp 7ERxBj+LA8Po3ThG9O6lgYNvI7LklUSgmXoG7M7tIb0mvefeXDTCKNjjR WJ6Ip5SWfsZgR7vByOaPBv1OU1Y0rpgxuNBfm99DVr1tyiSRzM8Rt1SYe A==; X-CSE-ConnectionGUID: cqzBTtsKSVKfm010JB/e1w== X-CSE-MsgGUID: YuI+F9TaRyq+bWf9JisBFA== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187825" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187825" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:50 -0700 X-CSE-ConnectionGUID: uXoRmszGQ2SnucGzTv31iw== X-CSE-MsgGUID: zCCDL6yuQze2wPGMSELJSg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492135" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:49 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Faisal Latif , Tatyana Nikolova Subject: [RFC v2 20/25] RDMA/irdma: Add SRQ support Date: Fri, 23 Aug 2024 22:19:19 -0500 Message-Id: <20240824031924.421-21-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Faisal Latif Implement verb API and UAPI changes to support SRQ functionality in GEN3 devices. 
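To put the SRQ plumbing below in context, this is roughly what a kernel verbs consumer gains: one shared receive queue created against a PD and attached to many QPs, instead of posting receives per QP. A minimal, illustrative sketch using only the generic RDMA core API follows; the sizing values and the example_ helper name are arbitrary assumptions, not part of this patch.

#include <rdma/ib_verbs.h>

/* Create a basic SRQ that many QPs can share; a QP opts in by setting
 * qp_init_attr.srq to the returned SRQ before ib_create_qp().
 */
static struct ib_srq *example_create_shared_rq(struct ib_pd *pd)
{
	struct ib_srq_init_attr srq_attr = {
		.srq_type = IB_SRQT_BASIC,
		.attr = {
			.max_wr  = 256,	/* receive WRs shared by all attached QPs */
			.max_sge = 2,
		},
	};

	return ib_create_srq(pd, &srq_attr);
}

The low-watermark IB_EVENT_SRQ_LIMIT_REACHED notification is armed later with ib_modify_srq() and the IB_SRQ_LIMIT mask, which is presumably what the new IRDMA_AE_SRQ_LIMIT handling in hw.c below surfaces to consumers.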
Signed-off-by: Faisal Latif Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 237 +++++++++++++- drivers/infiniband/hw/irdma/defs.h | 36 ++- drivers/infiniband/hw/irdma/hw.c | 22 +- drivers/infiniband/hw/irdma/irdma.h | 1 + drivers/infiniband/hw/irdma/main.h | 12 +- drivers/infiniband/hw/irdma/type.h | 66 ++++ drivers/infiniband/hw/irdma/uk.c | 162 +++++++++- drivers/infiniband/hw/irdma/user.h | 42 ++- drivers/infiniband/hw/irdma/utils.c | 27 ++ drivers/infiniband/hw/irdma/verbs.c | 478 +++++++++++++++++++++++++++- drivers/infiniband/hw/irdma/verbs.h | 25 ++ include/uapi/rdma/irdma-abi.h | 15 +- 12 files changed, 1103 insertions(+), 20 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 3205385088cc..d7165bd7f142 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -412,7 +412,8 @@ int irdma_sc_qp_init(struct irdma_sc_qp *qp, struct irdma_qp_init_info *info) pble_obj_cnt = info->pd->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt; if ((info->virtual_map && info->sq_pa >= pble_obj_cnt) || - (info->virtual_map && info->rq_pa >= pble_obj_cnt)) + (!info->qp_uk_init_info.srq_uk && + info->virtual_map && info->rq_pa >= pble_obj_cnt)) return -EINVAL; qp->llp_stream_handle = (void *)(-1); @@ -446,6 +447,208 @@ int irdma_sc_qp_init(struct irdma_sc_qp *qp, struct irdma_qp_init_info *info) return 0; } +/** + * irdma_sc_srq_init - init sc_srq structure + * @srq: srq sc struct + * @info: parameters for srq init + */ +int irdma_sc_srq_init(struct irdma_sc_srq *srq, + struct irdma_srq_init_info *info) +{ + u32 srq_size_quanta; + int ret_code; + + ret_code = irdma_uk_srq_init(&srq->srq_uk, &info->srq_uk_init_info); + if (ret_code) + return ret_code; + + srq->dev = info->pd->dev; + srq->pd = info->pd; + srq->vsi = info->vsi; + srq->srq_pa = info->srq_pa; + srq->first_pm_pbl_idx = info->first_pm_pbl_idx; + srq->pasid = info->pasid; + srq->pasid_valid = info->pasid_valid; + srq->srq_limit = info->srq_limit; + srq->leaf_pbl_size = info->leaf_pbl_size; + srq->virtual_map = info->virtual_map; + srq->tph_en = info->tph_en; + srq->arm_limit_event = info->arm_limit_event; + srq->tph_val = info->tph_value; + srq->shadow_area_pa = info->shadow_area_pa; + + /* Smallest SRQ size is 256B i.e. 
8 quanta */ + srq_size_quanta = max((u32)IRDMA_SRQ_MIN_QUANTA, + srq->srq_uk.srq_size * + srq->srq_uk.wqe_size_multiplier); + srq->hw_srq_size = irdma_get_encoded_wqe_size(srq_size_quanta, + IRDMA_QUEUE_TYPE_SRQ); + + return 0; +} + +/** + * irdma_sc_srq_create - send srq create CQP WQE + * @srq: srq sc struct + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static int irdma_sc_srq_create(struct irdma_sc_srq *srq, u64 scratch, + bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + + cqp = srq->pd->dev->cqp; + if (srq->srq_uk.srq_id < cqp->dev->hw_attrs.min_hw_srq_id || + srq->srq_uk.srq_id > + (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_SRQ].max_cnt - 1)) + return -EINVAL; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return -ENOMEM; + + set_64bit_val(wqe, 0, + FIELD_PREP(IRDMA_CQPSQ_SRQ_SRQ_LIMIT, srq->srq_limit) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_RQSIZE, srq->hw_srq_size) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_RQ_WQE_SIZE, srq->srq_uk.wqe_size)); + set_64bit_val(wqe, 8, (uintptr_t)srq); + set_64bit_val(wqe, 16, + FIELD_PREP(IRDMA_CQPSQ_SRQ_PD_ID, srq->pd->pd_id)); + set_64bit_val(wqe, 32, + FIELD_PREP(IRDMA_CQPSQ_SRQ_PHYSICAL_BUFFER_ADDR, + srq->srq_pa >> + IRDMA_CQPSQ_SRQ_PHYSICAL_BUFFER_ADDR_S)); + set_64bit_val(wqe, 40, + FIELD_PREP(IRDMA_CQPSQ_SRQ_DB_SHADOW_ADDR, + srq->shadow_area_pa >> + IRDMA_CQPSQ_SRQ_DB_SHADOW_ADDR_S)); + set_64bit_val(wqe, 48, + FIELD_PREP(IRDMA_CQPSQ_SRQ_FIRST_PM_PBL_IDX, + srq->first_pm_pbl_idx)); + + hdr = srq->srq_uk.srq_id | + FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_CREATE_SRQ) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_LEAF_PBL_SIZE, srq->leaf_pbl_size) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_VIRTMAP, srq->virtual_map) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_ARM_LIMIT_EVENT, + srq->arm_limit_event) | + FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); + + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + print_hex_dump_debug("WQE: SRQ_CREATE WQE", DUMP_PREFIX_OFFSET, 16, 8, + wqe, IRDMA_CQP_WQE_SIZE * 8, false); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_srq_modify - send modify_srq CQP WQE + * @srq: srq sc struct + * @info: parameters for srq modification + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static int irdma_sc_srq_modify(struct irdma_sc_srq *srq, + struct irdma_modify_srq_info *info, u64 scratch, + bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + + cqp = srq->dev->cqp; + if (srq->srq_uk.srq_id < cqp->dev->hw_attrs.min_hw_srq_id || + srq->srq_uk.srq_id > + (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_SRQ].max_cnt - 1)) + return -EINVAL; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return -ENOMEM; + + set_64bit_val(wqe, 0, + FIELD_PREP(IRDMA_CQPSQ_SRQ_SRQ_LIMIT, info->srq_limit) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_RQSIZE, srq->hw_srq_size) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_RQ_WQE_SIZE, srq->srq_uk.wqe_size)); + set_64bit_val(wqe, 8, + FIELD_PREP(IRDMA_CQPSQ_SRQ_SRQCTX, srq->srq_uk.srq_id)); + set_64bit_val(wqe, 16, + FIELD_PREP(IRDMA_CQPSQ_SRQ_PD_ID, srq->pd->pd_id)); + set_64bit_val(wqe, 32, + FIELD_PREP(IRDMA_CQPSQ_SRQ_PHYSICAL_BUFFER_ADDR, + srq->srq_pa >> + IRDMA_CQPSQ_SRQ_PHYSICAL_BUFFER_ADDR_S)); + set_64bit_val(wqe, 40, + FIELD_PREP(IRDMA_CQPSQ_SRQ_DB_SHADOW_ADDR, + srq->shadow_area_pa >> + IRDMA_CQPSQ_SRQ_DB_SHADOW_ADDR_S)); + set_64bit_val(wqe, 48, + FIELD_PREP(IRDMA_CQPSQ_SRQ_FIRST_PM_PBL_IDX, + 
srq->first_pm_pbl_idx)); + + hdr = srq->srq_uk.srq_id | + FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_MODIFY_SRQ) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_LEAF_PBL_SIZE, srq->leaf_pbl_size) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_VIRTMAP, srq->virtual_map) | + FIELD_PREP(IRDMA_CQPSQ_SRQ_ARM_LIMIT_EVENT, + info->arm_limit_event) | + FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + print_hex_dump_debug("WQE: SRQ_MODIFY WQE", DUMP_PREFIX_OFFSET, 16, 8, + wqe, IRDMA_CQP_WQE_SIZE * 8, false); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + +/** + * irdma_sc_srq_destroy - send srq_destroy CQP WQE + * @srq: srq sc struct + * @scratch: u64 saved to be used during cqp completion + * @post_sq: flag for cqp db to ring + */ +static int irdma_sc_srq_destroy(struct irdma_sc_srq *srq, u64 scratch, + bool post_sq) +{ + struct irdma_sc_cqp *cqp; + __le64 *wqe; + u64 hdr; + + cqp = srq->dev->cqp; + + wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch); + if (!wqe) + return -ENOMEM; + + set_64bit_val(wqe, 8, (uintptr_t)srq); + + hdr = srq->srq_uk.srq_id | + FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_DESTROY_SRQ) | + FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); + dma_wmb(); /* make sure WQE is written before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + print_hex_dump_debug("WQE: SRQ_DESTROY WQE", DUMP_PREFIX_OFFSET, 16, + 8, wqe, IRDMA_CQP_WQE_SIZE * 8, false); + if (post_sq) + irdma_sc_cqp_post_sq(cqp); + + return 0; +} + /** * irdma_sc_qp_create - create qp * @qp: sc qp @@ -837,6 +1040,7 @@ static void irdma_sc_qp_setctx_roce_gen_3(struct irdma_sc_qp *qp, FIELD_PREP(IRDMAQPC_ISQP1, roce_info->is_qp1) | FIELD_PREP(IRDMAQPC_ROCE_TVER, roce_info->roce_tver) | FIELD_PREP(IRDMAQPC_IPV4, udp->ipv4) | + FIELD_PREP(IRDMAQPC_USE_SRQ, !qp->qp_uk.srq_uk ? 0 : 1) | FIELD_PREP(IRDMAQPC_INSERTVLANTAG, udp->insert_vlan_tag); set_64bit_val(qp_ctx, 0, qw0); set_64bit_val(qp_ctx, 8, qp->sq_pa); @@ -921,6 +1125,9 @@ static void irdma_sc_qp_setctx_roce_gen_3(struct irdma_sc_qp *qp, FIELD_PREP(IRDMAQPC_LOCAL_IPADDR0, udp->local_ipaddr[0])); set_64bit_val(qp_ctx, 200, FIELD_PREP(IRDMAQPC_THIGH, roce_info->t_high) | + FIELD_PREP(IRDMAQPC_SRQ_ID, + !qp->qp_uk.srq_uk ? + 0 : qp->qp_uk.srq_uk->srq_id) | FIELD_PREP(IRDMAQPC_TLOW, roce_info->t_low)); set_64bit_val(qp_ctx, 208, roce_info->pd_id | FIELD_PREP(IRDMAQPC_STAT_INDEX_GEN3, info->stats_idx) | @@ -2215,6 +2422,14 @@ u8 irdma_get_encoded_wqe_size(u32 wqsize, enum irdma_queue_type queue_type) { u8 encoded_size = 0; + if (queue_type == IRDMA_QUEUE_TYPE_SRQ) { + /* Smallest SRQ size is 256B (8 quanta) that gets + * encoded to 0. + */ + encoded_size = ilog2(wqsize) - 3; + + return encoded_size; + } /* cqp sq's hw coded value starts from 1 for size of 4 * while it starts from 0 for qp' wq's. 
*/ @@ -5464,13 +5679,12 @@ static int irdma_set_loc_hmc_rsrc_gen_3(struct irdma_sc_dev *dev, struct irdma_hmc_fpm_misc *hmc_fpm_misc; u32 xf_cnt, timer_cnt, pages_needed; struct irdma_hmc_info *hmc_info; - u32 ird, ord, min_ird; + u32 ird, ord; hmc_info = dev->hmc_info; hmc_fpm_misc = &dev->hmc_fpm_misc; ird = dev->hw_attrs.max_hw_ird; ord = dev->hw_attrs.max_hw_ord; - min_ird = IRDMA_MIN_IRD; hmc_info->hmc_obj[IRDMA_HMC_IW_HDR].cnt = qpwanted; hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt = qpwanted; @@ -6052,6 +6266,22 @@ static int irdma_exec_cqp_cmd(struct irdma_sc_dev *dev, &pcmdinfo->in.u.mc_modify.info, pcmdinfo->in.u.mc_modify.scratch); break; + case IRDMA_OP_SRQ_CREATE: + status = irdma_sc_srq_create(pcmdinfo->in.u.srq_create.srq, + pcmdinfo->in.u.srq_create.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_SRQ_MODIFY: + status = irdma_sc_srq_modify(pcmdinfo->in.u.srq_modify.srq, + &pcmdinfo->in.u.srq_modify.info, + pcmdinfo->in.u.srq_modify.scratch, + pcmdinfo->post_sq); + break; + case IRDMA_OP_SRQ_DESTROY: + status = irdma_sc_srq_destroy(pcmdinfo->in.u.srq_destroy.srq, + pcmdinfo->in.u.srq_destroy.scratch, + pcmdinfo->post_sq); + break; default: status = -EOPNOTSUPP; break; @@ -6209,6 +6439,7 @@ int irdma_sc_dev_init(enum irdma_vers ver, struct irdma_sc_dev *dev, dev->protocol_used = info->protocol_used; /* Setup the hardware limits, hmc may limit further */ dev->hw_attrs.min_hw_qp_id = IRDMA_MIN_IW_QP_ID; + dev->hw_attrs.min_hw_srq_id = IRDMA_MIN_IW_SRQ_ID; dev->hw_attrs.min_hw_aeq_size = IRDMA_MIN_AEQ_ENTRIES; dev->hw_attrs.max_hw_aeq_size = IRDMA_MAX_AEQ_ENTRIES; dev->hw_attrs.min_hw_ceq_size = IRDMA_MIN_CEQ_ENTRIES; diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index b5484906ca9e..8ead170a8930 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -140,7 +140,11 @@ enum irdma_protocol_used { #define IRDMA_QP_SW_MAX_RQ_QUANTA 32768 #define IRDMA_MAX_QP_WRS(max_quanta_per_wr) \ ((IRDMA_QP_SW_MAX_WQ_QUANTA - IRDMA_SQ_RSVD) / (max_quanta_per_wr)) +#define IRDMA_SRQ_MIN_QUANTA 8 #define IRDMA_SRQ_MAX_QUANTA 262144 +#define IRDMA_MAX_SRQ_WRS \ + ((IRDMA_SRQ_MAX_QUANTA - IRDMA_RQ_RSVD) / IRDMA_MAX_QUANTA_PER_WR) + #define IRDMAQP_TERM_SEND_TERM_AND_FIN 0 #define IRDMAQP_TERM_SEND_TERM_ONLY 1 #define IRDMAQP_TERM_SEND_FIN_ONLY 2 @@ -236,9 +240,12 @@ enum irdma_cqp_op_type { IRDMA_OP_ADD_LOCAL_MAC_ENTRY = 46, IRDMA_OP_DELETE_LOCAL_MAC_ENTRY = 47, IRDMA_OP_CQ_MODIFY = 48, + IRDMA_OP_SRQ_CREATE = 51, + IRDMA_OP_SRQ_MODIFY = 52, + IRDMA_OP_SRQ_DESTROY = 53, /* Must be last entry*/ - IRDMA_MAX_CQP_OPS = 49, + IRDMA_MAX_CQP_OPS = 54, }; /* CQP SQ WQES */ @@ -248,6 +255,9 @@ enum irdma_cqp_op_type { #define IRDMA_CQP_OP_CREATE_CQ 0x03 #define IRDMA_CQP_OP_MODIFY_CQ 0x04 #define IRDMA_CQP_OP_DESTROY_CQ 0x05 +#define IRDMA_CQP_OP_CREATE_SRQ 0x06 +#define IRDMA_CQP_OP_MODIFY_SRQ 0x07 +#define IRDMA_CQP_OP_DESTROY_SRQ 0x08 #define IRDMA_CQP_OP_ALLOC_STAG 0x09 #define IRDMA_CQP_OP_REG_MR 0x0a #define IRDMA_CQP_OP_QUERY_STAG 0x0b @@ -520,6 +530,7 @@ enum irdma_cqp_op_type { #define IRDMA_CQ_ERROR BIT_ULL(55) #define IRDMA_CQ_SQ BIT_ULL(62) +#define IRDMA_CQ_SRQ BIT_ULL(52) #define IRDMA_CQ_VALID BIT_ULL(63) #define IRDMA_CQ_IMMVALID BIT_ULL(62) #define IRDMA_CQ_UDSMACVALID BIT_ULL(61) @@ -631,6 +642,24 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_QP_DBSHADOWADDR IRDMA_CQPHC_QPCTX +#define IRDMA_CQPSQ_SRQ_RQSIZE GENMASK_ULL(3, 0) +#define IRDMA_CQPSQ_SRQ_RQ_WQE_SIZE GENMASK_ULL(5, 4) +#define IRDMA_CQPSQ_SRQ_SRQ_LIMIT 
GENMASK_ULL(43, 32) +#define IRDMA_CQPSQ_SRQ_SRQCTX GENMASK_ULL(63, 6) +#define IRDMA_CQPSQ_SRQ_PD_ID GENMASK_ULL(39, 16) +#define IRDMA_CQPSQ_SRQ_SRQ_ID GENMASK_ULL(15, 0) +#define IRDMA_CQPSQ_SRQ_OP GENMASK_ULL(37, 32) +#define IRDMA_CQPSQ_SRQ_LEAF_PBL_SIZE GENMASK_ULL(45, 44) +#define IRDMA_CQPSQ_SRQ_VIRTMAP BIT_ULL(47) +#define IRDMA_CQPSQ_SRQ_TPH_EN BIT_ULL(60) +#define IRDMA_CQPSQ_SRQ_ARM_LIMIT_EVENT BIT_ULL(61) +#define IRDMA_CQPSQ_SRQ_FIRST_PM_PBL_IDX GENMASK_ULL(27, 0) +#define IRDMA_CQPSQ_SRQ_TPH_VALUE GENMASK_ULL(7, 0) +#define IRDMA_CQPSQ_SRQ_PHYSICAL_BUFFER_ADDR_S 8 +#define IRDMA_CQPSQ_SRQ_PHYSICAL_BUFFER_ADDR GENMASK_ULL(63, 8) +#define IRDMA_CQPSQ_SRQ_DB_SHADOW_ADDR_S 6 +#define IRDMA_CQPSQ_SRQ_DB_SHADOW_ADDR GENMASK_ULL(63, 6) + #define IRDMA_CQPSQ_CQ_CQSIZE GENMASK_ULL(20, 0) #define IRDMA_CQPSQ_CQ_CQCTX GENMASK_ULL(62, 0) #define IRDMA_CQPSQ_CQ_SHADOW_READ_THRESHOLD GENMASK(17, 0) @@ -785,6 +814,11 @@ enum irdma_cqp_op_type { #define IRDMAQPC_INSERTL2TAG2 BIT_ULL(11) #define IRDMAQPC_LIMIT GENMASK_ULL(13, 12) +#define IRDMAQPC_USE_SRQ BIT_ULL(10) +#define IRDMAQPC_SRQ_ID GENMASK_ULL(15, 0) +#define IRDMAQPC_PASID GENMASK_ULL(19, 0) +#define IRDMAQPC_PASID_VALID BIT_ULL(11) + #define IRDMAQPC_ECN_EN BIT_ULL(14) #define IRDMAQPC_DROPOOOSEG BIT_ULL(15) #define IRDMAQPC_DUPACK_THRESH GENMASK_ULL(18, 16) diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c index f01ec21edd37..524fe5dba760 100644 --- a/drivers/infiniband/hw/irdma/hw.c +++ b/drivers/infiniband/hw/irdma/hw.c @@ -269,6 +269,7 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) struct irdma_sc_qp *qp = NULL; struct irdma_qp_host_ctx_info *ctx_info = NULL; struct irdma_device *iwdev = rf->iwdev; + struct irdma_sc_srq *srq; unsigned long flags; u32 aeqcnt = 0; @@ -319,7 +320,9 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) if (info->ae_id != IRDMA_AE_QP_SUSPEND_COMPLETE) iwqp->last_aeq = info->ae_id; spin_unlock_irqrestore(&iwqp->lock, flags); - ctx_info = &iwqp->ctx_info; + } else if (info->srq) { + if (info->ae_id != IRDMA_AE_SRQ_LIMIT) + continue; } else { if (info->ae_id != IRDMA_AE_CQ_OPERATION_ERROR && info->ae_id != IRDMA_AE_CQP_DEFERRED_COMPLETE) @@ -417,6 +420,12 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) } irdma_cq_rem_ref(&iwcq->ibcq); break; + case IRDMA_AE_SRQ_LIMIT: + srq = (struct irdma_sc_srq *)(uintptr_t)info->compl_ctx; + irdma_srq_event(srq); + break; + case IRDMA_AE_SRQ_CATASTROPHIC_ERROR: + break; case IRDMA_AE_CQP_DEFERRED_COMPLETE: /* Remove completed CQP requests from pending list * and notify about those CQP ops completion. 
@@ -1839,7 +1848,9 @@ static void irdma_get_used_rsrc(struct irdma_device *iwdev) iwdev->rf->used_qps = find_first_zero_bit(iwdev->rf->allocated_qps, iwdev->rf->max_qp); iwdev->rf->used_cqs = find_first_zero_bit(iwdev->rf->allocated_cqs, - iwdev->rf->max_cq); + iwdev->rf->max_cq); + iwdev->rf->used_srqs = find_first_zero_bit(iwdev->rf->allocated_srqs, + iwdev->rf->max_srq); iwdev->rf->used_mrs = find_first_zero_bit(iwdev->rf->allocated_mrs, iwdev->rf->max_mr); } @@ -2056,7 +2067,8 @@ static void irdma_set_hw_rsrc(struct irdma_pci_f *rf) rf->allocated_qps = (void *)(rf->mem_rsrc + (sizeof(struct irdma_arp_entry) * rf->arp_table_size)); rf->allocated_cqs = &rf->allocated_qps[BITS_TO_LONGS(rf->max_qp)]; - rf->allocated_mrs = &rf->allocated_cqs[BITS_TO_LONGS(rf->max_cq)]; + rf->allocated_srqs = &rf->allocated_cqs[BITS_TO_LONGS(rf->max_cq)]; + rf->allocated_mrs = &rf->allocated_srqs[BITS_TO_LONGS(rf->max_srq)]; rf->allocated_pds = &rf->allocated_mrs[BITS_TO_LONGS(rf->max_mr)]; rf->allocated_ahs = &rf->allocated_pds[BITS_TO_LONGS(rf->max_pd)]; rf->allocated_mcgs = &rf->allocated_ahs[BITS_TO_LONGS(rf->max_ah)]; @@ -2084,12 +2096,14 @@ static u32 irdma_calc_mem_rsrc_size(struct irdma_pci_f *rf) rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_qp); rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_mr); rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_cq); + rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_srq); rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_pd); rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->arp_table_size); rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_ah); rsrc_size += sizeof(unsigned long) * BITS_TO_LONGS(rf->max_mcg); rsrc_size += sizeof(struct irdma_qp **) * rf->max_qp; rsrc_size += sizeof(struct irdma_cq **) * rf->max_cq; + rsrc_size += sizeof(struct irdma_srq **) * rf->max_srq; return rsrc_size; } @@ -2117,6 +2131,7 @@ u32 irdma_initialize_hw_rsrc(struct irdma_pci_f *rf) rf->max_qp = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_QP].cnt; rf->max_mr = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_MR].cnt; rf->max_cq = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].cnt; + rf->max_srq = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_SRQ].cnt; rf->max_pd = rf->sc_dev.hw_attrs.max_hw_pds; rf->arp_table_size = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_ARP].cnt; rf->max_ah = rf->sc_dev.hmc_info->hmc_obj[IRDMA_HMC_IW_FSIAV].cnt; @@ -2136,6 +2151,7 @@ u32 irdma_initialize_hw_rsrc(struct irdma_pci_f *rf) set_bit(0, rf->allocated_mrs); set_bit(0, rf->allocated_qps); set_bit(0, rf->allocated_cqs); + set_bit(0, rf->allocated_srqs); set_bit(0, rf->allocated_pds); set_bit(0, rf->allocated_arps); set_bit(0, rf->allocated_ahs); diff --git a/drivers/infiniband/hw/irdma/irdma.h b/drivers/infiniband/hw/irdma/irdma.h index 0544cbad4a48..6af79bb45254 100644 --- a/drivers/infiniband/hw/irdma/irdma.h +++ b/drivers/infiniband/hw/irdma/irdma.h @@ -162,6 +162,7 @@ struct irdma_hw_attrs { u32 max_done_count; u32 max_sleep_count; u32 max_cqp_compl_wait_time_ms; + u32 min_hw_srq_id; u16 max_stat_inst; u16 max_stat_idx; }; diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h index f0196aafe59b..71352714c3bf 100644 --- a/drivers/infiniband/hw/irdma/main.h +++ b/drivers/infiniband/hw/irdma/main.h @@ -273,6 +273,8 @@ struct irdma_pci_f { u32 max_mr; u32 max_qp; u32 max_cq; + u32 max_srq; + u32 next_srq; u32 max_ah; u32 next_ah; u32 max_mcg; @@ -286,6 +288,7 @@ struct irdma_pci_f { u32 mr_stagmask; u32 used_pds; u32 
used_cqs; + u32 used_srqs; u32 used_mrs; u32 used_qps; u32 arp_table_size; @@ -297,6 +300,7 @@ struct irdma_pci_f { unsigned long *allocated_ws_nodes; unsigned long *allocated_qps; unsigned long *allocated_cqs; + unsigned long *allocated_srqs; unsigned long *allocated_mrs; unsigned long *allocated_pds; unsigned long *allocated_mcgs; @@ -420,6 +424,11 @@ static inline struct irdma_pci_f *dev_to_rf(struct irdma_sc_dev *dev) return container_of(dev, struct irdma_pci_f, sc_dev); } +static inline struct irdma_srq *to_iwsrq(struct ib_srq *ibsrq) +{ + return container_of(ibsrq, struct irdma_srq, ibsrq); +} + /** * irdma_alloc_resource - allocate a resource * @iwdev: device pointer @@ -515,7 +524,8 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr, void irdma_cq_add_ref(struct ib_cq *ibcq); void irdma_cq_rem_ref(struct ib_cq *ibcq); void irdma_cq_wq_destroy(struct irdma_pci_f *rf, struct irdma_sc_cq *cq); - +void irdma_srq_event(struct irdma_sc_srq *srq); +void irdma_srq_wq_destroy(struct irdma_pci_f *rf, struct irdma_sc_srq *srq); void irdma_cleanup_pending_cqp_op(struct irdma_pci_f *rf); int irdma_hw_modify_qp(struct irdma_device *iwdev, struct irdma_qp *iwqp, struct irdma_modify_qp_info *info, bool wait); diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 243210466d88..adfc528a268e 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -250,6 +250,7 @@ enum irdma_syn_rst_handling { enum irdma_queue_type { IRDMA_QUEUE_TYPE_SQ_RQ = 0, IRDMA_QUEUE_TYPE_CQP, + IRDMA_QUEUE_TYPE_SRQ, }; struct irdma_sc_dev; @@ -739,6 +740,51 @@ struct irdma_modify_cq_info { bool cq_resize:1; }; +struct irdma_srq_init_info { + struct irdma_sc_pd *pd; + struct irdma_sc_vsi *vsi; + u64 srq_pa; + u64 shadow_area_pa; + u32 first_pm_pbl_idx; + u32 pasid; + u32 srq_size; + u16 srq_limit; + u8 pasid_valid; + u8 wqe_size; + u8 leaf_pbl_size; + u8 virtual_map; + u8 tph_en; + u8 arm_limit_event; + u8 tph_value; + u8 pbl_chunk_size; + struct irdma_srq_uk_init_info srq_uk_init_info; +}; + +struct irdma_sc_srq { + struct irdma_sc_dev *dev; + struct irdma_sc_vsi *vsi; + struct irdma_sc_pd *pd; + struct irdma_srq_uk srq_uk; + void *back_srq; + u64 srq_pa; + u64 shadow_area_pa; + u32 first_pm_pbl_idx; + u32 pasid; + u32 hw_srq_size; + u16 srq_limit; + u8 pasid_valid; + u8 leaf_pbl_size; + u8 virtual_map; + u8 tph_en; + u8 arm_limit_event; + u8 tph_val; +}; + +struct irdma_modify_srq_info { + u16 srq_limit; + u8 arm_limit_event; +}; + struct irdma_create_qp_info { bool ord_valid:1; bool tcp_ctx_valid:1; @@ -1045,6 +1091,7 @@ struct irdma_qp_host_ctx_info { }; u32 send_cq_num; u32 rcv_cq_num; + u32 srq_id; u32 rem_endpoint_idx; u16 stats_idx; bool srq_valid:1; @@ -1344,6 +1391,8 @@ void irdma_sc_cq_resize(struct irdma_sc_cq *cq, struct irdma_modify_cq_info *inf int irdma_sc_static_hmc_pages_allocated(struct irdma_sc_cqp *cqp, u64 scratch, u8 hmc_fn_id, bool post_sq, bool poll_registers); +int irdma_sc_srq_init(struct irdma_sc_srq *srq, + struct irdma_srq_init_info *info); void sc_vsi_update_stats(struct irdma_sc_vsi *vsi); struct cqp_info { @@ -1587,6 +1636,23 @@ struct cqp_info { struct irdma_dma_mem query_buff_mem; u64 scratch; } query_rdma; + + struct { + struct irdma_sc_srq *srq; + u64 scratch; + } srq_create; + + struct { + struct irdma_sc_srq *srq; + struct irdma_modify_srq_info info; + u64 scratch; + } srq_modify; + + struct { + struct irdma_sc_srq *srq; + u64 scratch; + } srq_destroy; + } u; }; diff --git 
a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c index 38c54e59cc2e..26f3475f6453 100644 --- a/drivers/infiniband/hw/irdma/uk.c +++ b/drivers/infiniband/hw/irdma/uk.c @@ -198,6 +198,26 @@ __le64 *irdma_qp_get_next_send_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx, return wqe; } +__le64 *irdma_srq_get_next_recv_wqe(struct irdma_srq_uk *srq, u32 *wqe_idx) +{ + int ret_code; + __le64 *wqe; + + if (IRDMA_RING_FULL_ERR(srq->srq_ring)) + return NULL; + + IRDMA_ATOMIC_RING_MOVE_HEAD(srq->srq_ring, *wqe_idx, ret_code); + if (ret_code) + return NULL; + + if (!*wqe_idx) + srq->srwqe_polarity = !srq->srwqe_polarity; + /* rq_wqe_size_multiplier is no of 32 byte quanta in one rq wqe */ + wqe = srq->srq_base[*wqe_idx * (srq->wqe_size_multiplier)].elem; + + return wqe; +} + /** * irdma_qp_get_next_recv_wqe - get next qp's rcv wqe * @qp: hw qp ptr @@ -317,6 +337,58 @@ int irdma_uk_rdma_write(struct irdma_qp_uk *qp, struct irdma_post_sq_info *info, return 0; } +/** + * irdma_uk_srq_post_receive - post a receive wqe to a shared rq + * @srq: shared rq ptr + * @info: post rq information + */ +int irdma_uk_srq_post_receive(struct irdma_srq_uk *srq, + struct irdma_post_rq_info *info) +{ + u32 wqe_idx, i, byte_off; + u32 addl_frag_cnt; + __le64 *wqe; + u64 hdr; + + if (srq->max_srq_frag_cnt < info->num_sges) + return -EINVAL; + + wqe = irdma_srq_get_next_recv_wqe(srq, &wqe_idx); + if (!wqe) + return -ENOMEM; + + addl_frag_cnt = info->num_sges > 1 ? info->num_sges - 1 : 0; + srq->wqe_ops.iw_set_fragment(wqe, 0, info->sg_list, + srq->srwqe_polarity); + + for (i = 1, byte_off = 32; i < info->num_sges; i++) { + srq->wqe_ops.iw_set_fragment(wqe, byte_off, &info->sg_list[i], + srq->srwqe_polarity); + byte_off += 16; + } + + /* if not an odd number set valid bit in next fragment */ + if (srq->uk_attrs->hw_rev >= IRDMA_GEN_2 && !(info->num_sges & 0x01) && + info->num_sges) { + srq->wqe_ops.iw_set_fragment(wqe, byte_off, NULL, + srq->srwqe_polarity); + if (srq->uk_attrs->hw_rev == IRDMA_GEN_2) + ++addl_frag_cnt; + } + + set_64bit_val(wqe, 16, (u64)info->wr_id); + hdr = FIELD_PREP(IRDMAQPSQ_ADDFRAGCNT, addl_frag_cnt) | + FIELD_PREP(IRDMAQPSQ_VALID, srq->srwqe_polarity); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + set_64bit_val(srq->shadow_area, 0, (wqe_idx + 1) % srq->srq_ring.size); + + return 0; +} + /** * irdma_uk_rdma_read - rdma read command * @qp: hw qp ptr @@ -973,6 +1045,8 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, u64 comp_ctx, qword0, qword2, qword3; __le64 *cqe; struct irdma_qp_uk *qp; + struct irdma_srq_uk *srq; + u8 is_srq; struct irdma_ring *pring = NULL; u32 wqe_idx; int ret_code; @@ -1046,8 +1120,14 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, } info->q_type = (u8)FIELD_GET(IRDMA_CQ_SQ, qword3); + is_srq = (u8)FIELD_GET(IRDMA_CQ_SRQ, qword3); info->error = (bool)FIELD_GET(IRDMA_CQ_ERROR, qword3); info->ipv4 = (bool)FIELD_GET(IRDMACQ_IPV4, qword3); + get_64bit_val(cqe, 8, &comp_ctx); + if (is_srq) + get_64bit_val(cqe, 40, (u64 *)&qp); + else + qp = (struct irdma_qp_uk *)(unsigned long)comp_ctx; if (info->error) { info->major_err = FIELD_GET(IRDMA_CQ_MAJERR, qword3); info->minor_err = FIELD_GET(IRDMA_CQ_MINERR, qword3); @@ -1085,7 +1165,22 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, info->qp_handle = (irdma_qp_handle)(unsigned long)qp; info->op_type = (u8)FIELD_GET(IRDMACQ_OP, qword3); - if (info->q_type == IRDMA_CQE_QTYPE_RQ) { + if (info->q_type == IRDMA_CQE_QTYPE_RQ && is_srq) { + srq = qp->srq_uk; 
+ + get_64bit_val(cqe, 8, &info->wr_id); + info->bytes_xfered = (u32)FIELD_GET(IRDMACQ_PAYLDLEN, qword0); + + if (qword3 & IRDMACQ_STAG) { + info->stag_invalid_set = true; + info->inv_stag = (u32)FIELD_GET(IRDMACQ_INVSTAG, + qword2); + } else { + info->stag_invalid_set = false; + } + IRDMA_RING_MOVE_TAIL(srq->srq_ring); + pring = &srq->srq_ring; + } else if (info->q_type == IRDMA_CQE_QTYPE_RQ && !is_srq) { u32 array_idx; array_idx = wqe_idx / qp->rq_wqe_size_multiplier; @@ -1210,10 +1305,10 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, } /** - * irdma_qp_round_up - return round up qp wq depth + * irdma_round_up_wq - return round up qp wq depth * @wqdepth: wq depth in quanta to round up */ -static int irdma_qp_round_up(u32 wqdepth) +static int irdma_round_up_wq(u32 wqdepth) { int scount = 1; @@ -1268,7 +1363,7 @@ int irdma_get_sqdepth(struct irdma_uk_attrs *uk_attrs, u32 sq_size, u8 shift, { u32 min_size = (u32)uk_attrs->min_hw_wq_size << shift; - *sqdepth = irdma_qp_round_up((sq_size << shift) + IRDMA_SQ_RSVD); + *sqdepth = irdma_round_up_wq((sq_size << shift) + IRDMA_SQ_RSVD); if (*sqdepth < min_size) *sqdepth = min_size; @@ -1290,7 +1385,7 @@ int irdma_get_rqdepth(struct irdma_uk_attrs *uk_attrs, u32 rq_size, u8 shift, { u32 min_size = (u32)uk_attrs->min_hw_wq_size << shift; - *rqdepth = irdma_qp_round_up((rq_size << shift) + IRDMA_RQ_RSVD); + *rqdepth = irdma_round_up_wq((rq_size << shift) + IRDMA_RQ_RSVD); if (*rqdepth < min_size) *rqdepth = min_size; @@ -1300,6 +1395,26 @@ int irdma_get_rqdepth(struct irdma_uk_attrs *uk_attrs, u32 rq_size, u8 shift, return 0; } +/* + * irdma_get_srqdepth - get SRQ depth (quanta) + * @uk_attrs: qp HW attributes + * @srq_size: SRQ size + * @shift: shift which determines size of WQE + * @srqdepth: depth of SRQ + */ +int irdma_get_srqdepth(struct irdma_uk_attrs *uk_attrs, u32 srq_size, u8 shift, + u32 *srqdepth) +{ + *srqdepth = irdma_round_up_wq((srq_size << shift) + IRDMA_RQ_RSVD); + + if (*srqdepth < ((u32)uk_attrs->min_hw_wq_size << shift)) + *srqdepth = uk_attrs->min_hw_wq_size << shift; + else if (*srqdepth > uk_attrs->max_hw_srq_quanta) + return -EINVAL; + + return 0; +} + static const struct irdma_wqe_uk_ops iw_wqe_uk_ops = { .iw_copy_inline_data = irdma_copy_inline_data, .iw_inline_data_size_to_quanta = irdma_inline_data_size_to_quanta, @@ -1335,6 +1450,42 @@ static void irdma_setup_connection_wqes(struct irdma_qp_uk *qp, IRDMA_RING_MOVE_HEAD_BY_COUNT_NOCHECK(qp->initial_ring, move_cnt); } +/** + * irdma_uk_srq_init - initialize shared qp + * @srq: hw srq (user and kernel) + * @info: srq initialization info + * + * initializes the vars used in both user and kernel mode. + * size of the wqe depends on numbers of max. fragements + * allowed. Then size of wqe * the number of wqes should be the + * amount of memory allocated for srq. 
+ */ +int irdma_uk_srq_init(struct irdma_srq_uk *srq, + struct irdma_srq_uk_init_info *info) +{ + u8 rqshift; + + srq->uk_attrs = info->uk_attrs; + if (info->max_srq_frag_cnt > srq->uk_attrs->max_hw_wq_frags) + return -EINVAL; + + irdma_get_wqe_shift(srq->uk_attrs, info->max_srq_frag_cnt, 0, &rqshift); + srq->srq_caps = info->srq_caps; + srq->srq_base = info->srq; + srq->shadow_area = info->shadow_area; + srq->srq_id = info->srq_id; + srq->srwqe_polarity = 0; + srq->srq_size = info->srq_size; + srq->wqe_size = rqshift; + srq->max_srq_frag_cnt = min(srq->uk_attrs->max_hw_wq_frags, + ((u32)2 << rqshift) - 1); + IRDMA_RING_INIT(srq->srq_ring, srq->srq_size); + srq->wqe_size_multiplier = 1 << rqshift; + srq->wqe_ops = iw_wqe_uk_ops; + + return 0; +} + /** * irdma_uk_calc_shift_wq - calculate WQE shift for both SQ and RQ * @ukinfo: qp initialization info @@ -1461,6 +1612,7 @@ int irdma_uk_qp_init(struct irdma_qp_uk *qp, struct irdma_qp_uk_init_info *info) qp->wqe_ops = iw_wqe_uk_ops_gen_1; else qp->wqe_ops = iw_wqe_uk_ops; + qp->srq_uk = info->srq_uk; return ret_code; } diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h index 8fd7eebef11d..af15529643e9 100644 --- a/drivers/infiniband/hw/irdma/user.h +++ b/drivers/infiniband/hw/irdma/user.h @@ -59,7 +59,7 @@ enum irdma_device_caps_const { IRDMA_COMMIT_FPM_BUF_SIZE = 192, IRDMA_GATHER_STATS_BUF_SIZE = 1024, IRDMA_MIN_IW_QP_ID = 0, - IRDMA_MAX_IW_QP_ID = 262143, + IRDMA_MIN_IW_SRQ_ID = 0, IRDMA_MIN_CEQID = 0, IRDMA_MAX_CEQID = 1023, IRDMA_CEQ_MAX_COUNT = IRDMA_MAX_CEQID + 1, @@ -147,6 +147,8 @@ enum irdma_qp_caps { IRDMA_PUSH_MODE = 8, }; +struct irdma_srq_uk; +struct irdma_srq_uk_init_info; struct irdma_qp_uk; struct irdma_cq_uk; struct irdma_qp_uk_init_info; @@ -300,6 +302,39 @@ int irdma_uk_calc_depth_shift_sq(struct irdma_qp_uk_init_info *ukinfo, u32 *sq_depth, u8 *sq_shift); int irdma_uk_calc_depth_shift_rq(struct irdma_qp_uk_init_info *ukinfo, u32 *rq_depth, u8 *rq_shift); +int irdma_uk_srq_init(struct irdma_srq_uk *srq, + struct irdma_srq_uk_init_info *info); +int irdma_uk_srq_post_receive(struct irdma_srq_uk *srq, + struct irdma_post_rq_info *info); + +struct irdma_srq_uk { + u32 srq_caps; + struct irdma_qp_quanta *srq_base; + struct irdma_uk_attrs *uk_attrs; + __le64 *shadow_area; + struct irdma_ring srq_ring; + struct irdma_ring initial_ring; + u32 srq_id; + u32 srq_size; + u32 max_srq_frag_cnt; + struct irdma_wqe_uk_ops wqe_ops; + u8 srwqe_polarity; + u8 wqe_size; + u8 wqe_size_multiplier; + u8 deferred_flag; +}; + +struct irdma_srq_uk_init_info { + struct irdma_qp_quanta *srq; + struct irdma_uk_attrs *uk_attrs; + __le64 *shadow_area; + u64 *srq_wrid_array; + u32 srq_id; + u32 srq_caps; + u32 srq_size; + u32 max_srq_frag_cnt; +}; + struct irdma_sq_uk_wr_trk_info { u64 wrid; u32 wr_len; @@ -344,6 +379,7 @@ struct irdma_qp_uk { bool destroy_pending:1; /* Indicates the QP is being destroyed */ void *back_qp; u8 dbg_rq_flushed; + struct irdma_srq_uk *srq_uk; u8 sq_flush_seen; u8 rq_flush_seen; }; @@ -383,6 +419,7 @@ struct irdma_qp_uk_init_info { u8 rq_shift; int abi_ver; bool legacy_mode; + struct irdma_srq_uk *srq_uk; }; struct irdma_cq_uk_init_info { @@ -398,6 +435,7 @@ struct irdma_cq_uk_init_info { __le64 *irdma_qp_get_next_send_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx, u16 quanta, u32 total_size, struct irdma_post_sq_info *info); +__le64 *irdma_srq_get_next_recv_wqe(struct irdma_srq_uk *srq, u32 *wqe_idx); __le64 *irdma_qp_get_next_recv_wqe(struct irdma_qp_uk *qp, u32 *wqe_idx); void 
irdma_uk_clean_cq(void *q, struct irdma_cq_uk *cq); int irdma_nop(struct irdma_qp_uk *qp, u64 wr_id, bool signaled, bool post_sq); @@ -409,5 +447,7 @@ int irdma_get_sqdepth(struct irdma_uk_attrs *uk_attrs, u32 sq_size, u8 shift, u32 *wqdepth); int irdma_get_rqdepth(struct irdma_uk_attrs *uk_attrs, u32 rq_size, u8 shift, u32 *wqdepth); +int irdma_get_srqdepth(struct irdma_uk_attrs *uk_attrs, u32 srq_size, u8 shift, + u32 *srqdepth); void irdma_clr_wqes(struct irdma_qp_uk *qp, u32 qp_wqe_idx); #endif /* IRDMA_USER_H */ diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c index 894ced3a8989..2fdcb8872e08 100644 --- a/drivers/infiniband/hw/irdma/utils.c +++ b/drivers/infiniband/hw/irdma/utils.c @@ -700,6 +700,9 @@ static const char *const irdma_cqp_cmd_names[IRDMA_MAX_CQP_OPS] = { [IRDMA_OP_ADD_LOCAL_MAC_ENTRY] = "Add Local MAC Entry Cmd", [IRDMA_OP_DELETE_LOCAL_MAC_ENTRY] = "Delete Local MAC Entry Cmd", [IRDMA_OP_CQ_MODIFY] = "CQ Modify Cmd", + [IRDMA_OP_SRQ_CREATE] = "Create SRQ Cmd", + [IRDMA_OP_SRQ_MODIFY] = "Modify SRQ Cmd", + [IRDMA_OP_SRQ_DESTROY] = "Destroy SRQ Cmd", }; static const struct irdma_cqp_err_info irdma_noncrit_err_list[] = { @@ -1238,6 +1241,30 @@ void irdma_free_qp_rsrc(struct irdma_qp *iwqp) kfree(iwqp->kqp.rq_wrid_mem); } +/** + * irdma_srq_wq_destroy - send srq destroy cqp + * @rf: RDMA PCI function + * @srq: hardware control srq + */ +void irdma_srq_wq_destroy(struct irdma_pci_f *rf, struct irdma_sc_srq *srq) +{ + struct irdma_cqp_request *cqp_request; + struct cqp_cmds_info *cqp_info; + + cqp_request = irdma_alloc_and_get_cqp_request(&rf->cqp, true); + if (!cqp_request) + return; + + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = IRDMA_OP_SRQ_DESTROY; + cqp_info->post_sq = 1; + cqp_info->in.u.srq_destroy.srq = srq; + cqp_info->in.u.srq_destroy.scratch = (uintptr_t)cqp_request; + + irdma_handle_cqp_op(rf, cqp_request); + irdma_put_cqp_request(&rf->cqp, cqp_request); +} + /** * irdma_cq_wq_destroy - send cq destroy cqp * @rf: RDMA PCI function diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 4c7fbdce4433..9d08937848bc 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -56,9 +56,9 @@ static int irdma_query_device(struct ib_device *ibdev, props->max_mcast_qp_attach = IRDMA_MAX_MGS_PER_CTX; props->max_total_mcast_qp_attach = rf->max_qp * IRDMA_MAX_MGS_PER_CTX; props->max_fast_reg_page_list_len = IRDMA_MAX_PAGES_PER_FMR; -#define HCA_CLOCK_TIMESTAMP_MASK 0x1ffff - if (hw_attrs->uk_attrs.hw_rev >= IRDMA_GEN_2) - props->timestamp_mask = HCA_CLOCK_TIMESTAMP_MASK; + props->max_srq = rf->max_srq - rf->used_srqs; + props->max_srq_wr = IRDMA_MAX_SRQ_WRS; + props->max_srq_sge = hw_attrs->uk_attrs.max_hw_wq_frags; return 0; } @@ -336,6 +336,8 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx, uresp.comp_mask |= IRDMA_ALLOC_UCTX_USE_RAW_ATTR; uresp.min_hw_wq_size = uk_attrs->min_hw_wq_size; uresp.comp_mask |= IRDMA_ALLOC_UCTX_MIN_HW_WQ_SIZE; + uresp.max_hw_srq_quanta = uk_attrs->max_hw_srq_quanta; + uresp.comp_mask |= IRDMA_ALLOC_UCTX_MAX_HW_SRQ_QUANTA; if (ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), udata->outlen))) { rdma_user_mmap_entry_remove(ucontext->db_mmap_entry); @@ -347,6 +349,8 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx, spin_lock_init(&ucontext->cq_reg_mem_list_lock); INIT_LIST_HEAD(&ucontext->qp_reg_mem_list); spin_lock_init(&ucontext->qp_reg_mem_list_lock); + INIT_LIST_HEAD(&ucontext->srq_reg_mem_list); + 
spin_lock_init(&ucontext->srq_reg_mem_list_lock); return 0; @@ -571,7 +575,11 @@ static void irdma_setup_virt_qp(struct irdma_device *iwdev, if (iwpbl->pbl_allocated) { init_info->virtual_map = true; init_info->sq_pa = qpmr->sq_pbl.idx; - init_info->rq_pa = qpmr->rq_pbl.idx; + /* Need to use contiguous buffer for RQ of QP + * in case it is associated with SRQ. + */ + init_info->rq_pa = init_info->qp_uk_init_info.srq_uk ? + qpmr->rq_pa : qpmr->rq_pbl.idx; } else { init_info->sq_pa = qpmr->sq_pbl.addr; init_info->rq_pa = qpmr->rq_pbl.addr; @@ -940,6 +948,18 @@ static int irdma_create_qp(struct ib_qp *ibqp, struct irdma_uk_attrs *uk_attrs = &dev->hw_attrs.uk_attrs; struct irdma_qp_init_info init_info = {}; struct irdma_qp_host_ctx_info *ctx_info; + struct irdma_srq *iwsrq; + bool srq_valid = false; + u32 srq_id = 0; + + if (init_attr->srq) { + iwsrq = to_iwsrq(init_attr->srq); + srq_valid = true; + srq_id = iwsrq->srq_num; + init_attr->cap.max_recv_sge = uk_attrs->max_hw_wq_frags; + init_attr->cap.max_recv_wr = 4; + init_info.qp_uk_init_info.srq_uk = &iwsrq->sc_srq.srq_uk; + } err_code = irdma_validate_qp_attrs(init_attr, iwdev); if (err_code) @@ -1046,6 +1066,8 @@ static int irdma_create_qp(struct ib_qp *ibqp, } ctx_info = &iwqp->ctx_info; + ctx_info->srq_valid = srq_valid; + ctx_info->srq_id = srq_id; ctx_info->send_cq_num = iwqp->iwscq->sc_cq.cq_uk.cq_id; ctx_info->rcv_cq_num = iwqp->iwrcq->sc_cq.cq_uk.cq_id; @@ -1171,6 +1193,7 @@ static int irdma_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, init_attr->qp_context = iwqp->ibqp.qp_context; init_attr->send_cq = iwqp->ibqp.send_cq; init_attr->recv_cq = iwqp->ibqp.recv_cq; + init_attr->srq = iwqp->ibqp.srq; init_attr->cap = attr->cap; return 0; @@ -1833,6 +1856,24 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask, return err; } +/** + * irdma_srq_free_rsrc - free up resources for srq + * @rf: RDMA PCI function + * @iwsrq: srq ptr + */ +static void irdma_srq_free_rsrc(struct irdma_pci_f *rf, struct irdma_srq *iwsrq) +{ + struct irdma_sc_srq *srq = &iwsrq->sc_srq; + + if (!iwsrq->user_mode) { + dma_free_coherent(rf->sc_dev.hw->device, iwsrq->kmem.size, + iwsrq->kmem.va, iwsrq->kmem.pa); + iwsrq->kmem.va = NULL; + } + + irdma_free_rsrc(rf, rf->allocated_srqs, srq->srq_uk.srq_id); +} + /** * irdma_cq_free_rsrc - free up resources for cq * @rf: RDMA PCI function @@ -1896,6 +1937,22 @@ static int irdma_process_resize_list(struct irdma_cq *iwcq, return cnt; } +/** + * irdma_destroy_srq - destroy srq + * @ibsrq: srq pointer + * @udata: user data + */ +static int irdma_destroy_srq(struct ib_srq *ibsrq, struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(ibsrq->device); + struct irdma_srq *iwsrq = to_iwsrq(ibsrq); + struct irdma_sc_srq *srq = &iwsrq->sc_srq; + + irdma_srq_wq_destroy(iwdev->rf, srq); + irdma_srq_free_rsrc(iwdev->rf, iwsrq); + return 0; +} + /** * irdma_destroy_cq - destroy cq * @ib_cq: cq pointer @@ -2079,6 +2136,293 @@ static int irdma_resize_cq(struct ib_cq *ibcq, int entries, return ret; } +/** + * irdma_srq_event - event notification for srq limit + * @srq: shared srq struct + */ +void irdma_srq_event(struct irdma_sc_srq *srq) +{ + struct irdma_srq *iwsrq = container_of(srq, struct irdma_srq, sc_srq); + struct ib_srq *ibsrq = &iwsrq->ibsrq; + struct ib_event event; + + srq->srq_limit = 0; + + if (!ibsrq->event_handler) + return; + + event.device = ibsrq->device; + event.element.port_num = 1; + event.element.srq = ibsrq; + event.event = IB_EVENT_SRQ_LIMIT_REACHED; + 
ibsrq->event_handler(&event, ibsrq->srq_context); +} + +/** + * irdma_modify_srq - modify srq request + * @ibsrq: srq's pointer for modify + * @attr: access attributes + * @attr_mask: state mask + * @udata: user data + */ +static int irdma_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr, + enum ib_srq_attr_mask attr_mask, + struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(ibsrq->device); + struct irdma_srq *iwsrq = to_iwsrq(ibsrq); + struct irdma_cqp_request *cqp_request; + struct irdma_pci_f *rf = iwdev->rf; + struct irdma_modify_srq_info *info; + struct cqp_cmds_info *cqp_info; + int status; + + if (attr_mask & IB_SRQ_MAX_WR) + return -EINVAL; + + if (!(attr_mask & IB_SRQ_LIMIT)) + return 0; + + if (attr->srq_limit > iwsrq->sc_srq.srq_uk.srq_size) + return -EINVAL; + + /* Execute this cqp op synchronously, so we can update srq_limit + * upon successful completion. + */ + cqp_request = irdma_alloc_and_get_cqp_request(&rf->cqp, true); + if (!cqp_request) + return -ENOMEM; + + cqp_info = &cqp_request->info; + info = &cqp_info->in.u.srq_modify.info; + info->srq_limit = attr->srq_limit; + if (info->srq_limit > 0xFFF) + info->srq_limit = 0xFFF; + info->arm_limit_event = 1; + + cqp_info->cqp_cmd = IRDMA_OP_SRQ_MODIFY; + cqp_info->post_sq = 1; + cqp_info->in.u.srq_modify.srq = &iwsrq->sc_srq; + cqp_info->in.u.srq_modify.scratch = (uintptr_t)cqp_request; + status = irdma_handle_cqp_op(rf, cqp_request); + irdma_put_cqp_request(&rf->cqp, cqp_request); + if (status) + return status; + + iwsrq->sc_srq.srq_limit = info->srq_limit; + + return 0; +} + +static int irdma_setup_umode_srq(struct irdma_device *iwdev, + struct irdma_srq *iwsrq, + struct irdma_srq_init_info *info, + struct ib_udata *udata) +{ +#define IRDMA_CREATE_SRQ_MIN_REQ_LEN \ + offsetofend(struct irdma_create_srq_req, user_shadow_area) + struct irdma_create_srq_req req = {}; + struct irdma_ucontext *ucontext; + struct irdma_srq_mr *srqmr; + struct irdma_pbl *iwpbl; + unsigned long flags; + + iwsrq->user_mode = true; + ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext, + ibucontext); + + if (udata->inlen < IRDMA_CREATE_SRQ_MIN_REQ_LEN) + return -EINVAL; + + if (ib_copy_from_udata(&req, udata, + min(sizeof(req), udata->inlen))) + return -EFAULT; + + spin_lock_irqsave(&ucontext->srq_reg_mem_list_lock, flags); + iwpbl = irdma_get_pbl((unsigned long)req.user_srq_buf, + &ucontext->srq_reg_mem_list); + spin_unlock_irqrestore(&ucontext->srq_reg_mem_list_lock, flags); + if (!iwpbl) + return -EPROTO; + + iwsrq->iwpbl = iwpbl; + srqmr = &iwpbl->srq_mr; + + if (iwpbl->pbl_allocated) { + info->virtual_map = true; + info->pbl_chunk_size = 1; + info->first_pm_pbl_idx = srqmr->srq_pbl.idx; + info->leaf_pbl_size = 1; + } else { + info->srq_pa = srqmr->srq_pbl.addr; + } + info->shadow_area_pa = srqmr->shadow; + + return 0; +} + +static int irdma_setup_kmode_srq(struct irdma_device *iwdev, + struct irdma_srq *iwsrq, + struct irdma_srq_init_info *info, u32 depth, + u8 shift) +{ + struct irdma_srq_uk_init_info *ukinfo = &info->srq_uk_init_info; + struct irdma_dma_mem *mem = &iwsrq->kmem; + u32 size, ring_size; + + ring_size = depth * IRDMA_QP_WQE_MIN_SIZE; + size = ring_size + (IRDMA_SHADOW_AREA_SIZE << 3); + + mem->size = ALIGN(size, 256); + mem->va = dma_alloc_coherent(iwdev->rf->hw.device, mem->size, + &mem->pa, GFP_KERNEL); + if (!mem->va) + return -ENOMEM; + + ukinfo->srq = mem->va; + ukinfo->srq_size = depth >> shift; + ukinfo->shadow_area = mem->va + ring_size; + + info->shadow_area_pa = info->srq_pa + 
ring_size; + info->srq_pa = mem->pa; + + return 0; +} + +/** + * irdma_create_srq - create srq + * @ibsrq: ib's srq pointer + * @initattrs: attributes for srq + * @udata: user data for create srq + */ +static int irdma_create_srq(struct ib_srq *ibsrq, + struct ib_srq_init_attr *initattrs, + struct ib_udata *udata) +{ + struct irdma_device *iwdev = to_iwdev(ibsrq->device); + struct ib_srq_attr *attr = &initattrs->attr; + struct irdma_pd *iwpd = to_iwpd(ibsrq->pd); + struct irdma_srq *iwsrq = to_iwsrq(ibsrq); + struct irdma_srq_uk_init_info *ukinfo; + struct irdma_cqp_request *cqp_request; + struct irdma_srq_init_info info = {}; + struct irdma_pci_f *rf = iwdev->rf; + struct irdma_uk_attrs *uk_attrs; + struct cqp_cmds_info *cqp_info; + int err_code = 0; + u32 depth; + u8 shift; + + uk_attrs = &rf->sc_dev.hw_attrs.uk_attrs; + ukinfo = &info.srq_uk_init_info; + + if (initattrs->srq_type != IB_SRQT_BASIC) + return -EOPNOTSUPP; + + if (!(uk_attrs->feature_flags & IRDMA_FEATURE_SRQ) || + attr->max_sge > uk_attrs->max_hw_wq_frags) + return -EINVAL; + + refcount_set(&iwsrq->refcnt, 1); + spin_lock_init(&iwsrq->lock); + err_code = irdma_alloc_rsrc(rf, rf->allocated_srqs, rf->max_srq, + &iwsrq->srq_num, &rf->next_srq); + if (err_code) + return err_code; + + ukinfo->max_srq_frag_cnt = attr->max_sge; + ukinfo->uk_attrs = uk_attrs; + ukinfo->srq_id = iwsrq->srq_num; + + irdma_get_wqe_shift(ukinfo->uk_attrs, ukinfo->max_srq_frag_cnt, 0, + &shift); + + err_code = irdma_get_srqdepth(ukinfo->uk_attrs, attr->max_wr, + shift, &depth); + if (err_code) + return err_code; + + /* Actual SRQ size in WRs for ring and HW */ + ukinfo->srq_size = depth >> shift; + + /* Max postable WRs to SRQ */ + iwsrq->max_wr = (depth - IRDMA_RQ_RSVD) >> shift; + attr->max_wr = iwsrq->max_wr; + + if (udata) + err_code = irdma_setup_umode_srq(iwdev, iwsrq, &info, udata); + else + err_code = irdma_setup_kmode_srq(iwdev, iwsrq, &info, depth, + shift); + + if (err_code) + goto free_rsrc; + + info.vsi = &iwdev->vsi; + info.pd = &iwpd->sc_pd; + + err_code = irdma_sc_srq_init(&iwsrq->sc_srq, &info); + if (err_code) + goto free_dmem; + + cqp_request = irdma_alloc_and_get_cqp_request(&rf->cqp, true); + if (!cqp_request) { + err_code = -ENOMEM; + goto free_dmem; + } + + cqp_info = &cqp_request->info; + cqp_info->cqp_cmd = IRDMA_OP_SRQ_CREATE; + cqp_info->post_sq = 1; + cqp_info->in.u.srq_create.srq = &iwsrq->sc_srq; + cqp_info->in.u.srq_create.scratch = (uintptr_t)cqp_request; + err_code = irdma_handle_cqp_op(rf, cqp_request); + irdma_put_cqp_request(&rf->cqp, cqp_request); + if (err_code) + goto free_dmem; + + if (udata) { + struct irdma_create_srq_resp resp = {}; + + resp.srq_id = iwsrq->srq_num; + resp.srq_size = ukinfo->srq_size; + if (ib_copy_to_udata(udata, &resp, + min(sizeof(resp), udata->outlen))) { + err_code = -EPROTO; + goto srq_destroy; + } + } + + return 0; + +srq_destroy: + irdma_srq_wq_destroy(rf, &iwsrq->sc_srq); + +free_dmem: + if (!iwsrq->user_mode) + dma_free_coherent(rf->hw.device, iwsrq->kmem.size, + iwsrq->kmem.va, iwsrq->kmem.pa); +free_rsrc: + irdma_free_rsrc(rf, rf->allocated_srqs, iwsrq->srq_num); + return err_code; +} + +/** + * irdma_query_srq - get SRQ attributes + * @ibsrq: the SRQ to query + * @attr: the attributes of the SRQ + */ +static int irdma_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr) +{ + struct irdma_srq *iwsrq = to_iwsrq(ibsrq); + + attr->max_wr = iwsrq->max_wr; + attr->max_sge = iwsrq->sc_srq.srq_uk.max_srq_frag_cnt; + attr->srq_limit = iwsrq->sc_srq.srq_limit; + + return 0; +} + static 
inline int cq_validate_flags(u32 flags, u8 hw_rev) { /* GEN1 does not support CQ create flags */ @@ -2527,6 +2871,7 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev, struct irdma_mr *iwmr = iwpbl->iwmr; struct irdma_qp_mr *qpmr = &iwpbl->qp_mr; struct irdma_cq_mr *cqmr = &iwpbl->cq_mr; + struct irdma_srq_mr *srqmr = &iwpbl->srq_mr; struct irdma_hmc_pble *hmc_p; u64 *arr = iwmr->pgaddrmem; u32 pg_size, total; @@ -2546,7 +2891,10 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev, total = req->sq_pages + req->rq_pages; hmc_p = &qpmr->sq_pbl; qpmr->shadow = (dma_addr_t)arr[total]; - + /* Need to use physical address for RQ of QP + * in case it is associated with SRQ. + */ + qpmr->rq_pa = (dma_addr_t)arr[req->sq_pages]; if (lvl) { ret = irdma_check_mem_contiguous(arr, req->sq_pages, pg_size); @@ -2566,6 +2914,18 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev, hmc_p->addr = arr[req->sq_pages]; } break; + case IRDMA_MEMREG_TYPE_SRQ: + hmc_p = &srqmr->srq_pbl; + srqmr->shadow = (dma_addr_t)arr[req->rq_pages]; + if (lvl) + ret = irdma_check_mem_contiguous(arr, req->rq_pages, + pg_size); + + if (!ret) + hmc_p->idx = palloc->level1.idx; + else + hmc_p->addr = arr[0]; + break; case IRDMA_MEMREG_TYPE_CQ: hmc_p = &cqmr->cq_pbl; @@ -3036,6 +3396,37 @@ static int irdma_reg_user_mr_type_qp(struct irdma_mem_reg_req req, return 0; } +static int irdma_reg_user_mr_type_srq(struct irdma_mem_reg_req req, + struct ib_udata *udata, + struct irdma_mr *iwmr) +{ + struct irdma_device *iwdev = to_iwdev(iwmr->ibmr.device); + struct irdma_pbl *iwpbl = &iwmr->iwpbl; + struct irdma_ucontext *ucontext; + unsigned long flags; + u32 total; + int err; + u8 lvl; + + total = req.rq_pages + IRDMA_SHADOW_PGCNT; + if (total > iwmr->page_cnt) + return -EINVAL; + + lvl = req.rq_pages > 1 ? 
PBLE_LEVEL_1 : PBLE_LEVEL_0; + err = irdma_handle_q_mem(iwdev, &req, iwpbl, lvl); + if (err) + return err; + + ucontext = rdma_udata_to_drv_context(udata, struct irdma_ucontext, + ibucontext); + spin_lock_irqsave(&ucontext->srq_reg_mem_list_lock, flags); + list_add_tail(&iwpbl->list, &ucontext->srq_reg_mem_list); + iwpbl->on_list = true; + spin_unlock_irqrestore(&ucontext->srq_reg_mem_list_lock, flags); + + return 0; +} + static int irdma_reg_user_mr_type_cq(struct irdma_mem_reg_req req, struct ib_udata *udata, struct irdma_mr *iwmr) @@ -3121,6 +3512,12 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len, if (err) goto error; + break; + case IRDMA_MEMREG_TYPE_SRQ: + err = irdma_reg_user_mr_type_srq(req, udata, iwmr); + if (err) + goto error; + break; case IRDMA_MEMREG_TYPE_CQ: err = irdma_reg_user_mr_type_cq(req, udata, iwmr); @@ -3437,6 +3834,14 @@ static void irdma_del_memlist(struct irdma_mr *iwmr, } spin_unlock_irqrestore(&ucontext->qp_reg_mem_list_lock, flags); break; + case IRDMA_MEMREG_TYPE_SRQ: + spin_lock_irqsave(&ucontext->srq_reg_mem_list_lock, flags); + if (iwpbl->on_list) { + iwpbl->on_list = false; + list_del(&iwpbl->list); + } + spin_unlock_irqrestore(&ucontext->srq_reg_mem_list_lock, flags); + break; default: break; } @@ -3655,6 +4060,47 @@ static int irdma_post_send(struct ib_qp *ibqp, return err; } +/** + * irdma_post_srq_recv - post receive wr for kernel application + * @ibsrq: ib srq pointer + * @ib_wr: work request for receive + * @bad_wr: bad wr caused an error + */ +static int irdma_post_srq_recv(struct ib_srq *ibsrq, + const struct ib_recv_wr *ib_wr, + const struct ib_recv_wr **bad_wr) +{ + struct irdma_srq *iwsrq = to_iwsrq(ibsrq); + struct irdma_srq_uk *uksrq = &iwsrq->sc_srq.srq_uk; + struct irdma_post_rq_info post_recv = {}; + unsigned long flags; + int err = 0; + + spin_lock_irqsave(&iwsrq->lock, flags); + while (ib_wr) { + if (ib_wr->num_sge > uksrq->max_srq_frag_cnt) { + err = -EINVAL; + goto out; + } + post_recv.num_sges = ib_wr->num_sge; + post_recv.wr_id = ib_wr->wr_id; + post_recv.sg_list = ib_wr->sg_list; + err = irdma_uk_srq_post_receive(uksrq, &post_recv); + if (err) + goto out; + + ib_wr = ib_wr->next; + } + +out: + spin_unlock_irqrestore(&iwsrq->lock, flags); + + if (err) + *bad_wr = ib_wr; + + return err; +} + /** * irdma_post_recv - post receive wr for kernel application * @ibqp: ib qp pointer @@ -3674,6 +4120,11 @@ static int irdma_post_recv(struct ib_qp *ibqp, iwqp = to_iwqp(ibqp); ukqp = &iwqp->sc_qp.qp_uk; + if (ukqp->srq_uk) { + *bad_wr = ib_wr; + return -EINVAL; + } + spin_lock_irqsave(&iwqp->lock, flags); while (ib_wr) { post_recv.num_sges = ib_wr->num_sge; @@ -4762,6 +5213,18 @@ static enum rdma_link_layer irdma_get_link_layer(struct ib_device *ibdev, return IB_LINK_LAYER_ETHERNET; } +static const struct ib_device_ops irdma_gen1_dev_ops = { + .dealloc_driver = irdma_ib_dealloc_device, +}; + +static const struct ib_device_ops irdma_gen3_dev_ops = { + .create_srq = irdma_create_srq, + .destroy_srq = irdma_destroy_srq, + .modify_srq = irdma_modify_srq, + .post_srq_recv = irdma_post_srq_recv, + .query_srq = irdma_query_srq, +}; + static const struct ib_device_ops irdma_roce_dev_ops = { .attach_mcast = irdma_attach_mcast, .create_ah = irdma_create_ah, @@ -4832,6 +5295,7 @@ static const struct ib_device_ops irdma_dev_ops = { INIT_RDMA_OBJ_SIZE(ib_cq, irdma_cq, ibcq), INIT_RDMA_OBJ_SIZE(ib_mw, irdma_mr, ibmw), INIT_RDMA_OBJ_SIZE(ib_qp, irdma_qp, ibqp), + INIT_RDMA_OBJ_SIZE(ib_srq, irdma_srq, ibsrq), }; /** @@ -4879,6 
+5343,10 @@ static void irdma_init_rdma_device(struct irdma_device *iwdev) iwdev->ibdev.num_comp_vectors = iwdev->rf->ceqs_count; iwdev->ibdev.dev.parent = &pcidev->dev; ib_set_device_ops(&iwdev->ibdev, &irdma_dev_ops); + if (iwdev->rf->rdma_ver == IRDMA_GEN_1) + ib_set_device_ops(&iwdev->ibdev, &irdma_gen1_dev_ops); + if (iwdev->rf->rdma_ver >= IRDMA_GEN_3) + ib_set_device_ops(&iwdev->ibdev, &irdma_gen3_dev_ops); } /** diff --git a/drivers/infiniband/hw/irdma/verbs.h b/drivers/infiniband/hw/irdma/verbs.h index fcb163c45252..157dfa2d1a05 100644 --- a/drivers/infiniband/hw/irdma/verbs.h +++ b/drivers/infiniband/hw/irdma/verbs.h @@ -8,6 +8,7 @@ #define IRDMA_PKEY_TBL_SZ 1 #define IRDMA_DEFAULT_PKEY 0xFFFF +#define IRDMA_SHADOW_PGCNT 1 struct irdma_ucontext { struct ib_ucontext ibucontext; @@ -17,6 +18,8 @@ struct irdma_ucontext { spinlock_t cq_reg_mem_list_lock; /* protect CQ memory list */ struct list_head qp_reg_mem_list; spinlock_t qp_reg_mem_list_lock; /* protect QP memory list */ + struct list_head srq_reg_mem_list; + spinlock_t srq_reg_mem_list_lock; /* protect SRQ memory list */ int abi_ver; u8 legacy_mode : 1; u8 use_raw_attrs : 1; @@ -65,10 +68,16 @@ struct irdma_cq_mr { bool split; }; +struct irdma_srq_mr { + struct irdma_hmc_pble srq_pbl; + dma_addr_t shadow; +}; + struct irdma_qp_mr { struct irdma_hmc_pble sq_pbl; struct irdma_hmc_pble rq_pbl; dma_addr_t shadow; + dma_addr_t rq_pa; struct page *sq_page; }; @@ -85,6 +94,7 @@ struct irdma_pbl { union { struct irdma_qp_mr qp_mr; struct irdma_cq_mr cq_mr; + struct irdma_srq_mr srq_mr; }; bool pbl_allocated:1; @@ -112,6 +122,21 @@ struct irdma_mr { struct irdma_pbl iwpbl; }; +struct irdma_srq { + struct ib_srq ibsrq; + struct irdma_sc_srq sc_srq; + struct irdma_dma_mem kmem; + u64 *srq_wrid_mem; + refcount_t refcnt; + spinlock_t lock; /* for poll srq */ + struct irdma_pbl *iwpbl; + struct irdma_sge *sg_list; + u16 srq_head; + u32 srq_num; + u32 max_wr; + bool user_mode:1; +}; + struct irdma_cq { struct ib_cq ibcq; struct irdma_sc_cq sc_cq; diff --git a/include/uapi/rdma/irdma-abi.h b/include/uapi/rdma/irdma-abi.h index 4e42054cca33..f7788d33376b 100644 --- a/include/uapi/rdma/irdma-abi.h +++ b/include/uapi/rdma/irdma-abi.h @@ -20,11 +20,13 @@ enum irdma_memreg_type { IRDMA_MEMREG_TYPE_MEM = 0, IRDMA_MEMREG_TYPE_QP = 1, IRDMA_MEMREG_TYPE_CQ = 2, + IRDMA_MEMREG_TYPE_SRQ = 3, }; enum { IRDMA_ALLOC_UCTX_USE_RAW_ATTR = 1 << 0, IRDMA_ALLOC_UCTX_MIN_HW_WQ_SIZE = 1 << 1, + IRDMA_ALLOC_UCTX_MAX_HW_SRQ_QUANTA = 1 << 2, IRDMA_SUPPORT_WQE_FORMAT_V2 = 1 << 3, }; @@ -55,7 +57,8 @@ struct irdma_alloc_ucontext_resp { __u8 rsvd2; __aligned_u64 comp_mask; __u16 min_hw_wq_size; - __u8 rsvd3[6]; + __u32 max_hw_srq_quanta; + __u8 rsvd3[2]; }; struct irdma_alloc_pd_resp { @@ -72,6 +75,16 @@ struct irdma_create_cq_req { __aligned_u64 user_shadow_area; }; +struct irdma_create_srq_req { + __aligned_u64 user_srq_buf; + __aligned_u64 user_shadow_area; +}; + +struct irdma_create_srq_resp { + __u32 srq_id; + __u32 srq_size; +}; + struct irdma_create_qp_req { __aligned_u64 user_wqe_bufs; __aligned_u64 user_compl_ctx; From patchwork Sat Aug 24 03:19:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776209 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4EEAC7DA8F; Sat, 24 Aug 
2024 03:20:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469656; cv=none; b=nEE/Q1LepfF2S44rVjrFyOv1HTiFpZW2TeOyNQ6K6b58twX0B/zANjx0Ynd2orBVZq+tS7rQypFa9P8adxR9SPTj+XgAYLk7Ql9R2xoAsDZ1Rp+Icerx9TO2sasMCua3d8/Kve0Wy7je9/6HfOwU+zxI2Z3Xh46SzWxLykvnDgs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469656; c=relaxed/simple; bh=F1j/SJFc7xTc36c5OvZ6+87X2uuK24+pWbCVV6hccWw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=RdCfARqoh9bvNEPRXluo7Lpjb/0P9vTOV08C6pE11YZkY96nDxXMjgxpvnTIozokMR0D/Z2pipFRSPy7M0f1jkFkSW8U433q9ifzsRXNFwr2zPsimO6ky3fG/E0oXqKRW2RmsCl2qClgw6BKT48TFYL26lJkpqx6L8R7mBhlXz4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=c4mFt03e; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="c4mFt03e" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469655; x=1756005655; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=F1j/SJFc7xTc36c5OvZ6+87X2uuK24+pWbCVV6hccWw=; b=c4mFt03eqV3nuuV/nERmxEE/jK7tHLV8vrv+6oq21b/TxV5M9+n5JoYs lsDgxAgk9zof/LXawJGB2R1mFJ+OJNfjn1DDBwBXD6Z0JTspBcGlulM1H EP4eQ8sKoX8dfUrAzrfsCkiKlDElxAd3hFP7FyCRJ0/AOeqGQk1Ikjz90 W44UtkeohRx87WJOsnkFFYvL1bfi0B0LcpXJSJ1BPg0XHX3rsfcvh+t86 HwTtHHXVanbfhhtEjaI4BmZEANCV7X13eFErHS4qgR1xpA4KI/YBIxoqq q/PY1LTxmkO2X8IjWELrM2IrEsibpc+z4H0R1mVV5kvxi+eJ+16E545gu Q==; X-CSE-ConnectionGUID: rNLyG6+uRx2CrNeix6R8EQ== X-CSE-MsgGUID: Xv/+zskfQwOcRODwTROn/A== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187828" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187828" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:50 -0700 X-CSE-ConnectionGUID: DVSdMZJiQiyN8hGP5fRLAA== X-CSE-MsgGUID: dz1+YSGiQbqfSPEawcpexA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492139" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:50 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz Saleem , Tatyana Nikolova Subject: [RFC v2 21/25] RDMA/irdma: Restrict Memory Window and CQE Timestamping to GEN3 Date: Fri, 23 Aug 2024 22:19:20 -0500 Message-Id: <20240824031924.421-22-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Shiraz Saleem With the deprecation of Memory Window and Timestamping support in GEN2, move these features to be exclusive to GEN3. This iteration supports only Type2 Memory Windows. 
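For context only (not part of this patch): a type 2 memory window is driven entirely by the consumer, so the flow this enables is, from userspace, roughly the libibverbs sequence sketched below. This is a hedged illustration; pd, qp and an MR registered with IBV_ACCESS_MW_BIND are assumed to already exist, and bind_type2_mw is a hypothetical helper name.

/* Illustrative sketch only -- not part of this patch. Assumes pd, qp and
 * an MR registered with IBV_ACCESS_MW_BIND are already set up.
 */
#include <stdint.h>
#include <infiniband/verbs.h>

static int bind_type2_mw(struct ibv_pd *pd, struct ibv_qp *qp,
			 struct ibv_mr *mr, void *addr, size_t len)
{
	struct ibv_mw *mw = ibv_alloc_mw(pd, IBV_MW_TYPE_2);
	struct ibv_send_wr wr = {0}, *bad_wr;

	if (!mw)
		return -1;

	/* Type 2 windows are bound by posting a work request on the QP. */
	wr.opcode = IBV_WR_BIND_MW;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.bind_mw.mw = mw;
	wr.bind_mw.rkey = ibv_inc_rkey(mw->rkey);	/* new rkey for this binding */
	wr.bind_mw.bind_info.mr = mr;
	wr.bind_mw.bind_info.addr = (uint64_t)(uintptr_t)addr;
	wr.bind_mw.bind_info.length = len;
	wr.bind_mw.bind_info.mw_access_flags = IBV_ACCESS_REMOTE_WRITE;

	return ibv_post_send(qp, &wr, &bad_wr);
}

Once the bind completes, the peer addresses the window with the rkey placed in wr.bind_mw.rkey; on pre-GEN3 hardware ibv_alloc_mw() simply fails because the device no longer advertises MW support.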
Additionally, it includes the reporting of the timestamp mask and Host Channel Adapter (HCA) core clock frequency via the query device verb. Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/verbs.c | 39 +++++++++++++++++++---------- 1 file changed, 26 insertions(+), 13 deletions(-) diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 9d08937848bc..66e67be2e67b 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -41,7 +41,8 @@ static int irdma_query_device(struct ib_device *ibdev, props->max_cq = rf->max_cq - rf->used_cqs; props->max_cqe = rf->max_cqe - 1; props->max_mr = rf->max_mr - rf->used_mrs; - props->max_mw = props->max_mr; + if (hw_attrs->uk_attrs.hw_rev >= IRDMA_GEN_3) + props->max_mw = props->max_mr; props->max_pd = rf->max_pd - rf->used_pds; props->max_sge_rd = hw_attrs->uk_attrs.max_hw_read_sges; props->max_qp_rd_atom = hw_attrs->max_hw_ird; @@ -59,6 +60,13 @@ static int irdma_query_device(struct ib_device *ibdev, props->max_srq = rf->max_srq - rf->used_srqs; props->max_srq_wr = IRDMA_MAX_SRQ_WRS; props->max_srq_sge = hw_attrs->uk_attrs.max_hw_wq_frags; + if (hw_attrs->uk_attrs.hw_rev >= IRDMA_GEN_3) { +#define HCA_CORE_CLOCK_KHZ 1000000UL + props->timestamp_mask = GENMASK(31, 0); + props->hca_core_clock = HCA_CORE_CLOCK_KHZ; + } + if (hw_attrs->uk_attrs.hw_rev >= IRDMA_GEN_3) + props->device_cap_flags |= IB_DEVICE_MEM_WINDOW_TYPE_2B; return 0; } @@ -795,7 +803,8 @@ static void irdma_roce_fill_and_set_qpctx_info(struct irdma_qp *iwqp, roce_info->is_qp1 = true; roce_info->rd_en = true; roce_info->wr_rdresp_en = true; - roce_info->bind_en = true; + if (dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + roce_info->bind_en = true; roce_info->dcqcn_en = false; roce_info->rtomin = 5; @@ -826,7 +835,6 @@ static void irdma_iw_fill_and_set_qpctx_info(struct irdma_qp *iwqp, ether_addr_copy(iwarp_info->mac_addr, iwdev->netdev->dev_addr); iwarp_info->rd_en = true; iwarp_info->wr_rdresp_en = true; - iwarp_info->bind_en = true; iwarp_info->ecn_en = true; iwarp_info->rtomin = 5; @@ -1144,8 +1152,6 @@ static int irdma_get_ib_acc_flags(struct irdma_qp *iwqp) } if (iwqp->iwarp_info.rd_en) acc_flags |= IB_ACCESS_REMOTE_READ; - if (iwqp->iwarp_info.bind_en) - acc_flags |= IB_ACCESS_MW_BIND; } return acc_flags; } @@ -2425,8 +2431,8 @@ static int irdma_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr) static inline int cq_validate_flags(u32 flags, u8 hw_rev) { - /* GEN1 does not support CQ create flags */ - if (hw_rev == IRDMA_GEN_1) + /* GEN1/2 does not support CQ create flags */ + if (hw_rev <= IRDMA_GEN_2) return flags ? -EOPNOTSUPP : 0; return flags & ~IB_UVERBS_CQ_FLAGS_TIMESTAMP_COMPLETION ? -EOPNOTSUPP : 0; @@ -2648,8 +2654,9 @@ static int irdma_create_cq(struct ib_cq *ibcq, /** * irdma_get_mr_access - get hw MR access permissions from IB access flags * @access: IB access flags + * @hw_rev: Hardware version */ -static inline u16 irdma_get_mr_access(int access) +static inline u16 irdma_get_mr_access(int access, u8 hw_rev) { u16 hw_access = 0; @@ -2659,8 +2666,10 @@ static inline u16 irdma_get_mr_access(int access) IRDMA_ACCESS_FLAGS_REMOTEWRITE : 0; hw_access |= (access & IB_ACCESS_REMOTE_READ) ? IRDMA_ACCESS_FLAGS_REMOTEREAD : 0; - hw_access |= (access & IB_ACCESS_MW_BIND) ? - IRDMA_ACCESS_FLAGS_BIND_WINDOW : 0; + if (hw_rev >= IRDMA_GEN_3) { + hw_access |= (access & IB_ACCESS_MW_BIND) ? 
+ IRDMA_ACCESS_FLAGS_BIND_WINDOW : 0; + } hw_access |= (access & IB_ZERO_BASED) ? IRDMA_ACCESS_FLAGS_ZERO_BASED : 0; hw_access |= IRDMA_ACCESS_FLAGS_LOCALREAD; @@ -3230,7 +3239,8 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr, stag_info->stag_idx = iwmr->stag >> IRDMA_CQPSQ_STAG_IDX_S; stag_info->stag_key = (u8)iwmr->stag; stag_info->total_len = iwmr->len; - stag_info->access_rights = irdma_get_mr_access(access); + stag_info->access_rights = irdma_get_mr_access(access, + iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev); stag_info->pd_id = iwpd->sc_pd.pd_id; stag_info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; if (stag_info->access_rights & IRDMA_ACCESS_FLAGS_ZERO_BASED) @@ -4015,7 +4025,9 @@ static int irdma_post_send(struct ib_qp *ibqp, stag_info.signaled = info.signaled; stag_info.read_fence = info.read_fence; - stag_info.access_rights = irdma_get_mr_access(reg_wr(ib_wr)->access); + stag_info.access_rights = + irdma_get_mr_access(reg_wr(ib_wr)->access, + dev->hw_attrs.uk_attrs.hw_rev); stag_info.stag_key = reg_wr(ib_wr)->key & 0xff; stag_info.stag_idx = reg_wr(ib_wr)->key >> 8; stag_info.page_size = reg_wr(ib_wr)->mr->page_size; @@ -5218,7 +5230,9 @@ static const struct ib_device_ops irdma_gen1_dev_ops = { }; static const struct ib_device_ops irdma_gen3_dev_ops = { + .alloc_mw = irdma_alloc_mw, .create_srq = irdma_create_srq, + .dealloc_mw = irdma_dealloc_mw, .destroy_srq = irdma_destroy_srq, .modify_srq = irdma_modify_srq, .post_srq_recv = irdma_post_srq_recv, @@ -5259,7 +5273,6 @@ static const struct ib_device_ops irdma_dev_ops = { .alloc_hw_port_stats = irdma_alloc_hw_port_stats, .alloc_mr = irdma_alloc_mr, - .alloc_mw = irdma_alloc_mw, .alloc_pd = irdma_alloc_pd, .alloc_ucontext = irdma_alloc_ucontext, .create_cq = irdma_create_cq, From patchwork Sat Aug 24 03:19:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776210 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C6E7680604; Sat, 24 Aug 2024 03:20:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469657; cv=none; b=l8sBP+Uum8DG/lDUccUdZIlQxIIF6RHvKmb36rWJ+6Q8WX+MOPeqybmef50X+fANYB3SGIUglutL35bWdQHMhqKwppKxikh/kkuPYZoOsq72/Xk7mBbvodyyfb4KEEMXc7YmYWmk+6A7qhkwDuyrsviKARKVTIvCXFqROi/I0Po= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469657; c=relaxed/simple; bh=F3plSp/0HplGlTa+sycP8Fu4hcZxQYOg06Pi4OUFnlU=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=cnjq0Ma4/KOE6RYPJeEUZ2klHyZIkxtj66kksBSL5aLy7GnzCZAnkZkxlvJEoZNBXaOkwX57I8DhPhcUQvIGs/MjCSLZ2BK1SxLDYMmVl129MIbWy+x6tO/ksdOCqY448z3+temVUCXHXnMxa3XFNK4eqfnS9BlUpoHS/yeiVJE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=dkU0xW5O; arc=none smtp.client-ip=192.198.163.16 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; 
dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="dkU0xW5O" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1724469656; x=1756005656; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=F3plSp/0HplGlTa+sycP8Fu4hcZxQYOg06Pi4OUFnlU=; b=dkU0xW5O/yypdmERsws+8SgInOOoXSw0PRhISLelfa4POvD2C2XeNsNS b2iMCFPit7ot9wpSG1Yp8xa1EP/NKKDPuO3+sZOPlBCcpkXRWmrgAD60E BHi3Y4vKwq2a92mxsX9ZscZqvzaBMz7LgoYwoK4B1zco/p/uHW6aV7k6z 0pZyTsoFNaVgKQZwHfxWdclvNWMzs/DY6YsEDAhoFzPw+MTsppuhwnj+G wb9+Rcc2ymW4Wi0YlO+flkfcmmdjFn1T+A1zGnVkbwPl/ejtA50h7RjVE hAHhBJBW5odN0ysIpL5HNate+BLeisizityCwKF8bUAeIjdB3+YBn4NwZ A==; X-CSE-ConnectionGUID: prOwXhYwQGmlUHwrPz05kw== X-CSE-MsgGUID: fRkHDsFjTi63DgBa1iVSzg== X-IronPort-AV: E=McAfee;i="6700,10204,11173"; a="13187831" X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="13187831" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmvoesa110.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:51 -0700 X-CSE-ConnectionGUID: nsicXgHERcSgCvELOqFwyg== X-CSE-MsgGUID: pajE4G+SQT64SZn3mFj/lQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.10,172,1719903600"; d="scan'208";a="99492142" Received: from tenikolo-mobl1.amr.corp.intel.com ([10.124.36.66]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2024 20:20:50 -0700 From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Faisal Latif , Tatyana Nikolova Subject: [RFC v2 22/25] RDMA/irdma: Add Atomic Operations support Date: Fri, 23 Aug 2024 22:19:21 -0500 Message-Id: <20240824031924.421-23-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Faisal Latif Extend irdma to support atomic operations, namely Compare and Swap and Fetch and Add, for GEN3 devices. 
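As a rough usage sketch (not part of this patch): a kernel consumer reaches this new GEN3 support through the existing core verbs atomic work requests, e.g. a fetch-and-add posted roughly as below. The qp, keys and the DMA-mapped 8-byte local buffer are assumed to be set up elsewhere, the remote MR must allow IB_ACCESS_REMOTE_ATOMIC, and the remote address must be 8-byte aligned; post_fetch_add is a hypothetical helper name.

/* Illustrative sketch only -- not part of this patch. */
#include <rdma/ib_verbs.h>

static int post_fetch_add(struct ib_qp *qp, u64 local_dma_addr, u32 lkey,
			  u64 remote_addr, u32 rkey, u64 add_val)
{
	struct ib_sge sge = {
		.addr   = local_dma_addr,	/* receives the original 64-bit value */
		.length = sizeof(u64),
		.lkey   = lkey,
	};
	struct ib_atomic_wr awr = {
		.wr = {
			.opcode     = IB_WR_ATOMIC_FETCH_AND_ADD,
			.send_flags = IB_SEND_SIGNALED,
			.sg_list    = &sge,
			.num_sge    = 1,
		},
		.remote_addr = remote_addr,	/* must be 8-byte aligned */
		.compare_add = add_val,		/* value added at the responder */
		.rkey        = rkey,
	};
	const struct ib_send_wr *bad_wr;

	return ib_post_send(qp, &awr.wr, &bad_wr);
}

A compare-and-swap is posted the same way with IB_WR_ATOMIC_CMP_AND_SWP, using compare_add as the compare value and swap as the value written on a match.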
Signed-off-by: Faisal Latif Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 7 ++ drivers/infiniband/hw/irdma/defs.h | 10 ++- drivers/infiniband/hw/irdma/type.h | 4 ++ drivers/infiniband/hw/irdma/uk.c | 102 ++++++++++++++++++++++++++++ drivers/infiniband/hw/irdma/user.h | 27 ++++++++ drivers/infiniband/hw/irdma/verbs.c | 38 +++++++++++ drivers/infiniband/hw/irdma/verbs.h | 6 ++ 7 files changed, 193 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index d7165bd7f142..40868b58063d 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -1110,6 +1110,8 @@ static void irdma_sc_qp_setctx_roce_gen_3(struct irdma_sc_qp *qp, FIELD_PREP(IRDMAQPC_UDPRIVCQENABLE, roce_info->udprivcq_en) | FIELD_PREP(IRDMAQPC_PRIVEN, roce_info->priv_mode_en) | + FIELD_PREP(IRDMAQPC_REMOTE_ATOMIC_EN, + info->remote_atomics_en) | FIELD_PREP(IRDMAQPC_TIMELYENABLE, roce_info->timely_en)); set_64bit_val(qp_ctx, 168, FIELD_PREP(IRDMAQPC_QPCOMPCTX, info->qp_compl_ctx)); @@ -1489,6 +1491,8 @@ static int irdma_sc_alloc_stag(struct irdma_sc_dev *dev, FIELD_PREP(IRDMA_CQPSQ_STAG_REMACCENABLED, info->remote_access) | FIELD_PREP(IRDMA_CQPSQ_STAG_USEHMCFNIDX, info->use_hmc_fcn_index) | FIELD_PREP(IRDMA_CQPSQ_STAG_USEPFRID, info->use_pf_rid) | + FIELD_PREP(IRDMA_CQPSQ_STAG_REMOTE_ATOMIC_EN, + info->remote_atomics_en) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ @@ -1580,6 +1584,8 @@ static int irdma_sc_mr_reg_non_shared(struct irdma_sc_dev *dev, FIELD_PREP(IRDMA_CQPSQ_STAG_VABASEDTO, addr_type) | FIELD_PREP(IRDMA_CQPSQ_STAG_USEHMCFNIDX, info->use_hmc_fcn_index) | FIELD_PREP(IRDMA_CQPSQ_STAG_USEPFRID, info->use_pf_rid) | + FIELD_PREP(IRDMA_CQPSQ_STAG_REMOTE_ATOMIC_EN, + info->remote_atomics_en) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ @@ -1736,6 +1742,7 @@ int irdma_sc_mr_fast_register(struct irdma_sc_qp *qp, FIELD_PREP(IRDMAQPSQ_READFENCE, info->read_fence) | FIELD_PREP(IRDMAQPSQ_LOCALFENCE, info->local_fence) | FIELD_PREP(IRDMAQPSQ_SIGCOMPL, info->signaled) | + FIELD_PREP(IRDMAQPSQ_REMOTE_ATOMICS_EN, info->remote_atomics_en) | FIELD_PREP(IRDMAQPSQ_VALID, qp->qp_uk.swqe_polarity); dma_wmb(); /* make sure WQE is written before valid bit is set */ diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 8ead170a8930..9c0fd4603a82 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -189,6 +189,8 @@ enum irdma_protocol_used { #define IRDMAQP_OP_RDMA_READ_LOC_INV 0x0b #define IRDMAQP_OP_NOP 0x0c #define IRDMAQP_OP_RDMA_WRITE_SOL 0x0d +#define IRDMAQP_OP_ATOMIC_FETCH_ADD 0x0f +#define IRDMAQP_OP_ATOMIC_COMPARE_SWAP_ADD 0x11 #define IRDMAQP_OP_GEN_RTS_AE 0x30 enum irdma_cqp_op_type { @@ -696,7 +698,8 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_STAG_USEPFRID BIT_ULL(61) #define IRDMA_CQPSQ_STAG_PBA IRDMA_CQPHC_QPCTX -#define IRDMA_CQPSQ_STAG_HMCFNIDX GENMASK_ULL(5, 0) +#define IRDMA_CQPSQ_STAG_HMCFNIDX GENMASK_ULL(15, 0) +#define IRDMA_CQPSQ_STAG_REMOTE_ATOMIC_EN BIT_ULL(61) #define IRDMA_CQPSQ_STAG_FIRSTPMPBLIDX GENMASK_ULL(27, 0) #define IRDMA_CQPSQ_QUERYSTAG_IDX IRDMA_CQPSQ_STAG_IDX @@ -986,6 +989,9 @@ enum irdma_cqp_op_type { #define IRDMAQPSQ_REMTO IRDMA_CQPHC_QPCTX +#define IRDMAQPSQ_STAG GENMASK_ULL(31, 0) +#define IRDMAQPSQ_REMOTE_STAG GENMASK_ULL(31, 0) + #define IRDMAQPSQ_STAGRIGHTS 
GENMASK_ULL(52, 48) #define IRDMAQPSQ_VABASEDTO BIT_ULL(53) #define IRDMAQPSQ_MEMWINDOWTYPE BIT_ULL(54) @@ -996,6 +1002,8 @@ enum irdma_cqp_op_type { #define IRDMAQPSQ_BASEVA_TO_FBO IRDMA_CQPHC_QPCTX +#define IRDMAQPSQ_REMOTE_ATOMICS_EN BIT_ULL(55) + #define IRDMAQPSQ_LOCSTAG GENMASK_ULL(31, 0) #define IRDMAQPSQ_STAGKEY GENMASK_ULL(7, 0) diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index adfc528a268e..52aa1dd3cbb7 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -1094,6 +1094,7 @@ struct irdma_qp_host_ctx_info { u32 srq_id; u32 rem_endpoint_idx; u16 stats_idx; + bool remote_atomics_en:1; bool srq_valid:1; bool tcp_info_valid:1; bool iwarp_info_valid:1; @@ -1134,6 +1135,7 @@ struct irdma_allocate_stag_info { bool use_hmc_fcn_index:1; bool use_pf_rid:1; bool all_memory:1; + bool remote_atomics_en:1; u16 hmc_fcn_index; }; @@ -1162,6 +1164,7 @@ struct irdma_reg_ns_stag_info { u8 hmc_fcn_index; bool use_pf_rid:1; bool all_memory:1; + bool remote_atomics_en:1; }; struct irdma_fast_reg_stag_info { @@ -1185,6 +1188,7 @@ struct irdma_fast_reg_stag_info { u8 hmc_fcn_index; bool use_pf_rid:1; bool defer_flag:1; + bool remote_atomics_en:1; }; struct irdma_dealloc_stag_info { diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c index 26f3475f6453..24e8df0f8033 100644 --- a/drivers/infiniband/hw/irdma/uk.c +++ b/drivers/infiniband/hw/irdma/uk.c @@ -337,6 +337,108 @@ int irdma_uk_rdma_write(struct irdma_qp_uk *qp, struct irdma_post_sq_info *info, return 0; } +/** + * irdma_uk_atomic_fetch_add - atomic fetch and add operation + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +int irdma_uk_atomic_fetch_add(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, bool post_sq) +{ + struct irdma_atomic_fetch_add *op_info; + u32 total_size = 0; + u16 quanta = 2; + u32 wqe_idx; + __le64 *wqe; + u64 hdr; + + op_info = &info->op.atomic_fetch_add; + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size, + info); + if (!wqe) + return -ENOMEM; + + set_64bit_val(wqe, 0, op_info->tagged_offset); + set_64bit_val(wqe, 8, + FIELD_PREP(IRDMAQPSQ_STAG, op_info->stag)); + set_64bit_val(wqe, 16, op_info->remote_tagged_offset); + + hdr = FIELD_PREP(IRDMAQPSQ_ADDFRAGCNT, 1) | + FIELD_PREP(IRDMAQPSQ_REMOTE_STAG, op_info->remote_stag) | + FIELD_PREP(IRDMAQPSQ_OPCODE, IRDMAQP_OP_ATOMIC_FETCH_ADD) | + FIELD_PREP(IRDMAQPSQ_READFENCE, info->read_fence) | + FIELD_PREP(IRDMAQPSQ_LOCALFENCE, info->local_fence) | + FIELD_PREP(IRDMAQPSQ_SIGCOMPL, info->signaled) | + FIELD_PREP(IRDMAQPSQ_VALID, qp->swqe_polarity); + + set_64bit_val(wqe, 32, op_info->fetch_add_data_bytes); + set_64bit_val(wqe, 40, 0); + set_64bit_val(wqe, 48, 0); + set_64bit_val(wqe, 56, + FIELD_PREP(IRDMAQPSQ_VALID, qp->swqe_polarity)); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + if (post_sq) + irdma_uk_qp_post_wr(qp); + + return 0; +} + +/** + * irdma_uk_atomic_compare_swap - atomic compare and swap operation + * @qp: hw qp ptr + * @info: post sq information + * @post_sq: flag to post sq + */ +int irdma_uk_atomic_compare_swap(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, bool post_sq) +{ + struct irdma_atomic_compare_swap *op_info; + u32 total_size = 0; + u16 quanta = 2; + u32 wqe_idx; + __le64 *wqe; + u64 hdr; + + op_info = &info->op.atomic_compare_swap; + wqe = irdma_qp_get_next_send_wqe(qp, &wqe_idx, quanta, total_size, + info); + if 
(!wqe) + return -ENOMEM; + + set_64bit_val(wqe, 0, op_info->tagged_offset); + set_64bit_val(wqe, 8, + FIELD_PREP(IRDMAQPSQ_STAG, op_info->stag)); + set_64bit_val(wqe, 16, op_info->remote_tagged_offset); + + hdr = FIELD_PREP(IRDMAQPSQ_ADDFRAGCNT, 1) | + FIELD_PREP(IRDMAQPSQ_REMOTE_STAG, op_info->remote_stag) | + FIELD_PREP(IRDMAQPSQ_OPCODE, IRDMAQP_OP_ATOMIC_COMPARE_SWAP_ADD) | + FIELD_PREP(IRDMAQPSQ_READFENCE, info->read_fence) | + FIELD_PREP(IRDMAQPSQ_LOCALFENCE, info->local_fence) | + FIELD_PREP(IRDMAQPSQ_SIGCOMPL, info->signaled) | + FIELD_PREP(IRDMAQPSQ_VALID, qp->swqe_polarity); + + set_64bit_val(wqe, 32, op_info->swap_data_bytes); + set_64bit_val(wqe, 40, op_info->compare_data_bytes); + set_64bit_val(wqe, 48, 0); + set_64bit_val(wqe, 56, + FIELD_PREP(IRDMAQPSQ_VALID, qp->swqe_polarity)); + + dma_wmb(); /* make sure WQE is populated before valid bit is set */ + + set_64bit_val(wqe, 24, hdr); + + if (post_sq) + irdma_uk_qp_post_wr(qp); + + return 0; +} + /** * irdma_uk_srq_post_receive - post a receive wqe to a shared rq * @srq: shared rq ptr diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h index af15529643e9..96dea01e83db 100644 --- a/drivers/infiniband/hw/irdma/user.h +++ b/drivers/infiniband/hw/irdma/user.h @@ -41,6 +41,8 @@ #define IRDMA_OP_TYPE_INV_STAG 0x0a #define IRDMA_OP_TYPE_RDMA_READ_INV_STAG 0x0b #define IRDMA_OP_TYPE_NOP 0x0c +#define IRDMA_OP_TYPE_ATOMIC_FETCH_AND_ADD 0x0f +#define IRDMA_OP_TYPE_ATOMIC_COMPARE_AND_SWAP 0x11 #define IRDMA_OP_TYPE_REC 0x3e #define IRDMA_OP_TYPE_REC_IMM 0x3f @@ -203,6 +205,24 @@ struct irdma_bind_window { bool ena_writes:1; irdma_stag mw_stag; bool mem_window_type_1:1; + bool remote_atomics_en:1; +}; + +struct irdma_atomic_fetch_add { + u64 tagged_offset; + u64 remote_tagged_offset; + u64 fetch_add_data_bytes; + u32 stag; + u32 remote_stag; +}; + +struct irdma_atomic_compare_swap { + u64 tagged_offset; + u64 remote_tagged_offset; + u64 swap_data_bytes; + u64 compare_data_bytes; + u32 stag; + u32 remote_stag; }; struct irdma_inv_local_stag { @@ -221,6 +241,7 @@ struct irdma_post_sq_info { bool report_rtt:1; bool udp_hdr:1; bool defer_flag:1; + bool remote_atomic_en:1; u32 imm_data; u32 stag_to_inv; union { @@ -229,6 +250,8 @@ struct irdma_post_sq_info { struct irdma_rdma_read rdma_read; struct irdma_bind_window bind_window; struct irdma_inv_local_stag inv_local_stag; + struct irdma_atomic_fetch_add atomic_fetch_add; + struct irdma_atomic_compare_swap atomic_compare_swap; } op; }; @@ -257,6 +280,10 @@ struct irdma_cq_poll_info { bool imm_valid:1; }; +int irdma_uk_atomic_compare_swap(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, bool post_sq); +int irdma_uk_atomic_fetch_add(struct irdma_qp_uk *qp, + struct irdma_post_sq_info *info, bool post_sq); int irdma_uk_inline_rdma_write(struct irdma_qp_uk *qp, struct irdma_post_sq_info *info, bool post_sq); int irdma_uk_inline_send(struct irdma_qp_uk *qp, diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 66e67be2e67b..25e46aefe147 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -60,6 +60,11 @@ static int irdma_query_device(struct ib_device *ibdev, props->max_srq = rf->max_srq - rf->used_srqs; props->max_srq_wr = IRDMA_MAX_SRQ_WRS; props->max_srq_sge = hw_attrs->uk_attrs.max_hw_wq_frags; + if (hw_attrs->uk_attrs.feature_flags & IRDMA_FEATURE_ATOMIC_OPS) + props->atomic_cap = IB_ATOMIC_HCA; + else + props->atomic_cap = IB_ATOMIC_NONE; + props->masked_atomic_cap = 
props->atomic_cap; if (hw_attrs->uk_attrs.hw_rev >= IRDMA_GEN_3) { #define HCA_CORE_CLOCK_KHZ 1000000UL props->timestamp_mask = GENMASK(31, 0); @@ -1145,6 +1150,8 @@ static int irdma_get_ib_acc_flags(struct irdma_qp *iwqp) acc_flags |= IB_ACCESS_REMOTE_READ; if (iwqp->roce_info.bind_en) acc_flags |= IB_ACCESS_MW_BIND; + if (iwqp->ctx_info.remote_atomics_en) + acc_flags |= IB_ACCESS_REMOTE_ATOMIC; } else { if (iwqp->iwarp_info.wr_rdresp_en) { acc_flags |= IB_ACCESS_LOCAL_WRITE; @@ -1152,6 +1159,8 @@ static int irdma_get_ib_acc_flags(struct irdma_qp *iwqp) } if (iwqp->iwarp_info.rd_en) acc_flags |= IB_ACCESS_REMOTE_READ; + if (iwqp->ctx_info.remote_atomics_en) + acc_flags |= IB_ACCESS_REMOTE_ATOMIC; } return acc_flags; } @@ -1448,6 +1457,8 @@ int irdma_modify_qp_roce(struct ib_qp *ibqp, struct ib_qp_attr *attr, roce_info->wr_rdresp_en = true; if (attr->qp_access_flags & IB_ACCESS_REMOTE_READ) roce_info->rd_en = true; + if (attr->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC) + ctx_info->remote_atomics_en = true; } wait_event(iwqp->mod_qp_waitq, !atomic_read(&iwqp->hw_mod_qp_pend)); @@ -1778,6 +1789,8 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask, offload_info->wr_rdresp_en = true; if (attr->qp_access_flags & IB_ACCESS_REMOTE_READ) offload_info->rd_en = true; + if (attr->qp_access_flags & IB_ACCESS_REMOTE_ATOMIC) + ctx_info->remote_atomics_en = true; } if (ctx_info->iwarp_info_valid) { @@ -3241,6 +3254,7 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr, stag_info->total_len = iwmr->len; stag_info->access_rights = irdma_get_mr_access(access, iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev); + stag_info->remote_atomics_en = (access & IB_ACCESS_REMOTE_ATOMIC) ? 1 : 0; stag_info->pd_id = iwpd->sc_pd.pd_id; stag_info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY; if (stag_info->access_rights & IRDMA_ACCESS_FLAGS_ZERO_BASED) @@ -3931,6 +3945,30 @@ static int irdma_post_send(struct ib_qp *ibqp, if (ib_wr->send_flags & IB_SEND_FENCE) info.read_fence = true; switch (ib_wr->opcode) { + case IB_WR_ATOMIC_CMP_AND_SWP: + info.op_type = IRDMA_OP_TYPE_ATOMIC_COMPARE_AND_SWAP; + info.op.atomic_compare_swap.tagged_offset = ib_wr->sg_list[0].addr; + info.op.atomic_compare_swap.remote_tagged_offset = + atomic_wr(ib_wr)->remote_addr; + info.op.atomic_compare_swap.swap_data_bytes = atomic_wr(ib_wr)->swap; + info.op.atomic_compare_swap.compare_data_bytes = + atomic_wr(ib_wr)->compare_add; + info.op.atomic_compare_swap.stag = ib_wr->sg_list[0].lkey; + info.op.atomic_compare_swap.remote_stag = atomic_wr(ib_wr)->rkey; + err = irdma_uk_atomic_compare_swap(ukqp, &info, false); + break; + case IB_WR_ATOMIC_FETCH_AND_ADD: + info.op_type = IRDMA_OP_TYPE_ATOMIC_FETCH_AND_ADD; + info.op.atomic_fetch_add.tagged_offset = ib_wr->sg_list[0].addr; + info.op.atomic_fetch_add.remote_tagged_offset = + atomic_wr(ib_wr)->remote_addr; + info.op.atomic_fetch_add.fetch_add_data_bytes = + atomic_wr(ib_wr)->compare_add; + info.op.atomic_fetch_add.stag = ib_wr->sg_list[0].lkey; + info.op.atomic_fetch_add.remote_stag = + atomic_wr(ib_wr)->rkey; + err = irdma_uk_atomic_fetch_add(ukqp, &info, false); + break; case IB_WR_SEND_WITH_IMM: if (ukqp->qp_caps & IRDMA_SEND_WITH_IMM) { info.imm_data_valid = true; diff --git a/drivers/infiniband/hw/irdma/verbs.h b/drivers/infiniband/hw/irdma/verbs.h index 157dfa2d1a05..0922a22fbede 100644 --- a/drivers/infiniband/hw/irdma/verbs.h +++ b/drivers/infiniband/hw/irdma/verbs.h @@ -284,6 +284,12 @@ static inline void set_ib_wc_op_sq(struct 
irdma_cq_poll_info *cq_poll_info, case IRDMA_OP_TYPE_FAST_REG_NSMR: entry->opcode = IB_WC_REG_MR; break; + case IRDMA_OP_TYPE_ATOMIC_COMPARE_AND_SWAP: + entry->opcode = IB_WC_COMP_SWAP; + break; + case IRDMA_OP_TYPE_ATOMIC_FETCH_AND_ADD: + entry->opcode = IB_WC_FETCH_ADD; + break; case IRDMA_OP_TYPE_INV_STAG: entry->opcode = IB_WC_LOCAL_INV; break; From patchwork Sat Aug 24 03:19:22 2024 X-Patchwork-Id: 13776212 From: Tatyana Nikolova To:
jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz Saleem , Tatyana Nikolova Subject: [RFC v2 23/25] RDMA/irdma: Extend CQE Error and Flush Handling for GEN3 Devices Date: Fri, 23 Aug 2024 22:19:22 -0500 Message-Id: <20240824031924.421-24-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Shiraz Saleem Enhance the CQE error and flush handling specific to GEN3 devices. Unlike GEN1/2 devices, which depend on software to generate completions in error, GEN3 devices leverage firmware to generate CQEs in error for all WQEs posted after a QP moves to an error state. Key changes include: - Updating the CQ poll logic to properly advance the CQ head in the event of a flush CQE. - Updating the flush logic for GEN3 to pass error WQE idx for SQ on an AE to flush out unprocessed WQEs in error. - Isolating the decoding of AE to flush codes into a separate routine irdma_ae_to_qp_err_code. This routine can now be leveraged to flush error CQEs on an AE and when error CQE is received for SRQ Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 9 ++ drivers/infiniband/hw/irdma/defs.h | 105 +-------------- drivers/infiniband/hw/irdma/hw.c | 104 +++++---------- drivers/infiniband/hw/irdma/type.h | 14 +- drivers/infiniband/hw/irdma/uk.c | 39 +++++- drivers/infiniband/hw/irdma/user.h | 194 +++++++++++++++++++++++++++- drivers/infiniband/hw/irdma/verbs.c | 31 +++-- 7 files changed, 297 insertions(+), 199 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 40868b58063d..73cab77d60de 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -2670,6 +2670,12 @@ int irdma_sc_qp_flush_wqes(struct irdma_sc_qp *qp, info->ae_code | FIELD_PREP(IRDMA_CQPSQ_FWQE_AESOURCE, info->ae_src) : 0; set_64bit_val(wqe, 8, temp); + if (cqp->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) { + set_64bit_val(wqe, 40, + FIELD_PREP(IRDMA_CQPSQ_FWQE_ERR_SQ_IDX, info->err_sq_idx)); + set_64bit_val(wqe, 48, + FIELD_PREP(IRDMA_CQPSQ_FWQE_ERR_RQ_IDX, info->err_rq_idx)); + } hdr = qp->qp_uk.qp_id | FIELD_PREP(IRDMA_CQPSQ_OPCODE, IRDMA_CQP_OP_FLUSH_WQES) | @@ -2678,6 +2684,9 @@ int irdma_sc_qp_flush_wqes(struct irdma_sc_qp *qp, FIELD_PREP(IRDMA_CQPSQ_FWQE_FLUSHSQ, flush_sq) | FIELD_PREP(IRDMA_CQPSQ_FWQE_FLUSHRQ, flush_rq) | FIELD_PREP(IRDMA_CQPSQ_WQEVALID, cqp->polarity); + if (cqp->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_3) + hdr |= FIELD_PREP(IRDMA_CQPSQ_FWQE_ERR_SQ_IDX_VALID, info->err_sq_idx_valid) | + FIELD_PREP(IRDMA_CQPSQ_FWQE_ERR_RQ_IDX_VALID, info->err_rq_idx_valid); dma_wmb(); /* make sure WQE is written before valid bit is set */ set_64bit_val(wqe, 24, hdr); diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index 9c0fd4603a82..e75dd8bbd86b 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -301,107 +301,6 @@ enum irdma_cqp_op_type { #define IRDMA_CQP_OP_GATHER_STATS 0x2e #define IRDMA_CQP_OP_UP_MAP 0x2f -/* Async Events codes */ -#define IRDMA_AE_AMP_UNALLOCATED_STAG 0x0102 -#define IRDMA_AE_AMP_INVALID_STAG 0x0103 -#define IRDMA_AE_AMP_BAD_QP 0x0104 -#define IRDMA_AE_AMP_BAD_PD 0x0105 -#define IRDMA_AE_AMP_BAD_STAG_KEY 
0x0106 -#define IRDMA_AE_AMP_BAD_STAG_INDEX 0x0107 -#define IRDMA_AE_AMP_BOUNDS_VIOLATION 0x0108 -#define IRDMA_AE_AMP_RIGHTS_VIOLATION 0x0109 -#define IRDMA_AE_AMP_TO_WRAP 0x010a -#define IRDMA_AE_AMP_FASTREG_VALID_STAG 0x010c -#define IRDMA_AE_AMP_FASTREG_MW_STAG 0x010d -#define IRDMA_AE_AMP_FASTREG_INVALID_RIGHTS 0x010e -#define IRDMA_AE_AMP_FASTREG_INVALID_LENGTH 0x0110 -#define IRDMA_AE_AMP_INVALIDATE_SHARED 0x0111 -#define IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS 0x0112 -#define IRDMA_AE_AMP_INVALIDATE_MR_WITH_BOUND_WINDOWS 0x0113 -#define IRDMA_AE_AMP_MWBIND_VALID_STAG 0x0114 -#define IRDMA_AE_AMP_MWBIND_OF_MR_STAG 0x0115 -#define IRDMA_AE_AMP_MWBIND_TO_ZERO_BASED_STAG 0x0116 -#define IRDMA_AE_AMP_MWBIND_TO_MW_STAG 0x0117 -#define IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS 0x0118 -#define IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS 0x0119 -#define IRDMA_AE_AMP_MWBIND_TO_INVALID_PARENT 0x011a -#define IRDMA_AE_AMP_MWBIND_BIND_DISABLED 0x011b -#define IRDMA_AE_PRIV_OPERATION_DENIED 0x011c -#define IRDMA_AE_AMP_INVALIDATE_TYPE1_MW 0x011d -#define IRDMA_AE_AMP_MWBIND_ZERO_BASED_TYPE1_MW 0x011e -#define IRDMA_AE_AMP_FASTREG_INVALID_PBL_HPS_CFG 0x011f -#define IRDMA_AE_AMP_MWBIND_WRONG_TYPE 0x0120 -#define IRDMA_AE_AMP_FASTREG_PBLE_MISMATCH 0x0121 -#define IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG 0x0132 -#define IRDMA_AE_UDA_XMIT_BAD_PD 0x0133 -#define IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT 0x0134 -#define IRDMA_AE_UDA_L4LEN_INVALID 0x0135 -#define IRDMA_AE_BAD_CLOSE 0x0201 -#define IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE 0x0202 -#define IRDMA_AE_CQ_OPERATION_ERROR 0x0203 -#define IRDMA_AE_RDMA_READ_WHILE_ORD_ZERO 0x0205 -#define IRDMA_AE_STAG_ZERO_INVALID 0x0206 -#define IRDMA_AE_IB_RREQ_AND_Q1_FULL 0x0207 -#define IRDMA_AE_IB_INVALID_REQUEST 0x0208 -#define IRDMA_AE_SRQ_LIMIT 0x0209 -#define IRDMA_AE_WQE_UNEXPECTED_OPCODE 0x020a -#define IRDMA_AE_WQE_INVALID_PARAMETER 0x020b -#define IRDMA_AE_WQE_INVALID_FRAG_DATA 0x020c -#define IRDMA_AE_IB_REMOTE_ACCESS_ERROR 0x020d -#define IRDMA_AE_IB_REMOTE_OP_ERROR 0x020e -#define IRDMA_AE_SRQ_CATASTROPHIC_ERROR 0x020f -#define IRDMA_AE_WQE_LSMM_TOO_LONG 0x0220 -#define IRDMA_AE_ATOMIC_ALIGNMENT 0x0221 -#define IRDMA_AE_ATOMIC_MASK 0x0222 -#define IRDMA_AE_INVALID_REQUEST 0x0223 -#define IRDMA_AE_PCIE_ATOMIC_DISABLE 0x0224 -#define IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN 0x0301 -#define IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER 0x0303 -#define IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION 0x0304 -#define IRDMA_AE_DDP_UBE_INVALID_MO 0x0305 -#define IRDMA_AE_DDP_UBE_INVALID_MSN_NO_BUFFER_AVAILABLE 0x0306 -#define IRDMA_AE_DDP_UBE_INVALID_QN 0x0307 -#define IRDMA_AE_DDP_NO_L_BIT 0x0308 -#define IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION 0x0311 -#define IRDMA_AE_RDMAP_ROE_UNEXPECTED_OPCODE 0x0312 -#define IRDMA_AE_ROE_INVALID_RDMA_READ_REQUEST 0x0313 -#define IRDMA_AE_ROE_INVALID_RDMA_WRITE_OR_READ_RESP 0x0314 -#define IRDMA_AE_ROCE_RSP_LENGTH_ERROR 0x0316 -#define IRDMA_AE_ROCE_EMPTY_MCG 0x0380 -#define IRDMA_AE_ROCE_BAD_MC_IP_ADDR 0x0381 -#define IRDMA_AE_ROCE_BAD_MC_QPID 0x0382 -#define IRDMA_AE_MCG_QP_PROTOCOL_MISMATCH 0x0383 -#define IRDMA_AE_INVALID_ARP_ENTRY 0x0401 -#define IRDMA_AE_INVALID_TCP_OPTION_RCVD 0x0402 -#define IRDMA_AE_STALE_ARP_ENTRY 0x0403 -#define IRDMA_AE_INVALID_AH_ENTRY 0x0406 -#define IRDMA_AE_LLP_CLOSE_COMPLETE 0x0501 -#define IRDMA_AE_LLP_CONNECTION_RESET 0x0502 -#define IRDMA_AE_LLP_FIN_RECEIVED 0x0503 -#define IRDMA_AE_LLP_RECEIVED_MARKER_AND_LENGTH_FIELDS_DONT_MATCH 0x0504 -#define IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR 0x0505 -#define 
IRDMA_AE_LLP_SEGMENT_TOO_SMALL 0x0507 -#define IRDMA_AE_LLP_SYN_RECEIVED 0x0508 -#define IRDMA_AE_LLP_TERMINATE_RECEIVED 0x0509 -#define IRDMA_AE_LLP_TOO_MANY_RETRIES 0x050a -#define IRDMA_AE_LLP_TOO_MANY_KEEPALIVE_RETRIES 0x050b -#define IRDMA_AE_LLP_DOUBT_REACHABILITY 0x050c -#define IRDMA_AE_LLP_CONNECTION_ESTABLISHED 0x050e -#define IRDMA_AE_LLP_TOO_MANY_RNRS 0x050f -#define IRDMA_AE_RESOURCE_EXHAUSTION 0x0520 -#define IRDMA_AE_RESET_SENT 0x0601 -#define IRDMA_AE_TERMINATE_SENT 0x0602 -#define IRDMA_AE_RESET_NOT_SENT 0x0603 -#define IRDMA_AE_LCE_QP_CATASTROPHIC 0x0700 -#define IRDMA_AE_LCE_FUNCTION_CATASTROPHIC 0x0701 -#define IRDMA_AE_LCE_CQ_CATASTROPHIC 0x0702 -#define IRDMA_AE_REMOTE_QP_CATASTROPHIC 0x0703 -#define IRDMA_AE_LOCAL_QP_CATASTROPHIC 0x0704 -#define IRDMA_AE_RCE_QP_CATASTROPHIC 0x0705 -#define IRDMA_AE_QP_SUSPEND_COMPLETE 0x0900 -#define IRDMA_AE_CQP_DEFERRED_COMPLETE 0x0901 -#define IRDMA_AE_ADAPTER_CATASTROPHIC 0x0B0B - #define FLD_LS_64(dev, val, field) \ (((u64)(val) << (dev)->hw_shifts[field ## _S]) & (dev)->hw_masks[field ## _M]) #define FLD_RS_64(dev, val, field) \ @@ -778,6 +677,10 @@ enum irdma_cqp_op_type { #define IRDMA_CQPSQ_FWQE_USERFLCODE BIT_ULL(60) #define IRDMA_CQPSQ_FWQE_FLUSHSQ BIT_ULL(61) #define IRDMA_CQPSQ_FWQE_FLUSHRQ BIT_ULL(62) +#define IRDMA_CQPSQ_FWQE_ERR_SQ_IDX_VALID BIT_ULL(42) +#define IRDMA_CQPSQ_FWQE_ERR_SQ_IDX GENMASK_ULL(49, 32) +#define IRDMA_CQPSQ_FWQE_ERR_RQ_IDX_VALID BIT_ULL(43) +#define IRDMA_CQPSQ_FWQE_ERR_RQ_IDX GENMASK_ULL(46, 32) #define IRDMA_CQPSQ_MAPT_PORT GENMASK_ULL(15, 0) #define IRDMA_CQPSQ_MAPT_ADDPORT BIT_ULL(62) #define IRDMA_CQPSQ_UPESD_SDCMD GENMASK_ULL(31, 0) diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c index 524fe5dba760..4daaefa596cd 100644 --- a/drivers/infiniband/hw/irdma/hw.c +++ b/drivers/infiniband/hw/irdma/hw.c @@ -133,78 +133,26 @@ static void irdma_process_ceq(struct irdma_pci_f *rf, struct irdma_ceq *ceq) } static void irdma_set_flush_fields(struct irdma_sc_qp *qp, - struct irdma_aeqe_info *info) + struct irdma_aeqe_info *info, + struct irdma_qp_host_ctx_info *ctx_info) { + struct qp_err_code qp_err; + qp->sq_flush_code = info->sq; qp->rq_flush_code = info->rq; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - - switch (info->ae_id) { - case IRDMA_AE_AMP_BOUNDS_VIOLATION: - case IRDMA_AE_AMP_INVALID_STAG: - case IRDMA_AE_AMP_RIGHTS_VIOLATION: - case IRDMA_AE_AMP_UNALLOCATED_STAG: - case IRDMA_AE_AMP_BAD_PD: - case IRDMA_AE_AMP_BAD_QP: - case IRDMA_AE_AMP_BAD_STAG_KEY: - case IRDMA_AE_AMP_BAD_STAG_INDEX: - case IRDMA_AE_AMP_TO_WRAP: - case IRDMA_AE_PRIV_OPERATION_DENIED: - qp->flush_code = FLUSH_PROT_ERR; - qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR; - break; - case IRDMA_AE_UDA_XMIT_BAD_PD: - case IRDMA_AE_WQE_UNEXPECTED_OPCODE: - qp->flush_code = FLUSH_LOC_QP_OP_ERR; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - break; - case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG: - case IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT: - case IRDMA_AE_UDA_L4LEN_INVALID: - case IRDMA_AE_DDP_UBE_INVALID_MO: - case IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER: - qp->flush_code = FLUSH_LOC_LEN_ERR; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - break; - case IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS: - case IRDMA_AE_IB_REMOTE_ACCESS_ERROR: - qp->flush_code = FLUSH_REM_ACCESS_ERR; - qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR; - break; - case IRDMA_AE_LLP_SEGMENT_TOO_SMALL: - case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR: - case IRDMA_AE_ROCE_RSP_LENGTH_ERROR: - case 
IRDMA_AE_IB_REMOTE_OP_ERROR: - qp->flush_code = FLUSH_REM_OP_ERR; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - break; - case IRDMA_AE_LCE_QP_CATASTROPHIC: - qp->flush_code = FLUSH_FATAL_ERR; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - break; - case IRDMA_AE_IB_RREQ_AND_Q1_FULL: - qp->flush_code = FLUSH_GENERAL_ERR; - break; - case IRDMA_AE_LLP_TOO_MANY_RETRIES: - qp->flush_code = FLUSH_RETRY_EXC_ERR; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - break; - case IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS: - case IRDMA_AE_AMP_MWBIND_BIND_DISABLED: - case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS: - case IRDMA_AE_AMP_MWBIND_VALID_STAG: - qp->flush_code = FLUSH_MW_BIND_ERR; - qp->event_type = IRDMA_QP_EVENT_ACCESS_ERR; - break; - case IRDMA_AE_IB_INVALID_REQUEST: - qp->flush_code = FLUSH_REM_INV_REQ_ERR; - qp->event_type = IRDMA_QP_EVENT_REQ_ERR; - break; - default: - qp->flush_code = FLUSH_GENERAL_ERR; - qp->event_type = IRDMA_QP_EVENT_CATASTROPHIC; - break; + if (info->sq && qp->qp_uk.uk_attrs->hw_rev >= IRDMA_GEN_3) { + qp->err_sq_idx_valid = true; + qp->err_sq_idx = info->wqe_idx; + } + if (ctx_info->roce_info->err_rq_idx_valid && qp->qp_uk.uk_attrs->hw_rev >= IRDMA_GEN_3) { + qp->err_rq_idx_valid = true; + qp->err_rq_idx = ctx_info->roce_info->err_rq_idx; } + + qp_err = irdma_ae_to_qp_err_code(info->ae_id); + + qp->flush_code = qp_err.flush_code; + qp->event_type = qp_err.event_type; } /** @@ -465,14 +413,16 @@ static void irdma_process_aeq(struct irdma_pci_f *rf) default: ibdev_err(&iwdev->ibdev, "abnormal ae_id = 0x%x bool qp=%d qp_id = %d, ae_src=%d\n", info->ae_id, info->qp, info->qp_cq_id, info->ae_src); - if (rdma_protocol_roce(&iwdev->ibdev, 1)) { - ctx_info->roce_info->err_rq_idx_valid = info->rq; - if (info->rq) { + ctx_info = &iwqp->ctx_info; + if (rdma_protocol_roce(&iwqp->iwdev->ibdev, 1)) { + ctx_info->roce_info->err_rq_idx_valid = + ctx_info->srq_valid ? 
false : info->err_rq_idx_valid; + if (ctx_info->roce_info->err_rq_idx_valid) { ctx_info->roce_info->err_rq_idx = info->wqe_idx; irdma_sc_qp_setctx_roce(&iwqp->sc_qp, iwqp->host_ctx.va, ctx_info); } - irdma_set_flush_fields(qp, info); + irdma_set_flush_fields(qp, info, ctx_info); irdma_cm_disconn(iwqp); break; } @@ -2831,7 +2781,9 @@ void irdma_flush_wqes(struct irdma_qp *iwqp, u32 flush_mask) struct irdma_pci_f *rf = iwqp->iwdev->rf; u8 flush_code = iwqp->sc_qp.flush_code; - if (!(flush_mask & IRDMA_FLUSH_SQ) && !(flush_mask & IRDMA_FLUSH_RQ)) + if ((!(flush_mask & IRDMA_FLUSH_SQ) && + !(flush_mask & IRDMA_FLUSH_RQ)) || + ((flush_mask & IRDMA_REFLUSH) && rf->rdma_ver >= IRDMA_GEN_3)) return; /* Set flush info fields*/ @@ -2844,6 +2796,10 @@ void irdma_flush_wqes(struct irdma_qp *iwqp, u32 flush_mask) info.rq_major_code = IRDMA_FLUSH_MAJOR_ERR; info.rq_minor_code = FLUSH_GENERAL_ERR; info.userflushcode = true; + info.err_sq_idx_valid = iwqp->sc_qp.err_sq_idx_valid; + info.err_sq_idx = iwqp->sc_qp.err_sq_idx; + info.err_rq_idx_valid = iwqp->sc_qp.err_rq_idx_valid; + info.err_rq_idx = iwqp->sc_qp.err_rq_idx; if (flush_mask & IRDMA_REFLUSH) { if (info.sq) @@ -2857,7 +2813,7 @@ void irdma_flush_wqes(struct irdma_qp *iwqp, u32 flush_mask) if (info.rq && iwqp->sc_qp.rq_flush_code) info.rq_minor_code = flush_code; } - if (!iwqp->user_mode) + if (!iwqp->user_mode && rf->rdma_ver <= IRDMA_GEN_2) queue_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush, msecs_to_jiffies(IRDMA_FLUSH_DELAY_MS)); diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 52aa1dd3cbb7..99160910f24b 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -97,12 +97,6 @@ enum irdma_term_mpa_errors { MPA_REQ_RSP = 0x04, }; -enum irdma_qp_event_type { - IRDMA_QP_EVENT_CATASTROPHIC, - IRDMA_QP_EVENT_ACCESS_ERR, - IRDMA_QP_EVENT_REQ_ERR, -}; - enum irdma_hw_stats_index { /* gen1 - 32-bit */ IRDMA_HW_STAT_INDEX_IP4RXDISCARD = 0, @@ -573,6 +567,10 @@ struct irdma_sc_qp { bool virtual_map:1; bool flush_sq:1; bool flush_rq:1; + bool err_sq_idx_valid:1; + bool err_rq_idx_valid:1; + u32 err_sq_idx; + u32 err_rq_idx; bool sq_flush_code:1; bool rq_flush_code:1; u32 pkt_limit; @@ -1296,6 +1294,8 @@ struct irdma_cqp_manage_push_page_info { }; struct irdma_qp_flush_info { + u32 err_sq_idx; + u32 err_rq_idx; u16 sq_minor_code; u16 sq_major_code; u16 rq_minor_code; @@ -1306,6 +1306,8 @@ struct irdma_qp_flush_info { bool rq:1; bool userflushcode:1; bool generate_ae:1; + bool err_sq_idx_valid:1; + bool err_rq_idx_valid:1; }; struct irdma_gen_ae_info { diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c index 24e8df0f8033..682e848b4db2 100644 --- a/drivers/infiniband/hw/irdma/uk.c +++ b/drivers/infiniband/hw/irdma/uk.c @@ -1148,6 +1148,7 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, __le64 *cqe; struct irdma_qp_uk *qp; struct irdma_srq_uk *srq; + struct qp_err_code qp_err; u8 is_srq; struct irdma_ring *pring = NULL; u32 wqe_idx; @@ -1233,16 +1234,35 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, if (info->error) { info->major_err = FIELD_GET(IRDMA_CQ_MAJERR, qword3); info->minor_err = FIELD_GET(IRDMA_CQ_MINERR, qword3); - if (info->major_err == IRDMA_FLUSH_MAJOR_ERR) { - info->comp_status = IRDMA_COMPL_STATUS_FLUSHED; + switch (info->major_err) { + case IRDMA_SRQFLUSH_RSVD_MAJOR_ERR: + qp_err = irdma_ae_to_qp_err_code(info->minor_err); + info->minor_err = qp_err.flush_code; + fallthrough; + case IRDMA_FLUSH_MAJOR_ERR: /* Set the 
min error to standard flush error code for remaining cqes */ if (info->minor_err != FLUSH_GENERAL_ERR) { qword3 &= ~IRDMA_CQ_MINERR; qword3 |= FIELD_PREP(IRDMA_CQ_MINERR, FLUSH_GENERAL_ERR); set_64bit_val(cqe, 24, qword3); } - } else { - info->comp_status = IRDMA_COMPL_STATUS_UNKNOWN; + info->comp_status = IRDMA_COMPL_STATUS_FLUSHED; + break; + default: +#define IRDMA_CIE_SIGNATURE 0xE +#define IRDMA_CQMAJERR_HIGH_NIBBLE GENMASK(15, 12) + if (info->q_type == IRDMA_CQE_QTYPE_SQ && + qp->qp_type == IRDMA_QP_TYPE_ROCE_UD && + FIELD_GET(IRDMA_CQMAJERR_HIGH_NIBBLE, info->major_err) + == IRDMA_CIE_SIGNATURE) { + info->error = 0; + info->major_err = 0; + info->minor_err = 0; + info->comp_status = IRDMA_COMPL_STATUS_SUCCESS; + } else { + info->comp_status = IRDMA_COMPL_STATUS_UNKNOWN; + } + break; } } else { info->comp_status = IRDMA_COMPL_STATUS_SUCCESS; @@ -1251,7 +1271,6 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, get_64bit_val(cqe, 0, &qword0); get_64bit_val(cqe, 16, &qword2); - info->tcp_seq_num_rtt = (u32)FIELD_GET(IRDMACQ_TCPSEQNUMRTT, qword0); info->qp_id = (u32)FIELD_GET(IRDMACQ_QPID, qword2); info->ud_src_qpn = (u32)FIELD_GET(IRDMACQ_UDSRCQPN, qword2); @@ -1377,9 +1396,15 @@ int irdma_uk_cq_poll_cmpl(struct irdma_cq_uk *cq, ret_code = 0; exit: - if (!ret_code && info->comp_status == IRDMA_COMPL_STATUS_FLUSHED) + if (!ret_code && info->comp_status == IRDMA_COMPL_STATUS_FLUSHED) { if (pring && IRDMA_RING_MORE_WORK(*pring)) - move_cq_head = false; + /* Park CQ head during a flush to generate additional CQEs + * from SW for all unprocessed WQEs. For GEN3 and beyond + * FW will generate/flush these CQEs so move to the next CQE + */ + move_cq_head = qp->uk_attrs->hw_rev <= IRDMA_GEN_2 ? + false : true; + } if (move_cq_head) { IRDMA_RING_MOVE_HEAD_NOCHECK(cq->cq_ring); diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h index 96dea01e83db..a2029f5f0284 100644 --- a/drivers/infiniband/hw/irdma/user.h +++ b/drivers/infiniband/hw/irdma/user.h @@ -46,7 +46,109 @@ #define IRDMA_OP_TYPE_REC 0x3e #define IRDMA_OP_TYPE_REC_IMM 0x3f -#define IRDMA_FLUSH_MAJOR_ERR 1 +#define IRDMA_FLUSH_MAJOR_ERR 1 +#define IRDMA_SRQFLUSH_RSVD_MAJOR_ERR 0xfffe + +/* Async Events codes */ +#define IRDMA_AE_AMP_UNALLOCATED_STAG 0x0102 +#define IRDMA_AE_AMP_INVALID_STAG 0x0103 +#define IRDMA_AE_AMP_BAD_QP 0x0104 +#define IRDMA_AE_AMP_BAD_PD 0x0105 +#define IRDMA_AE_AMP_BAD_STAG_KEY 0x0106 +#define IRDMA_AE_AMP_BAD_STAG_INDEX 0x0107 +#define IRDMA_AE_AMP_BOUNDS_VIOLATION 0x0108 +#define IRDMA_AE_AMP_RIGHTS_VIOLATION 0x0109 +#define IRDMA_AE_AMP_TO_WRAP 0x010a +#define IRDMA_AE_AMP_FASTREG_VALID_STAG 0x010c +#define IRDMA_AE_AMP_FASTREG_MW_STAG 0x010d +#define IRDMA_AE_AMP_FASTREG_INVALID_RIGHTS 0x010e +#define IRDMA_AE_AMP_FASTREG_INVALID_LENGTH 0x0110 +#define IRDMA_AE_AMP_INVALIDATE_SHARED 0x0111 +#define IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS 0x0112 +#define IRDMA_AE_AMP_INVALIDATE_MR_WITH_BOUND_WINDOWS 0x0113 +#define IRDMA_AE_AMP_MWBIND_VALID_STAG 0x0114 +#define IRDMA_AE_AMP_MWBIND_OF_MR_STAG 0x0115 +#define IRDMA_AE_AMP_MWBIND_TO_ZERO_BASED_STAG 0x0116 +#define IRDMA_AE_AMP_MWBIND_TO_MW_STAG 0x0117 +#define IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS 0x0118 +#define IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS 0x0119 +#define IRDMA_AE_AMP_MWBIND_TO_INVALID_PARENT 0x011a +#define IRDMA_AE_AMP_MWBIND_BIND_DISABLED 0x011b +#define IRDMA_AE_PRIV_OPERATION_DENIED 0x011c +#define IRDMA_AE_AMP_INVALIDATE_TYPE1_MW 0x011d +#define IRDMA_AE_AMP_MWBIND_ZERO_BASED_TYPE1_MW 0x011e +#define 
IRDMA_AE_AMP_FASTREG_INVALID_PBL_HPS_CFG 0x011f +#define IRDMA_AE_AMP_MWBIND_WRONG_TYPE 0x0120 +#define IRDMA_AE_AMP_FASTREG_PBLE_MISMATCH 0x0121 +#define IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG 0x0132 +#define IRDMA_AE_UDA_XMIT_BAD_PD 0x0133 +#define IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT 0x0134 +#define IRDMA_AE_UDA_L4LEN_INVALID 0x0135 +#define IRDMA_AE_BAD_CLOSE 0x0201 +#define IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE 0x0202 +#define IRDMA_AE_CQ_OPERATION_ERROR 0x0203 +#define IRDMA_AE_RDMA_READ_WHILE_ORD_ZERO 0x0205 +#define IRDMA_AE_STAG_ZERO_INVALID 0x0206 +#define IRDMA_AE_IB_RREQ_AND_Q1_FULL 0x0207 +#define IRDMA_AE_IB_INVALID_REQUEST 0x0208 +#define IRDMA_AE_SRQ_LIMIT 0x0209 +#define IRDMA_AE_WQE_UNEXPECTED_OPCODE 0x020a +#define IRDMA_AE_WQE_INVALID_PARAMETER 0x020b +#define IRDMA_AE_WQE_INVALID_FRAG_DATA 0x020c +#define IRDMA_AE_IB_REMOTE_ACCESS_ERROR 0x020d +#define IRDMA_AE_IB_REMOTE_OP_ERROR 0x020e +#define IRDMA_AE_SRQ_CATASTROPHIC_ERROR 0x020f +#define IRDMA_AE_WQE_LSMM_TOO_LONG 0x0220 +#define IRDMA_AE_ATOMIC_ALIGNMENT 0x0221 +#define IRDMA_AE_ATOMIC_MASK 0x0222 +#define IRDMA_AE_INVALID_REQUEST 0x0223 +#define IRDMA_AE_PCIE_ATOMIC_DISABLE 0x0224 +#define IRDMA_AE_DDP_INVALID_MSN_GAP_IN_MSN 0x0301 +#define IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER 0x0303 +#define IRDMA_AE_DDP_UBE_INVALID_DDP_VERSION 0x0304 +#define IRDMA_AE_DDP_UBE_INVALID_MO 0x0305 +#define IRDMA_AE_DDP_UBE_INVALID_MSN_NO_BUFFER_AVAILABLE 0x0306 +#define IRDMA_AE_DDP_UBE_INVALID_QN 0x0307 +#define IRDMA_AE_DDP_NO_L_BIT 0x0308 +#define IRDMA_AE_RDMAP_ROE_INVALID_RDMAP_VERSION 0x0311 +#define IRDMA_AE_RDMAP_ROE_UNEXPECTED_OPCODE 0x0312 +#define IRDMA_AE_ROE_INVALID_RDMA_READ_REQUEST 0x0313 +#define IRDMA_AE_ROE_INVALID_RDMA_WRITE_OR_READ_RESP 0x0314 +#define IRDMA_AE_ROCE_RSP_LENGTH_ERROR 0x0316 +#define IRDMA_AE_ROCE_EMPTY_MCG 0x0380 +#define IRDMA_AE_ROCE_BAD_MC_IP_ADDR 0x0381 +#define IRDMA_AE_ROCE_BAD_MC_QPID 0x0382 +#define IRDMA_AE_MCG_QP_PROTOCOL_MISMATCH 0x0383 +#define IRDMA_AE_INVALID_ARP_ENTRY 0x0401 +#define IRDMA_AE_INVALID_TCP_OPTION_RCVD 0x0402 +#define IRDMA_AE_STALE_ARP_ENTRY 0x0403 +#define IRDMA_AE_INVALID_AH_ENTRY 0x0406 +#define IRDMA_AE_LLP_CLOSE_COMPLETE 0x0501 +#define IRDMA_AE_LLP_CONNECTION_RESET 0x0502 +#define IRDMA_AE_LLP_FIN_RECEIVED 0x0503 +#define IRDMA_AE_LLP_RECEIVED_MARKER_AND_LENGTH_FIELDS_DONT_MATCH 0x0504 +#define IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR 0x0505 +#define IRDMA_AE_LLP_SEGMENT_TOO_SMALL 0x0507 +#define IRDMA_AE_LLP_SYN_RECEIVED 0x0508 +#define IRDMA_AE_LLP_TERMINATE_RECEIVED 0x0509 +#define IRDMA_AE_LLP_TOO_MANY_RETRIES 0x050a +#define IRDMA_AE_LLP_TOO_MANY_KEEPALIVE_RETRIES 0x050b +#define IRDMA_AE_LLP_DOUBT_REACHABILITY 0x050c +#define IRDMA_AE_LLP_CONNECTION_ESTABLISHED 0x050e +#define IRDMA_AE_LLP_TOO_MANY_RNRS 0x050f +#define IRDMA_AE_RESOURCE_EXHAUSTION 0x0520 +#define IRDMA_AE_RESET_SENT 0x0601 +#define IRDMA_AE_TERMINATE_SENT 0x0602 +#define IRDMA_AE_RESET_NOT_SENT 0x0603 +#define IRDMA_AE_LCE_QP_CATASTROPHIC 0x0700 +#define IRDMA_AE_LCE_FUNCTION_CATASTROPHIC 0x0701 +#define IRDMA_AE_LCE_CQ_CATASTROPHIC 0x0702 +#define IRDMA_AE_REMOTE_QP_CATASTROPHIC 0x0703 +#define IRDMA_AE_LOCAL_QP_CATASTROPHIC 0x0704 +#define IRDMA_AE_RCE_QP_CATASTROPHIC 0x0705 +#define IRDMA_AE_QP_SUSPEND_COMPLETE 0x0900 +#define IRDMA_AE_CQP_DEFERRED_COMPLETE 0x0901 +#define IRDMA_AE_ADAPTER_CATASTROPHIC 0x0B0B enum irdma_device_caps_const { IRDMA_WQE_SIZE = 4, @@ -107,6 +209,13 @@ enum irdma_flush_opcode { FLUSH_RETRY_EXC_ERR, FLUSH_MW_BIND_ERR, FLUSH_REM_INV_REQ_ERR, + 
FLUSH_RNR_RETRY_EXC_ERR, +}; + +enum irdma_qp_event_type { + IRDMA_QP_EVENT_CATASTROPHIC, + IRDMA_QP_EVENT_ACCESS_ERR, + IRDMA_QP_EVENT_REQ_ERR, }; enum irdma_cmpl_status { @@ -280,6 +389,11 @@ struct irdma_cq_poll_info { bool imm_valid:1; }; +struct qp_err_code { + enum irdma_flush_opcode flush_code; + enum irdma_qp_event_type event_type; +}; + int irdma_uk_atomic_compare_swap(struct irdma_qp_uk *qp, struct irdma_post_sq_info *info, bool post_sq); int irdma_uk_atomic_fetch_add(struct irdma_qp_uk *qp, @@ -477,4 +591,82 @@ int irdma_get_rqdepth(struct irdma_uk_attrs *uk_attrs, u32 rq_size, u8 shift, int irdma_get_srqdepth(struct irdma_uk_attrs *uk_attrs, u32 srq_size, u8 shift, u32 *srqdepth); void irdma_clr_wqes(struct irdma_qp_uk *qp, u32 qp_wqe_idx); + +static inline struct qp_err_code irdma_ae_to_qp_err_code(u16 ae_id) +{ + struct qp_err_code qp_err = {}; + + switch (ae_id) { + case IRDMA_AE_AMP_BOUNDS_VIOLATION: + case IRDMA_AE_AMP_INVALID_STAG: + case IRDMA_AE_AMP_RIGHTS_VIOLATION: + case IRDMA_AE_AMP_UNALLOCATED_STAG: + case IRDMA_AE_AMP_BAD_PD: + case IRDMA_AE_AMP_BAD_QP: + case IRDMA_AE_AMP_BAD_STAG_KEY: + case IRDMA_AE_AMP_BAD_STAG_INDEX: + case IRDMA_AE_AMP_TO_WRAP: + case IRDMA_AE_PRIV_OPERATION_DENIED: + qp_err.flush_code = FLUSH_PROT_ERR; + qp_err.event_type = IRDMA_QP_EVENT_ACCESS_ERR; + break; + case IRDMA_AE_UDA_XMIT_BAD_PD: + case IRDMA_AE_WQE_UNEXPECTED_OPCODE: + qp_err.flush_code = FLUSH_LOC_QP_OP_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + case IRDMA_AE_UDA_XMIT_DGRAM_TOO_SHORT: + case IRDMA_AE_UDA_XMIT_DGRAM_TOO_LONG: + case IRDMA_AE_UDA_L4LEN_INVALID: + case IRDMA_AE_DDP_UBE_INVALID_MO: + case IRDMA_AE_DDP_UBE_DDP_MESSAGE_TOO_LONG_FOR_AVAILABLE_BUFFER: + qp_err.flush_code = FLUSH_LOC_LEN_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + case IRDMA_AE_AMP_INVALIDATE_NO_REMOTE_ACCESS_RIGHTS: + case IRDMA_AE_IB_REMOTE_ACCESS_ERROR: + qp_err.flush_code = FLUSH_REM_ACCESS_ERR; + qp_err.event_type = IRDMA_QP_EVENT_ACCESS_ERR; + break; + case IRDMA_AE_AMP_MWBIND_INVALID_RIGHTS: + case IRDMA_AE_AMP_MWBIND_BIND_DISABLED: + case IRDMA_AE_AMP_MWBIND_INVALID_BOUNDS: + case IRDMA_AE_AMP_MWBIND_VALID_STAG: + qp_err.flush_code = FLUSH_MW_BIND_ERR; + qp_err.event_type = IRDMA_QP_EVENT_ACCESS_ERR; + break; + case IRDMA_AE_LLP_TOO_MANY_RETRIES: + qp_err.flush_code = FLUSH_RETRY_EXC_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + case IRDMA_AE_IB_INVALID_REQUEST: + qp_err.flush_code = FLUSH_REM_INV_REQ_ERR; + qp_err.event_type = IRDMA_QP_EVENT_REQ_ERR; + break; + case IRDMA_AE_LLP_SEGMENT_TOO_SMALL: + case IRDMA_AE_LLP_RECEIVED_MPA_CRC_ERROR: + case IRDMA_AE_ROCE_RSP_LENGTH_ERROR: + case IRDMA_AE_IB_REMOTE_OP_ERROR: + qp_err.flush_code = FLUSH_REM_OP_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + case IRDMA_AE_LLP_TOO_MANY_RNRS: + qp_err.flush_code = FLUSH_RNR_RETRY_EXC_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + case IRDMA_AE_LCE_QP_CATASTROPHIC: + case IRDMA_AE_REMOTE_QP_CATASTROPHIC: + case IRDMA_AE_LOCAL_QP_CATASTROPHIC: + case IRDMA_AE_RCE_QP_CATASTROPHIC: + qp_err.flush_code = FLUSH_FATAL_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + default: + qp_err.flush_code = FLUSH_GENERAL_ERR; + qp_err.event_type = IRDMA_QP_EVENT_CATASTROPHIC; + break; + } + + return qp_err; +} #endif /* IRDMA_USER_H */ diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 25e46aefe147..982483e641f4 100644 --- a/drivers/infiniband/hw/irdma/verbs.c 
+++ b/drivers/infiniband/hw/irdma/verbs.c @@ -542,10 +542,10 @@ static int irdma_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) iwqp->sc_qp.qp_uk.destroy_pending = true; - if (iwqp->iwarp_state == IRDMA_QP_STATE_RTS) + if (iwqp->iwarp_state >= IRDMA_QP_STATE_IDLE) irdma_modify_qp_to_err(&iwqp->sc_qp); - if (!iwqp->user_mode) + if (iwdev->rf->rdma_ver <= IRDMA_GEN_2 && !iwqp->user_mode) cancel_delayed_work_sync(&iwqp->dwork_flush); if (!iwqp->user_mode) { @@ -1043,7 +1043,9 @@ static int irdma_create_qp(struct ib_qp *ibqp, err_code = irdma_setup_umode_qp(udata, iwdev, iwqp, &init_info, init_attr); } else { - INIT_DELAYED_WORK(&iwqp->dwork_flush, irdma_flush_worker); + if (uk_attrs->hw_rev <= IRDMA_GEN_2) + INIT_DELAYED_WORK(&iwqp->dwork_flush, + irdma_flush_worker); init_info.qp_uk_init_info.abi_ver = IRDMA_ABI_VER; err_code = irdma_setup_kmode_qp(iwdev, iwqp, &init_info, init_attr); } @@ -4095,15 +4097,22 @@ static int irdma_post_send(struct ib_qp *ibqp, ib_wr = ib_wr->next; } - if (!iwqp->flush_issued) { - if (iwqp->hw_iwarp_state <= IRDMA_QP_STATE_RTS) - irdma_uk_qp_post_wr(ukqp); - spin_unlock_irqrestore(&iwqp->lock, flags); + if (ukqp->uk_attrs->hw_rev <= IRDMA_GEN_2) { + if (!iwqp->flush_issued) { + if (iwqp->hw_iwarp_state <= IRDMA_QP_STATE_RTS) + irdma_uk_qp_post_wr(ukqp); + spin_unlock_irqrestore(&iwqp->lock, flags); + } else { + spin_unlock_irqrestore(&iwqp->lock, flags); + mod_delayed_work(iwqp->iwdev->cleanup_wq, + &iwqp->dwork_flush, + msecs_to_jiffies(IRDMA_FLUSH_DELAY_MS)); + } } else { + irdma_uk_qp_post_wr(ukqp); spin_unlock_irqrestore(&iwqp->lock, flags); - mod_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush, - msecs_to_jiffies(IRDMA_FLUSH_DELAY_MS)); } + if (err) *bad_wr = ib_wr; @@ -4192,7 +4201,7 @@ static int irdma_post_recv(struct ib_qp *ibqp, out: spin_unlock_irqrestore(&iwqp->lock, flags); - if (iwqp->flush_issued) + if (ukqp->uk_attrs->hw_rev <= IRDMA_GEN_2 && iwqp->flush_issued) mod_delayed_work(iwqp->iwdev->cleanup_wq, &iwqp->dwork_flush, msecs_to_jiffies(IRDMA_FLUSH_DELAY_MS)); @@ -4227,6 +4236,8 @@ static enum ib_wc_status irdma_flush_err_to_ib_wc_status(enum irdma_flush_opcode return IB_WC_MW_BIND_ERR; case FLUSH_REM_INV_REQ_ERR: return IB_WC_REM_INV_REQ_ERR; + case FLUSH_RNR_RETRY_EXC_ERR: + return IB_WC_RNR_RETRY_EXC_ERR; case FLUSH_FATAL_ERR: default: return IB_WC_FATAL_ERR; From patchwork Sat Aug 24 03:19:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776213 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C9D632BAE5; Sat, 24 Aug 2024 03:20:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.16 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469659; cv=none; b=XTgMYaqhWDe3ffKArMRbQ0oc26MIn/kHgb+EKtdGHBk9OkSNiGqciyb8ynvRbBB9HcQyIl+hy6x/fvkYDsuDsCiRX/B6KOsdvtjlpKxi+AhMVCLlHth5gVx+OKpWFMV3YS5t3SxwL9ZGzot6qmX2f06WHLtrCSOq+7zZzilpIX0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1724469659; c=relaxed/simple; bh=fIKx0VaWIhs7POmhQMpzD4+SIPqiudJ27Nx1XhPc2+0=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
From: Tatyana Nikolova
To: jgg@nvidia.com, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Jay Bhat , Tatyana Nikolova
Subject: [RFC v2 24/25] RDMA/irdma: Add Push Page Support for GEN3
Date: Fri, 23 Aug 2024 22:19:23 -0500
Message-Id: <20240824031924.421-25-tatyana.e.nikolova@intel.com>
In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com>
References: <20240824031924.421-1-tatyana.e.nikolova@intel.com>
MIME-Version: 1.0

From: Jay Bhat

Implement the necessary support for enabling push on GEN3 devices.

Key Changes:
- Introduce a RDMA virtual channel operation with the Control Plane (CP) to manage the doorbell/push page which is a privileged operation.
- Implement the MMIO mapping of push pages which adheres to the updated BAR layout and page indexing specific to GEN3 devices.
- Support up to 16 QPs on a single push page, given that they are tied to the same Queue Set.
- Impose limits on the size of WQEs pushed based on the message length constraints provided by the CP.
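For illustration only (not part of this patch): the 16-QPs-per-push-page rule above is enforced in irdma_alloc_push_page()/irdma_dealloc_push_page() with a per-PD bitmap of 256-byte push windows. The standalone model below mirrors that accounting so the offset math is easy to follow; the struct and function names here are made up for the example, while the two constants correspond to IRDMA_QPS_PER_PUSH_PAGE and IRDMA_PUSH_WIN_SIZE from the patch.

/* Standalone model (not driver code) of the per-PD push-page accounting:
 * one push page is shared by up to 16 QPs, each QP getting a 256-byte push
 * window, and only QPs on the same queue set (qs_handle) may share the page.
 */
#include <stdint.h>
#include <stdio.h>

#define QPS_PER_PUSH_PAGE 16   /* IRDMA_QPS_PER_PUSH_PAGE */
#define PUSH_WIN_SIZE     256  /* IRDMA_PUSH_WIN_SIZE */

struct pd_push_state {
	uint32_t page_idx;      /* push page owned by this PD, or UINT32_MAX */
	uint16_t qs_handle;     /* queue set the page is bound to */
	uint16_t slot_bitmap;   /* one bit per 256-byte window */
};

/* Returns the byte offset of the QP's push window within the page, or -1. */
static int alloc_push_window(struct pd_push_state *pd, uint16_t qp_qs_handle)
{
	if (pd->page_idx == UINT32_MAX || pd->qs_handle != qp_qs_handle)
		return -1;              /* no page yet, or wrong queue set */

	for (int slot = 0; slot < QPS_PER_PUSH_PAGE; slot++) {
		if (!(pd->slot_bitmap & (1u << slot))) {
			pd->slot_bitmap |= 1u << slot;
			return slot * PUSH_WIN_SIZE;
		}
	}
	return -1;                      /* page full: 16 QPs already mapped */
}

int main(void)
{
	struct pd_push_state pd = { .page_idx = 7, .qs_handle = 3 };

	printf("QP A offset: %d\n", alloc_push_window(&pd, 3)); /* 0 */
	printf("QP B offset: %d\n", alloc_push_window(&pd, 3)); /* 256 */
	printf("QP C (other qs): %d\n", alloc_push_window(&pd, 5)); /* -1 */
	return 0;
}

In the driver the same decision is taken under pd->push_alloc_mutex, and the page is only released back to the CP (or via the CQP on privileged functions) once the bitmap is empty.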
Signed-off-by: Jay Bhat Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/ctrl.c | 1 - drivers/infiniband/hw/irdma/defs.h | 2 + drivers/infiniband/hw/irdma/irdma.h | 1 + drivers/infiniband/hw/irdma/type.h | 3 + drivers/infiniband/hw/irdma/user.h | 1 - drivers/infiniband/hw/irdma/utils.c | 48 ++++++++++++++-- drivers/infiniband/hw/irdma/verbs.c | 79 +++++++++++++++++++++++--- drivers/infiniband/hw/irdma/verbs.h | 7 +++ drivers/infiniband/hw/irdma/virtchnl.c | 40 +++++++++++++ drivers/infiniband/hw/irdma/virtchnl.h | 11 ++++ include/uapi/rdma/irdma-abi.h | 3 +- 11 files changed, 178 insertions(+), 18 deletions(-) diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c index 73cab77d60de..0c13b0bdd73a 100644 --- a/drivers/infiniband/hw/irdma/ctrl.c +++ b/drivers/infiniband/hw/irdma/ctrl.c @@ -6467,7 +6467,6 @@ int irdma_sc_dev_init(enum irdma_vers ver, struct irdma_sc_dev *dev, dev->hw_attrs.max_hw_outbound_msg_size = IRDMA_MAX_OUTBOUND_MSG_SIZE; dev->hw_attrs.max_mr_size = IRDMA_MAX_MR_SIZE; dev->hw_attrs.max_hw_inbound_msg_size = IRDMA_MAX_INBOUND_MSG_SIZE; - dev->hw_attrs.max_hw_device_pages = IRDMA_MAX_PUSH_PAGE_COUNT; dev->hw_attrs.uk_attrs.max_hw_inline = IRDMA_MAX_INLINE_DATA_SIZE; dev->hw_attrs.max_hw_wqes = IRDMA_MAX_WQ_ENTRIES; dev->hw_attrs.max_qp_wr = IRDMA_MAX_QP_WRS(IRDMA_MAX_QUANTA_PER_WR); diff --git a/drivers/infiniband/hw/irdma/defs.h b/drivers/infiniband/hw/irdma/defs.h index e75dd8bbd86b..53d9588a7f3e 100644 --- a/drivers/infiniband/hw/irdma/defs.h +++ b/drivers/infiniband/hw/irdma/defs.h @@ -167,6 +167,8 @@ enum irdma_protocol_used { #define IRDMA_MAX_RQ_WQE_SHIFT_GEN1 2 #define IRDMA_MAX_RQ_WQE_SHIFT_GEN2 3 +#define IRDMA_DEFAULT_MAX_PUSH_LEN 8192 + #define IRDMA_SQ_RSVD 258 #define IRDMA_RQ_RSVD 1 diff --git a/drivers/infiniband/hw/irdma/irdma.h b/drivers/infiniband/hw/irdma/irdma.h index 6af79bb45254..955ab98a7ecb 100644 --- a/drivers/infiniband/hw/irdma/irdma.h +++ b/drivers/infiniband/hw/irdma/irdma.h @@ -133,6 +133,7 @@ struct irdma_uk_attrs { u32 min_hw_cq_size; u32 max_hw_cq_size; u32 max_hw_srq_quanta; + u16 max_hw_push_len; u16 max_hw_sq_chunk; u16 min_hw_wq_size; u8 hw_rev; diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h index 99160910f24b..0bfaf8b622fd 100644 --- a/drivers/infiniband/hw/irdma/type.h +++ b/drivers/infiniband/hw/irdma/type.h @@ -1289,8 +1289,11 @@ struct irdma_qhash_table_info { struct irdma_cqp_manage_push_page_info { u32 push_idx; u16 qs_handle; + u16 hmc_fn_id; u8 free_page; u8 push_page_type; + u8 page_type; + u8 use_hmc_fn_id; }; struct irdma_qp_flush_info { diff --git a/drivers/infiniband/hw/irdma/user.h b/drivers/infiniband/hw/irdma/user.h index a2029f5f0284..d04175dacd9c 100644 --- a/drivers/infiniband/hw/irdma/user.h +++ b/drivers/infiniband/hw/irdma/user.h @@ -180,7 +180,6 @@ enum irdma_device_caps_const { IRDMA_MAX_SGE_RD = 13, IRDMA_MAX_OUTBOUND_MSG_SIZE = 2147483647, IRDMA_MAX_INBOUND_MSG_SIZE = 2147483647, - IRDMA_MAX_PUSH_PAGE_COUNT = 1024, IRDMA_MAX_PE_ENA_VF_COUNT = 32, IRDMA_MAX_VF_FPM_ID = 47, IRDMA_MAX_SQ_PAYLOAD_SIZE = 2145386496, diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c index 2fdcb8872e08..2e210be3bdd8 100644 --- a/drivers/infiniband/hw/irdma/utils.c +++ b/drivers/infiniband/hw/irdma/utils.c @@ -1156,21 +1156,51 @@ int irdma_cqp_qp_create_cmd(struct irdma_sc_dev *dev, struct irdma_sc_qp *qp) /** * irdma_dealloc_push_page - free a push page for qp * @rf: RDMA PCI function - * @qp: hardware control qp 
+ * @iwqp: QP pointer */ static void irdma_dealloc_push_page(struct irdma_pci_f *rf, - struct irdma_sc_qp *qp) + struct irdma_qp *iwqp) { struct irdma_cqp_request *cqp_request; struct cqp_cmds_info *cqp_info; int status; + struct irdma_sc_qp *qp = &iwqp->sc_qp; + struct irdma_pd *pd = iwqp->iwpd; + u32 push_pos; + bool is_empty; if (qp->push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX) return; + mutex_lock(&pd->push_alloc_mutex); + + push_pos = qp->push_offset / IRDMA_PUSH_WIN_SIZE; + __clear_bit(push_pos, pd->push_offset_bmap); + is_empty = bitmap_empty(pd->push_offset_bmap, IRDMA_QPS_PER_PUSH_PAGE); + if (!is_empty) { + qp->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + goto exit; + } + + if (!rf->sc_dev.privileged) { + u32 pg_idx = qp->push_idx; + + status = irdma_vchnl_req_manage_push_pg(&rf->sc_dev, false, + qp->qs_handle, &pg_idx); + if (!status) { + qp->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + pd->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + } else { + __set_bit(push_pos, pd->push_offset_bmap); + } + goto exit; + } + cqp_request = irdma_alloc_and_get_cqp_request(&rf->cqp, false); - if (!cqp_request) - return; + if (!cqp_request) { + __set_bit(push_pos, pd->push_offset_bmap); + goto exit; + } cqp_info = &cqp_request->info; cqp_info->cqp_cmd = IRDMA_OP_MANAGE_PUSH_PAGE; @@ -1182,9 +1212,15 @@ static void irdma_dealloc_push_page(struct irdma_pci_f *rf, cqp_info->in.u.manage_push_page.cqp = &rf->cqp.sc_cqp; cqp_info->in.u.manage_push_page.scratch = (uintptr_t)cqp_request; status = irdma_handle_cqp_op(rf, cqp_request); - if (!status) + if (!status) { qp->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + pd->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + } else { + __set_bit(push_pos, pd->push_offset_bmap); + } irdma_put_cqp_request(&rf->cqp, cqp_request); +exit: + mutex_unlock(&pd->push_alloc_mutex); } static void irdma_free_gsi_qp_rsrc(struct irdma_qp *iwqp, u32 qp_num) @@ -1218,7 +1254,7 @@ void irdma_free_qp_rsrc(struct irdma_qp *iwqp) u32 qp_num = iwqp->sc_qp.qp_uk.qp_id; irdma_ieq_cleanup_qp(iwdev->vsi.ieq, &iwqp->sc_qp); - irdma_dealloc_push_page(rf, &iwqp->sc_qp); + irdma_dealloc_push_page(rf, iwqp); if (iwqp->sc_qp.vsi) { irdma_qp_rem_qos(&iwqp->sc_qp); iwqp->sc_qp.dev->ws_remove(iwqp->sc_qp.vsi, diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c index 982483e641f4..704b401a5880 100644 --- a/drivers/infiniband/hw/irdma/verbs.c +++ b/drivers/infiniband/hw/irdma/verbs.c @@ -245,11 +245,46 @@ static void irdma_alloc_push_page(struct irdma_qp *iwqp) struct cqp_cmds_info *cqp_info; struct irdma_device *iwdev = iwqp->iwdev; struct irdma_sc_qp *qp = &iwqp->sc_qp; + struct irdma_pd *pd = iwqp->iwpd; + u32 push_pos = 0; int status; + mutex_lock(&pd->push_alloc_mutex); + if (pd->push_idx == IRDMA_INVALID_PUSH_PAGE_INDEX) { + bitmap_zero(pd->push_offset_bmap, IRDMA_QPS_PER_PUSH_PAGE); + } else { + if (pd->qs_handle == qp->qs_handle) { + push_pos = find_first_zero_bit(pd->push_offset_bmap, + IRDMA_QPS_PER_PUSH_PAGE); + if (push_pos < IRDMA_QPS_PER_PUSH_PAGE) { + qp->push_idx = pd->push_idx; + qp->push_offset = + push_pos * IRDMA_PUSH_WIN_SIZE; + __set_bit(push_pos, pd->push_offset_bmap); + } + } + goto exit; + } + + if (!iwdev->rf->sc_dev.privileged) { + u32 pg_idx; + + status = irdma_vchnl_req_manage_push_pg(&iwdev->rf->sc_dev, + true, qp->qs_handle, + &pg_idx); + if (!status && pg_idx != IRDMA_INVALID_PUSH_PAGE_INDEX) { + qp->push_idx = pg_idx; + qp->push_offset = push_pos * IRDMA_PUSH_WIN_SIZE; + __set_bit(push_pos, pd->push_offset_bmap); + pd->push_idx = pg_idx; + 
pd->qs_handle = qp->qs_handle; + } + goto exit; + } + cqp_request = irdma_alloc_and_get_cqp_request(&iwdev->rf->cqp, true); if (!cqp_request) - return; + goto exit; cqp_info = &cqp_request->info; cqp_info->cqp_cmd = IRDMA_OP_MANAGE_PUSH_PAGE; @@ -266,10 +301,15 @@ static void irdma_alloc_push_page(struct irdma_qp *iwqp) if (!status && cqp_request->compl_info.op_ret_val < iwdev->rf->sc_dev.hw_attrs.max_hw_device_pages) { qp->push_idx = cqp_request->compl_info.op_ret_val; - qp->push_offset = 0; + qp->push_offset = push_pos * IRDMA_PUSH_WIN_SIZE; + __set_bit(push_pos, pd->push_offset_bmap); + pd->push_idx = cqp_request->compl_info.op_ret_val; + pd->qs_handle = qp->qs_handle; } irdma_put_cqp_request(&iwdev->rf->cqp, cqp_request); +exit: + mutex_unlock(&pd->push_alloc_mutex); } /** @@ -351,6 +391,9 @@ static int irdma_alloc_ucontext(struct ib_ucontext *uctx, uresp.comp_mask |= IRDMA_ALLOC_UCTX_MIN_HW_WQ_SIZE; uresp.max_hw_srq_quanta = uk_attrs->max_hw_srq_quanta; uresp.comp_mask |= IRDMA_ALLOC_UCTX_MAX_HW_SRQ_QUANTA; + uresp.max_hw_push_len = uk_attrs->max_hw_push_len; + uresp.comp_mask |= IRDMA_SUPPORT_MAX_HW_PUSH_LEN; + if (ib_copy_to_udata(udata, &uresp, min(sizeof(uresp), udata->outlen))) { rdma_user_mmap_entry_remove(ucontext->db_mmap_entry); @@ -410,6 +453,9 @@ static int irdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata) if (err) return err; + iwpd->push_idx = IRDMA_INVALID_PUSH_PAGE_INDEX; + mutex_init(&iwpd->push_alloc_mutex); + sc_pd = &iwpd->sc_pd; if (udata) { struct irdma_ucontext *ucontext = @@ -485,6 +531,23 @@ static void irdma_clean_cqes(struct irdma_qp *iwqp, struct irdma_cq *iwcq) spin_unlock_irqrestore(&iwcq->lock, flags); } +static u64 irdma_compute_push_wqe_offset(struct irdma_device *iwdev, u32 page_idx) +{ + u64 bar_off = (uintptr_t)iwdev->rf->sc_dev.hw_regs[IRDMA_DB_ADDR_OFFSET]; + + if (iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev == IRDMA_GEN_2) { + /* skip over db page */ + bar_off += IRDMA_HW_PAGE_SIZE; + /* skip over reserved space */ + bar_off += IRDMA_PF_BAR_RSVD; + } + + /* push wqe page */ + bar_off += (u64)page_idx * IRDMA_HW_PAGE_SIZE; + + return bar_off; +} + static void irdma_remove_push_mmap_entries(struct irdma_qp *iwqp) { if (iwqp->push_db_mmap_entry) { @@ -503,14 +566,12 @@ static int irdma_setup_push_mmap_entries(struct irdma_ucontext *ucontext, u64 *push_db_mmap_key) { struct irdma_device *iwdev = ucontext->iwdev; - u64 rsvd, bar_off; + u64 bar_off; + + WARN_ON_ONCE(iwdev->rf->sc_dev.hw_attrs.uk_attrs.hw_rev < IRDMA_GEN_2); + + bar_off = irdma_compute_push_wqe_offset(iwdev, iwqp->sc_qp.push_idx); - rsvd = IRDMA_PF_BAR_RSVD; - bar_off = (uintptr_t)iwdev->rf->sc_dev.hw_regs[IRDMA_DB_ADDR_OFFSET]; - /* skip over db page */ - bar_off += IRDMA_HW_PAGE_SIZE; - /* push wqe page */ - bar_off += rsvd + iwqp->sc_qp.push_idx * IRDMA_HW_PAGE_SIZE; iwqp->push_wqe_mmap_entry = irdma_user_mmap_entry_insert(ucontext, bar_off, IRDMA_MMAP_IO_WC, push_wqe_mmap_key); diff --git a/drivers/infiniband/hw/irdma/verbs.h b/drivers/infiniband/hw/irdma/verbs.h index 0922a22fbede..71b627d52a63 100644 --- a/drivers/infiniband/hw/irdma/verbs.h +++ b/drivers/infiniband/hw/irdma/verbs.h @@ -8,6 +8,9 @@ #define IRDMA_PKEY_TBL_SZ 1 #define IRDMA_DEFAULT_PKEY 0xFFFF + +#define IRDMA_QPS_PER_PUSH_PAGE 16 +#define IRDMA_PUSH_WIN_SIZE 256 #define IRDMA_SHADOW_PGCNT 1 struct irdma_ucontext { @@ -28,6 +31,10 @@ struct irdma_ucontext { struct irdma_pd { struct ib_pd ibpd; struct irdma_sc_pd sc_pd; + struct mutex push_alloc_mutex; /* protect push page alloc within a PD*/ + 
DECLARE_BITMAP(push_offset_bmap, IRDMA_QPS_PER_PUSH_PAGE); + u32 push_idx; + u16 qs_handle; }; union irdma_sockaddr { diff --git a/drivers/infiniband/hw/irdma/virtchnl.c b/drivers/infiniband/hw/irdma/virtchnl.c index 9f39cd69d85d..667ec0e8806e 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.c +++ b/drivers/infiniband/hw/irdma/virtchnl.c @@ -66,6 +66,7 @@ int irdma_sc_vchnl_init(struct irdma_sc_dev *dev, dev->privileged = info->privileged; dev->is_pf = info->is_pf; dev->hw_attrs.uk_attrs.hw_rev = info->hw_rev; + dev->hw_attrs.uk_attrs.max_hw_push_len = IRDMA_DEFAULT_MAX_PUSH_LEN; if (!dev->privileged) { int ret = irdma_vchnl_req_get_ver(dev, IRDMA_VCHNL_CHNL_VER_MAX, @@ -83,6 +84,7 @@ int irdma_sc_vchnl_init(struct irdma_sc_dev *dev, return ret; dev->hw_attrs.uk_attrs.hw_rev = dev->vc_caps.hw_rev; + dev->hw_attrs.uk_attrs.max_hw_push_len = dev->vc_caps.max_hw_push_len; } return 0; @@ -107,6 +109,7 @@ static int irdma_vchnl_req_verify_resp(struct irdma_vchnl_req *vchnl_req, if (resp_len < IRDMA_VCHNL_OP_GET_RDMA_CAPS_MIN_SIZE) return -EBADMSG; break; + case IRDMA_VCHNL_OP_MANAGE_PUSH_PAGE: case IRDMA_VCHNL_OP_GET_REG_LAYOUT: case IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP: case IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP: @@ -186,6 +189,40 @@ static int irdma_vchnl_req_send_sync(struct irdma_sc_dev *dev, return ret; } +/** + * irdma_vchnl_req_manage_push_pg - manage push page + * @dev: rdma device pointer + * @add: Add or remove push page + * @qs_handle: qs_handle of push page for add + * @pg_idx: index of push page that is added or removed + */ +int irdma_vchnl_req_manage_push_pg(struct irdma_sc_dev *dev, bool add, + u32 qs_handle, u32 *pg_idx) +{ + struct irdma_vchnl_manage_push_page add_push_pg = {}; + struct irdma_vchnl_req_init_info info = {}; + + if (!dev->vchnl_up) + return -EBUSY; + + add_push_pg.add = add; + add_push_pg.pg_idx = add ? 
0 : *pg_idx; + add_push_pg.qs_handle = qs_handle; + + info.op_code = IRDMA_VCHNL_OP_MANAGE_PUSH_PAGE; + info.op_ver = IRDMA_VCHNL_OP_MANAGE_PUSH_PAGE_V0; + info.req_parm = &add_push_pg; + info.req_parm_len = sizeof(add_push_pg); + info.resp_parm = pg_idx; + info.resp_parm_len = sizeof(*pg_idx); + + ibdev_dbg(to_ibdev(dev), + "VIRT: Sending msg: manage_push_pg add = %d, idx %u, qsh %u\n", + add_push_pg.add, add_push_pg.pg_idx, add_push_pg.qs_handle); + + return irdma_vchnl_req_send_sync(dev, &info); +} + /** * irdma_vchnl_req_get_reg_layout - Get Register Layout * @dev: RDMA device pointer @@ -561,6 +598,9 @@ int irdma_vchnl_req_get_caps(struct irdma_sc_dev *dev) if (ret) return ret; + if (!dev->vc_caps.max_hw_push_len) + dev->vc_caps.max_hw_push_len = IRDMA_DEFAULT_MAX_PUSH_LEN; + if (dev->vc_caps.hw_rev > IRDMA_GEN_MAX || dev->vc_caps.hw_rev < IRDMA_GEN_2) { ibdev_dbg(to_ibdev(dev), diff --git a/drivers/infiniband/hw/irdma/virtchnl.h b/drivers/infiniband/hw/irdma/virtchnl.h index 23e66bc2aa44..0c88f6463077 100644 --- a/drivers/infiniband/hw/irdma/virtchnl.h +++ b/drivers/infiniband/hw/irdma/virtchnl.h @@ -14,6 +14,7 @@ #define IRDMA_VCHNL_OP_GET_HMC_FCN_V1 1 #define IRDMA_VCHNL_OP_GET_HMC_FCN_V2 2 #define IRDMA_VCHNL_OP_PUT_HMC_FCN_V0 0 +#define IRDMA_VCHNL_OP_MANAGE_PUSH_PAGE_V0 0 #define IRDMA_VCHNL_OP_GET_REG_LAYOUT_V0 0 #define IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP_V0 0 #define IRDMA_VCHNL_OP_QUEUE_VECTOR_UNMAP_V0 0 @@ -55,6 +56,7 @@ enum irdma_vchnl_ops { IRDMA_VCHNL_OP_GET_VER = 0, IRDMA_VCHNL_OP_GET_HMC_FCN = 1, IRDMA_VCHNL_OP_PUT_HMC_FCN = 2, + IRDMA_VCHNL_OP_MANAGE_PUSH_PAGE = 10, IRDMA_VCHNL_OP_GET_REG_LAYOUT = 11, IRDMA_VCHNL_OP_GET_RDMA_CAPS = 13, IRDMA_VCHNL_OP_QUEUE_VECTOR_MAP = 14, @@ -125,6 +127,13 @@ struct irdma_vchnl_init_info { bool is_pf; }; +struct irdma_vchnl_manage_push_page { + u8 page_type; + u8 add; + u32 pg_idx; + u32 qs_handle; +}; + struct irdma_vchnl_reg_info { u32 reg_offset; u16 field_cnt; @@ -167,6 +176,8 @@ int irdma_vchnl_req_put_hmc_fcn(struct irdma_sc_dev *dev); int irdma_vchnl_req_get_caps(struct irdma_sc_dev *dev); int irdma_vchnl_req_get_resp(struct irdma_sc_dev *dev, struct irdma_vchnl_req *vc_req); +int irdma_vchnl_req_manage_push_pg(struct irdma_sc_dev *dev, bool add, + u32 qs_handle, u32 *pg_idx); int irdma_vchnl_req_get_reg_layout(struct irdma_sc_dev *dev); int irdma_vchnl_req_aeq_vec_map(struct irdma_sc_dev *dev, u32 v_idx); int irdma_vchnl_req_ceq_vec_map(struct irdma_sc_dev *dev, u16 ceq_id, diff --git a/include/uapi/rdma/irdma-abi.h b/include/uapi/rdma/irdma-abi.h index f7788d33376b..9c8cee0753f0 100644 --- a/include/uapi/rdma/irdma-abi.h +++ b/include/uapi/rdma/irdma-abi.h @@ -28,6 +28,7 @@ enum { IRDMA_ALLOC_UCTX_MIN_HW_WQ_SIZE = 1 << 1, IRDMA_ALLOC_UCTX_MAX_HW_SRQ_QUANTA = 1 << 2, IRDMA_SUPPORT_WQE_FORMAT_V2 = 1 << 3, + IRDMA_SUPPORT_MAX_HW_PUSH_LEN = 1 << 4, }; struct irdma_alloc_ucontext_req { @@ -58,7 +59,7 @@ struct irdma_alloc_ucontext_resp { __aligned_u64 comp_mask; __u16 min_hw_wq_size; __u32 max_hw_srq_quanta; - __u8 rsvd3[2]; + __u16 max_hw_push_len; }; struct irdma_alloc_pd_resp { From patchwork Sat Aug 24 03:19:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Nikolova, Tatyana E" X-Patchwork-Id: 13776214 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 769C613E881; Sat, 24 Aug 2024 
03:20:58 +0000 (UTC) From: Tatyana Nikolova To: jgg@nvidia.com, leon@kernel.org Cc: linux-rdma@vger.kernel.org, netdev@vger.kernel.org, Shiraz Saleem , Tatyana Nikolova Subject: [RFC v2 25/25] RDMA/irdma: Update Kconfig Date: Fri, 23 Aug 2024 22:19:24 -0500 Message-Id: <20240824031924.421-26-tatyana.e.nikolova@intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20240824031924.421-1-tatyana.e.nikolova@intel.com> References: <20240824031924.421-1-tatyana.e.nikolova@intel.com> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org MIME-Version: 1.0 From: Shiraz Saleem Update Kconfig to add dependency on idpf module. Additionally, add IPU E2000 to list of devices supported.
Signed-off-by: Shiraz Saleem Signed-off-by: Tatyana Nikolova --- drivers/infiniband/hw/irdma/Kconfig | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/infiniband/hw/irdma/Kconfig b/drivers/infiniband/hw/irdma/Kconfig index b6f9c41bca51..f6b39f3a726e 100644 --- a/drivers/infiniband/hw/irdma/Kconfig +++ b/drivers/infiniband/hw/irdma/Kconfig @@ -4,9 +4,10 @@ config INFINIBAND_IRDMA depends on INET depends on IPV6 || !IPV6 depends on PCI - depends on ICE && I40E + depends on (IDPF || ICE) && I40E select GENERIC_ALLOCATOR select AUXILIARY_BUS help - This is an Intel(R) Ethernet Protocol Driver for RDMA driver - that support E810 (iWARP/RoCE) and X722 (iWARP) network devices. + This is an Intel(R) Ethernet Protocol Driver for RDMA that + supports IPU E2000 (RoCEv2), E810 (iWARP/RoCE) and X722 (iWARP) + network devices.
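
To make the push-page sharing scheme in this series easier to follow: each PD now holds a single push page (pd->push_idx) that is carved into IRDMA_QPS_PER_PUSH_PAGE (16) windows of IRDMA_PUSH_WIN_SIZE (256) bytes, pd->push_offset_bmap tracks which windows are in use, and the page is returned to the device only once the bitmap is empty. The standalone C sketch below illustrates only that slot arithmetic; the toy_pd struct and the pd_alloc_push_slot()/pd_free_push_slot() helpers are hypothetical and do not exist in the driver, which additionally serializes callers with pd->push_alloc_mutex and obtains or releases the page via a CQP command or the IRDMA_VCHNL_OP_MANAGE_PUSH_PAGE virtchnl op depending on dev->privileged.

/* Illustrative sketch only -- these helpers are not part of irdma.
 * They model how one push page is shared by up to 16 QPs in a PD,
 * each QP owning a 256-byte window at push_pos * IRDMA_PUSH_WIN_SIZE.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define IRDMA_QPS_PER_PUSH_PAGE 16
#define IRDMA_PUSH_WIN_SIZE     256

struct toy_pd {
	uint32_t push_idx;   /* page index shared by all QPs in the PD */
	uint16_t used_slots; /* stand-in for pd->push_offset_bmap */
};

/* Claim the first free 256-byte window in the PD's push page. */
static bool pd_alloc_push_slot(struct toy_pd *pd, uint32_t *push_offset)
{
	for (unsigned int pos = 0; pos < IRDMA_QPS_PER_PUSH_PAGE; pos++) {
		if (!(pd->used_slots & (1u << pos))) {
			pd->used_slots |= 1u << pos;
			*push_offset = pos * IRDMA_PUSH_WIN_SIZE;
			return true;
		}
	}
	return false; /* page full: the QP gets no push window */
}

/* Release a window; only an empty page may be returned to the device. */
static bool pd_free_push_slot(struct toy_pd *pd, uint32_t push_offset)
{
	pd->used_slots &= ~(1u << (push_offset / IRDMA_PUSH_WIN_SIZE));
	return pd->used_slots == 0; /* true: caller may free pd->push_idx */
}

int main(void)
{
	struct toy_pd pd = { .push_idx = 7, .used_slots = 0 }; /* pretend page 7 was granted */
	uint32_t off = 0;

	pd_alloc_push_slot(&pd, &off); /* first QP:  off == 0   */
	pd_alloc_push_slot(&pd, &off); /* second QP: off == 256 */
	printf("second QP maps page %u at offset %u\n", pd.push_idx, off);
	return 0;
}

Run standalone, this prints "second QP maps page 7 at offset 256", mirroring how a second QP in the same PD reuses the already-allocated page at push_offset = 1 * IRDMA_PUSH_WIN_SIZE instead of requesting another push page from the device.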