From patchwork Thu Oct 20 09:49:09 2016
X-Patchwork-Submitter: "Amrani, Ram"
X-Patchwork-Id: 9386465
From: Ram Amrani <Ram.Amrani@cavium.com>
Subject: [PATCH rdma-core 3/6] libqedr: HSI
Date: Thu, 20 Oct 2016 12:49:09 +0300
Message-ID: <1476956952-17388-4-git-send-email-Ram.Amrani@cavium.com>
In-Reply-To: <1476956952-17388-1-git-send-email-Ram.Amrani@cavium.com>
References: <1476956952-17388-1-git-send-email-Ram.Amrani@cavium.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Ram Amrani <Ram.Amrani@cavium.com>

Introduce the HSI (hardware-software interface) that allows interfacing directly with the NIC.
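A note for reviewers who are new to these headers: every multi-field byte/word/dword in the HSI structures is described by FIELD_MASK/FIELD_SHIFT macro pairs instead of C bitfields, so the layout stays explicit across compilers and endiannesses. Below is a minimal sketch of how a driver typically packs such a field group; the SET_FIELD/GET_FIELD helpers and the roce_dpm_params() wrapper are illustrative only and are not part of this patch (the provider ships its own equivalents):

  #include <stdint.h>

  /* Illustrative helpers, assuming the _MASK/_SHIFT naming used throughout
   * these headers; 'value' must start zeroed for SET_FIELD to be correct. */
  #define SET_FIELD(value, name, field) \
          ((value) |= (((field) & name ## _MASK) << name ## _SHIFT))
  #define GET_FIELD(value, name) \
          (((value) >> name ## _SHIFT) & name ## _MASK)

  /* Example: pack the params dword of struct db_roce_dpm_params for an
   * inline (EDPM) RoCE doorbell. DPM_ROCE and the DB_ROCE_DPM_PARAMS_*
   * macros come from the headers added by this patch. */
  static uint32_t roce_dpm_params(uint8_t opcode, uint16_t wqe_size)
  {
          uint32_t p = 0;

          SET_FIELD(p, DB_ROCE_DPM_PARAMS_DPM_TYPE, DPM_ROCE);
          SET_FIELD(p, DB_ROCE_DPM_PARAMS_OPCODE, opcode);
          SET_FIELD(p, DB_ROCE_DPM_PARAMS_WQE_SIZE, wqe_size);
          SET_FIELD(p, DB_ROCE_DPM_PARAMS_COMPLETION_FLG, 1);

          return p; /* the struct field is __le32; store with htole32() */
  }

The same pattern applies to every _MASK/_SHIFT pair in common_hsi.h and qelr_hsi_rdma.h below.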
Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
---
 providers/qedr/common_hsi.h    | 1502 ++++++++++++++++++++++++++++++++++++++++
 providers/qedr/qelr_hsi.h      |   67 ++
 providers/qedr/qelr_hsi_rdma.h |  914 ++++++++++++++++++++++++
 providers/qedr/rdma_common.h   |   74 ++
 providers/qedr/roce_common.h   |   50 ++
 5 files changed, 2607 insertions(+)
 create mode 100644 providers/qedr/common_hsi.h
 create mode 100644 providers/qedr/qelr_hsi.h
 create mode 100644 providers/qedr/qelr_hsi_rdma.h
 create mode 100644 providers/qedr/rdma_common.h
 create mode 100644 providers/qedr/roce_common.h

diff --git a/providers/qedr/common_hsi.h b/providers/qedr/common_hsi.h
new file mode 100644
index 0000000..8027866
--- /dev/null
+++ b/providers/qedr/common_hsi.h
@@ -0,0 +1,1502 @@
+/*
+ * Copyright (c) 2015-2016 QLogic Corporation
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ *   copyright notice, this list of conditions and the following
+ *   disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ *   copyright notice, this list of conditions and the following
+ *   disclaimer in the documentation and/or other materials
+ *   provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __COMMON_HSI__
+#define __COMMON_HSI__
+/********************************/
+/* PROTOCOL COMMON FW CONSTANTS */
+/********************************/
+
+/* Temporary constants; should be added to the HSI automatically by the
+ * resource allocation tool.
+ */
+#define T_TEST_AGG_INT_TEMP 6
+#define M_TEST_AGG_INT_TEMP 8
+#define U_TEST_AGG_INT_TEMP 6
+#define X_TEST_AGG_INT_TEMP 14
+#define Y_TEST_AGG_INT_TEMP 4
+#define P_TEST_AGG_INT_TEMP 4
+
+#define X_FINAL_CLEANUP_AGG_INT 1
+
+#define EVENT_RING_PAGE_SIZE_BYTES 4096
+
+#define NUM_OF_GLOBAL_QUEUES 128
+#define COMMON_QUEUE_ENTRY_MAX_BYTE_SIZE 64
+
+#define ISCSI_CDU_TASK_SEG_TYPE 0
+#define FCOE_CDU_TASK_SEG_TYPE 0
+#define RDMA_CDU_TASK_SEG_TYPE 1
+
+#define FW_ASSERT_GENERAL_ATTN_IDX 32
+
+#define MAX_PINNED_CCFC 32
+
+#define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE 3
+
+/* Queue Zone sizes in bytes */
+#define TSTORM_QZONE_SIZE 8 /* tstorm_scsi_queue_zone */
+#define MSTORM_QZONE_SIZE 16 /* mstorm_eth_queue_zone. Used only for RX producer of VFs in backward compatibility mode. */
+#define USTORM_QZONE_SIZE 8 /* ustorm_eth_queue_zone */
+#define XSTORM_QZONE_SIZE 8 /* xstorm_eth_queue_zone */
+#define YSTORM_QZONE_SIZE 0
+#define PSTORM_QZONE_SIZE 0
+
+#define MSTORM_VF_ZONE_DEFAULT_SIZE_LOG 7 /* Log of mstorm default VF zone size. */
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_DEFAULT 16 /* Maximum number of RX queues that can be allocated to a VF by default */
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_DOUBLE 48 /* Maximum number of RX queues that can be allocated to a VF with doubled VF zone size. Up to 96 VFs supported in this mode */
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD 112 /* Maximum number of RX queues that can be allocated to a VF with quadrupled VF zone size. Up to 48 VFs supported in this mode */
+
+
+/********************************/
+/* CORE (LIGHT L2) FW CONSTANTS */
+/********************************/
+
+#define CORE_LL2_MAX_RAMROD_PER_CON 8
+#define CORE_LL2_TX_BD_PAGE_SIZE_BYTES 4096
+#define CORE_LL2_RX_BD_PAGE_SIZE_BYTES 4096
+#define CORE_LL2_RX_CQE_PAGE_SIZE_BYTES 4096
+#define CORE_LL2_RX_NUM_NEXT_PAGE_BDS 1
+
+#define CORE_LL2_TX_MAX_BDS_PER_PACKET 12
+
+#define CORE_SPQE_PAGE_SIZE_BYTES 4096
+
+#define MAX_NUM_LL2_RX_QUEUES 32
+#define MAX_NUM_LL2_TX_STATS_COUNTERS 32
+
+
+///////////////////////////////////////////////////////////////////////////////////////////////////
+// Include firmware version number only - do not add constants here to avoid redundant compilations
+///////////////////////////////////////////////////////////////////////////////////////////////////
+
+
+#define FW_MAJOR_VERSION 8
+#define FW_MINOR_VERSION 10
+#define FW_REVISION_VERSION 9
+#define FW_ENGINEERING_VERSION 0
+
+/***********************/
+/* COMMON HW CONSTANTS */
+/***********************/
+
+/* PCI functions */
+#define MAX_NUM_PORTS_K2 (4)
+#define MAX_NUM_PORTS_BB (2)
+#define MAX_NUM_PORTS (MAX_NUM_PORTS_K2)
+
+#define MAX_NUM_PFS_K2 (16)
+#define MAX_NUM_PFS_BB (8)
+#define MAX_NUM_PFS (MAX_NUM_PFS_K2)
+#define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
+
+#define MAX_NUM_VFS_K2 (192)
+#define MAX_NUM_VFS_BB (120)
+#define MAX_NUM_VFS (MAX_NUM_VFS_K2)
+
+#define MAX_NUM_FUNCTIONS_BB (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
+#define MAX_NUM_FUNCTIONS_K2 (MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
+#define MAX_NUM_FUNCTIONS (MAX_NUM_PFS + MAX_NUM_VFS)
+
+/* In both BB and K2, the VF number starts from 16. So for arrays containing all */
+/* possible PFs and VFs - we need a constant for this size. */
+#define MAX_FUNCTION_NUMBER_BB (MAX_NUM_PFS + MAX_NUM_VFS_BB)
+#define MAX_FUNCTION_NUMBER_K2 (MAX_NUM_PFS + MAX_NUM_VFS_K2)
+#define MAX_FUNCTION_NUMBER (MAX_NUM_PFS + MAX_NUM_VFS)
+
+#define MAX_NUM_VPORTS_K2 (208)
+#define MAX_NUM_VPORTS_BB (160)
+#define MAX_NUM_VPORTS (MAX_NUM_VPORTS_K2)
+
+#define MAX_NUM_L2_QUEUES_K2 (320)
+#define MAX_NUM_L2_QUEUES_BB (256)
+#define MAX_NUM_L2_QUEUES (MAX_NUM_L2_QUEUES_K2)
+
+/* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
+// 4-Port K2.
+#define NUM_PHYS_TCS_4PORT_K2 (4) +#define NUM_OF_PHYS_TCS (8) + +#define NUM_TCS_4PORT_K2 (NUM_PHYS_TCS_4PORT_K2 + 1) +#define NUM_OF_TCS (NUM_OF_PHYS_TCS + 1) + +#define LB_TC (NUM_OF_PHYS_TCS) + +/* Num of possible traffic priority values */ +#define NUM_OF_PRIO (8) + +#define MAX_NUM_VOQS_K2 (NUM_TCS_4PORT_K2 * MAX_NUM_PORTS_K2) +#define MAX_NUM_VOQS_BB (NUM_OF_TCS * MAX_NUM_PORTS_BB) +#define MAX_NUM_VOQS (MAX_NUM_VOQS_K2) +#define MAX_PHYS_VOQS (NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB) + +/* CIDs */ +#define NUM_OF_CONNECTION_TYPES (8) +#define NUM_OF_LCIDS (320) +#define NUM_OF_LTIDS (320) + +/* Clock values */ +#define MASTER_CLK_FREQ_E4 (375e6) +#define STORM_CLK_FREQ_E4 (1000e6) +#define CLK25M_CLK_FREQ_E4 (25e6) + +/* Global PXP windows (GTT) */ +#define NUM_OF_GTT 19 +#define GTT_DWORD_SIZE_BITS 10 +#define GTT_BYTE_SIZE_BITS (GTT_DWORD_SIZE_BITS + 2) +#define GTT_DWORD_SIZE (1 << GTT_DWORD_SIZE_BITS) + +/* Tools Version */ +#define TOOLS_VERSION 10 +/*****************/ +/* CDU CONSTANTS */ +/*****************/ + +#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (17) +#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0x1ffff) + +#define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (12) +#define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0xfff) + + +/*****************/ +/* DQ CONSTANTS */ +/*****************/ + +/* DEMS */ +#define DQ_DEMS_LEGACY 0 +#define DQ_DEMS_TOE_MORE_TO_SEND 3 +#define DQ_DEMS_TOE_LOCAL_ADV_WND 4 +#define DQ_DEMS_ROCE_CQ_CONS 7 + +/* XCM agg val selection (HW) */ +#define DQ_XCM_AGG_VAL_SEL_WORD2 0 +#define DQ_XCM_AGG_VAL_SEL_WORD3 1 +#define DQ_XCM_AGG_VAL_SEL_WORD4 2 +#define DQ_XCM_AGG_VAL_SEL_WORD5 3 +#define DQ_XCM_AGG_VAL_SEL_REG3 4 +#define DQ_XCM_AGG_VAL_SEL_REG4 5 +#define DQ_XCM_AGG_VAL_SEL_REG5 6 +#define DQ_XCM_AGG_VAL_SEL_REG6 7 + +/* XCM agg val selection (FW) */ +#define DQ_XCM_CORE_TX_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD3 +#define DQ_XCM_CORE_TX_BD_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_CORE_SPQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_ETH_EDPM_NUM_BDS_CMD DQ_XCM_AGG_VAL_SEL_WORD2 +#define DQ_XCM_ETH_TX_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD3 +#define DQ_XCM_ETH_TX_BD_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD5 +#define DQ_XCM_FCOE_SQ_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD3 +#define DQ_XCM_FCOE_SQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_FCOE_X_FERQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD5 +#define DQ_XCM_ISCSI_SQ_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD3 +#define DQ_XCM_ISCSI_SQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_ISCSI_MORE_TO_SEND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG3 +#define DQ_XCM_ISCSI_EXP_STAT_SN_CMD DQ_XCM_AGG_VAL_SEL_REG6 +#define DQ_XCM_ROCE_SQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_TOE_TX_BD_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4 +#define DQ_XCM_TOE_MORE_TO_SEND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG3 +#define DQ_XCM_TOE_LOCAL_ADV_WND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG4 + +/* UCM agg val selection (HW) */ +#define DQ_UCM_AGG_VAL_SEL_WORD0 0 +#define DQ_UCM_AGG_VAL_SEL_WORD1 1 +#define DQ_UCM_AGG_VAL_SEL_WORD2 2 +#define DQ_UCM_AGG_VAL_SEL_WORD3 3 +#define DQ_UCM_AGG_VAL_SEL_REG0 4 +#define DQ_UCM_AGG_VAL_SEL_REG1 5 +#define DQ_UCM_AGG_VAL_SEL_REG2 6 +#define DQ_UCM_AGG_VAL_SEL_REG3 7 + +/* UCM agg val selection (FW) */ +#define DQ_UCM_ETH_PMD_TX_CONS_CMD DQ_UCM_AGG_VAL_SEL_WORD2 +#define DQ_UCM_ETH_PMD_RX_CONS_CMD DQ_UCM_AGG_VAL_SEL_WORD3 +#define DQ_UCM_ROCE_CQ_CONS_CMD DQ_UCM_AGG_VAL_SEL_REG0 +#define DQ_UCM_ROCE_CQ_PROD_CMD DQ_UCM_AGG_VAL_SEL_REG2 + +/* TCM agg val selection (HW) */ 
+#define DQ_TCM_AGG_VAL_SEL_WORD0 0 +#define DQ_TCM_AGG_VAL_SEL_WORD1 1 +#define DQ_TCM_AGG_VAL_SEL_WORD2 2 +#define DQ_TCM_AGG_VAL_SEL_WORD3 3 +#define DQ_TCM_AGG_VAL_SEL_REG1 4 +#define DQ_TCM_AGG_VAL_SEL_REG2 5 +#define DQ_TCM_AGG_VAL_SEL_REG6 6 +#define DQ_TCM_AGG_VAL_SEL_REG9 7 + +/* TCM agg val selection (FW) */ +#define DQ_TCM_L2B_BD_PROD_CMD DQ_TCM_AGG_VAL_SEL_WORD1 +#define DQ_TCM_ROCE_RQ_PROD_CMD DQ_TCM_AGG_VAL_SEL_WORD0 + +/* XCM agg counter flag selection (HW) */ +#define DQ_XCM_AGG_FLG_SHIFT_BIT14 0 +#define DQ_XCM_AGG_FLG_SHIFT_BIT15 1 +#define DQ_XCM_AGG_FLG_SHIFT_CF12 2 +#define DQ_XCM_AGG_FLG_SHIFT_CF13 3 +#define DQ_XCM_AGG_FLG_SHIFT_CF18 4 +#define DQ_XCM_AGG_FLG_SHIFT_CF19 5 +#define DQ_XCM_AGG_FLG_SHIFT_CF22 6 +#define DQ_XCM_AGG_FLG_SHIFT_CF23 7 + +/* XCM agg counter flag selection (FW) */ +#define DQ_XCM_CORE_DQ_CF_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF18) +#define DQ_XCM_CORE_TERMINATE_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF19) +#define DQ_XCM_CORE_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22) +#define DQ_XCM_ETH_DQ_CF_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF18) +#define DQ_XCM_ETH_TERMINATE_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF19) +#define DQ_XCM_ETH_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22) +#define DQ_XCM_ETH_TPH_EN_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF23) +#define DQ_XCM_FCOE_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22) +#define DQ_XCM_ISCSI_DQ_FLUSH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF19) +#define DQ_XCM_ISCSI_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22) +#define DQ_XCM_ISCSI_PROC_ONLY_CLEANUP_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF23) +#define DQ_XCM_TOE_DQ_FLUSH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF19) +#define DQ_XCM_TOE_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22) + +/* UCM agg counter flag selection (HW) */ +#define DQ_UCM_AGG_FLG_SHIFT_CF0 0 +#define DQ_UCM_AGG_FLG_SHIFT_CF1 1 +#define DQ_UCM_AGG_FLG_SHIFT_CF3 2 +#define DQ_UCM_AGG_FLG_SHIFT_CF4 3 +#define DQ_UCM_AGG_FLG_SHIFT_CF5 4 +#define DQ_UCM_AGG_FLG_SHIFT_CF6 5 +#define DQ_UCM_AGG_FLG_SHIFT_RULE0EN 6 +#define DQ_UCM_AGG_FLG_SHIFT_RULE1EN 7 + +/* UCM agg counter flag selection (FW) */ +#define DQ_UCM_ETH_PMD_TX_ARM_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF4) +#define DQ_UCM_ETH_PMD_RX_ARM_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF5) +#define DQ_UCM_ROCE_CQ_ARM_SE_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF4) +#define DQ_UCM_ROCE_CQ_ARM_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF5) +#define DQ_UCM_TOE_TIMER_STOP_ALL_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF3) +#define DQ_UCM_TOE_SLOW_PATH_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF4) +#define DQ_UCM_TOE_DQ_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF5) + +/* TCM agg counter flag selection (HW) */ +#define DQ_TCM_AGG_FLG_SHIFT_CF0 0 +#define DQ_TCM_AGG_FLG_SHIFT_CF1 1 +#define DQ_TCM_AGG_FLG_SHIFT_CF2 2 +#define DQ_TCM_AGG_FLG_SHIFT_CF3 3 +#define DQ_TCM_AGG_FLG_SHIFT_CF4 4 +#define DQ_TCM_AGG_FLG_SHIFT_CF5 5 +#define DQ_TCM_AGG_FLG_SHIFT_CF6 6 +#define DQ_TCM_AGG_FLG_SHIFT_CF7 7 + +/* TCM agg counter flag selection (FW) */ +#define DQ_TCM_FCOE_FLUSH_Q0_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1) +#define DQ_TCM_FCOE_DUMMY_TIMER_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF2) +#define DQ_TCM_FCOE_TIMER_STOP_ALL_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF3) +#define DQ_TCM_ISCSI_FLUSH_Q0_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1) +#define DQ_TCM_ISCSI_TIMER_STOP_ALL_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF3) +#define DQ_TCM_TOE_FLUSH_Q0_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1) +#define DQ_TCM_TOE_TIMER_STOP_ALL_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF3) +#define DQ_TCM_IWARP_POST_RQ_CF_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1) + +/* PWM address mapping */ +#define DQ_PWM_OFFSET_DPM_BASE 0x0 
+#define DQ_PWM_OFFSET_DPM_END 0x27 +#define DQ_PWM_OFFSET_XCM16_BASE 0x40 +#define DQ_PWM_OFFSET_XCM32_BASE 0x44 +#define DQ_PWM_OFFSET_UCM16_BASE 0x48 +#define DQ_PWM_OFFSET_UCM32_BASE 0x4C +#define DQ_PWM_OFFSET_UCM16_4 0x50 +#define DQ_PWM_OFFSET_TCM16_BASE 0x58 +#define DQ_PWM_OFFSET_TCM32_BASE 0x5C +#define DQ_PWM_OFFSET_XCM_FLAGS 0x68 +#define DQ_PWM_OFFSET_UCM_FLAGS 0x69 +#define DQ_PWM_OFFSET_TCM_FLAGS 0x6B + +#define DQ_PWM_OFFSET_XCM_RDMA_SQ_PROD (DQ_PWM_OFFSET_XCM16_BASE + 2) +#define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_32BIT (DQ_PWM_OFFSET_UCM32_BASE) +#define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_16BIT (DQ_PWM_OFFSET_UCM16_4) +#define DQ_PWM_OFFSET_UCM_RDMA_INT_TIMEOUT (DQ_PWM_OFFSET_UCM16_BASE + 2) +#define DQ_PWM_OFFSET_UCM_RDMA_ARM_FLAGS (DQ_PWM_OFFSET_UCM_FLAGS) +#define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD (DQ_PWM_OFFSET_TCM16_BASE + 1) +#define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD (DQ_PWM_OFFSET_TCM16_BASE + 3) + +#define DQ_REGION_SHIFT (12) + +/* DPM */ +#define DQ_DPM_WQE_BUFF_SIZE (320) + +// Conn type ranges +#define DQ_CONN_TYPE_RANGE_SHIFT (4) + +/*****************/ +/* QM CONSTANTS */ +/*****************/ + +/* number of TX queues in the QM */ +#define MAX_QM_TX_QUEUES_K2 512 +#define MAX_QM_TX_QUEUES_BB 448 +#define MAX_QM_TX_QUEUES MAX_QM_TX_QUEUES_K2 + +/* number of Other queues in the QM */ +#define MAX_QM_OTHER_QUEUES_BB 64 +#define MAX_QM_OTHER_QUEUES_K2 128 +#define MAX_QM_OTHER_QUEUES MAX_QM_OTHER_QUEUES_K2 + +/* number of queues in a PF queue group */ +#define QM_PF_QUEUE_GROUP_SIZE 8 + +/* the size of a single queue element in bytes */ +#define QM_PQ_ELEMENT_SIZE 4 + +/* base number of Tx PQs in the CM PQ representation. + should be used when storing PQ IDs in CM PQ registers and context */ +#define CM_TX_PQ_BASE 0x200 + +/* number of global Vport/QCN rate limiters */ +#define MAX_QM_GLOBAL_RLS 256 + +/* QM registers data */ +#define QM_LINE_CRD_REG_WIDTH 16 +#define QM_LINE_CRD_REG_SIGN_BIT (1 << (QM_LINE_CRD_REG_WIDTH - 1)) +#define QM_BYTE_CRD_REG_WIDTH 24 +#define QM_BYTE_CRD_REG_SIGN_BIT (1 << (QM_BYTE_CRD_REG_WIDTH - 1)) +#define QM_WFQ_CRD_REG_WIDTH 32 +#define QM_WFQ_CRD_REG_SIGN_BIT (1 << (QM_WFQ_CRD_REG_WIDTH - 1)) +#define QM_RL_CRD_REG_WIDTH 32 +#define QM_RL_CRD_REG_SIGN_BIT (1 << (QM_RL_CRD_REG_WIDTH - 1)) + +/*****************/ +/* CAU CONSTANTS */ +/*****************/ + +#define CAU_FSM_ETH_RX 0 +#define CAU_FSM_ETH_TX 1 + +/* Number of Protocol Indices per Status Block */ +#define PIS_PER_SB 12 + + +#define CAU_HC_STOPPED_STATE 3 /* fsm is stopped or not valid for this sb */ +#define CAU_HC_DISABLE_STATE 4 /* fsm is working without interrupt coalescing for this sb*/ +#define CAU_HC_ENABLE_STATE 0 /* fsm is working with interrupt coalescing for this sb*/ + + +/*****************/ +/* IGU CONSTANTS */ +/*****************/ + +#define MAX_SB_PER_PATH_K2 (368) +#define MAX_SB_PER_PATH_BB (288) +#define MAX_TOT_SB_PER_PATH MAX_SB_PER_PATH_K2 + +#define MAX_SB_PER_PF_MIMD 129 +#define MAX_SB_PER_PF_SIMD 64 +#define MAX_SB_PER_VF 64 + +/* Memory addresses on the BAR for the IGU Sub Block */ +#define IGU_MEM_BASE 0x0000 + +#define IGU_MEM_MSIX_BASE 0x0000 +#define IGU_MEM_MSIX_UPPER 0x0101 +#define IGU_MEM_MSIX_RESERVED_UPPER 0x01ff + +#define IGU_MEM_PBA_MSIX_BASE 0x0200 +#define IGU_MEM_PBA_MSIX_UPPER 0x0202 +#define IGU_MEM_PBA_MSIX_RESERVED_UPPER 0x03ff + +#define IGU_CMD_INT_ACK_BASE 0x0400 +#define IGU_CMD_INT_ACK_UPPER (IGU_CMD_INT_ACK_BASE + MAX_TOT_SB_PER_PATH - 1) +#define IGU_CMD_INT_ACK_RESERVED_UPPER 0x05ff + +#define IGU_CMD_ATTN_BIT_UPD_UPPER 0x05f0 
+#define IGU_CMD_ATTN_BIT_SET_UPPER 0x05f1 +#define IGU_CMD_ATTN_BIT_CLR_UPPER 0x05f2 + +#define IGU_REG_SISR_MDPC_WMASK_UPPER 0x05f3 +#define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER 0x05f4 +#define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER 0x05f5 +#define IGU_REG_SISR_MDPC_WOMASK_UPPER 0x05f6 + +#define IGU_CMD_PROD_UPD_BASE 0x0600 +#define IGU_CMD_PROD_UPD_UPPER (IGU_CMD_PROD_UPD_BASE + MAX_TOT_SB_PER_PATH - 1) +#define IGU_CMD_PROD_UPD_RESERVED_UPPER 0x07ff + +/*****************/ +/* PXP CONSTANTS */ +/*****************/ + +/* Bars for Blocks */ +#define PXP_BAR_GRC 0 +#define PXP_BAR_TSDM 0 +#define PXP_BAR_USDM 0 +#define PXP_BAR_XSDM 0 +#define PXP_BAR_MSDM 0 +#define PXP_BAR_YSDM 0 +#define PXP_BAR_PSDM 0 +#define PXP_BAR_IGU 0 +#define PXP_BAR_DQ 1 + +/* PTT and GTT */ +#define PXP_NUM_PF_WINDOWS 12 +#define PXP_PER_PF_ENTRY_SIZE 8 +#define PXP_NUM_GLOBAL_WINDOWS 243 +#define PXP_GLOBAL_ENTRY_SIZE 4 +#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH 4 +#define PXP_PF_WINDOW_ADMIN_START 0 +#define PXP_PF_WINDOW_ADMIN_LENGTH 0x1000 +#define PXP_PF_WINDOW_ADMIN_END (PXP_PF_WINDOW_ADMIN_START + PXP_PF_WINDOW_ADMIN_LENGTH - 1) +#define PXP_PF_WINDOW_ADMIN_PER_PF_START 0 +#define PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH (PXP_NUM_PF_WINDOWS * PXP_PER_PF_ENTRY_SIZE) +#define PXP_PF_WINDOW_ADMIN_PER_PF_END (PXP_PF_WINDOW_ADMIN_PER_PF_START + PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1) +#define PXP_PF_WINDOW_ADMIN_GLOBAL_START 0x200 +#define PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH (PXP_NUM_GLOBAL_WINDOWS * PXP_GLOBAL_ENTRY_SIZE) +#define PXP_PF_WINDOW_ADMIN_GLOBAL_END (PXP_PF_WINDOW_ADMIN_GLOBAL_START + PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1) +#define PXP_PF_GLOBAL_PRETEND_ADDR 0x1f0 +#define PXP_PF_ME_OPAQUE_MASK_ADDR 0xf4 +#define PXP_PF_ME_OPAQUE_ADDR 0x1f8 +#define PXP_PF_ME_CONCRETE_ADDR 0x1fc + +#define PXP_EXTERNAL_BAR_PF_WINDOW_START 0x1000 +#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM PXP_NUM_PF_WINDOWS +#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE 0x1000 +#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH (PXP_EXTERNAL_BAR_PF_WINDOW_NUM * PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE) +#define PXP_EXTERNAL_BAR_PF_WINDOW_END (PXP_EXTERNAL_BAR_PF_WINDOW_START + PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH - 1) + +#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START (PXP_EXTERNAL_BAR_PF_WINDOW_END + 1) +#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM PXP_NUM_GLOBAL_WINDOWS +#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE 0x1000 +#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE) +#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1) + +/* PF BAR */ +//#define PXP_BAR0_START_GRC 0x1000 +//#define PXP_BAR0_GRC_LENGTH 0xBFF000 +#define PXP_BAR0_START_GRC 0x0000 +#define PXP_BAR0_GRC_LENGTH 0x1C00000 +#define PXP_BAR0_END_GRC (PXP_BAR0_START_GRC + PXP_BAR0_GRC_LENGTH - 1) + +#define PXP_BAR0_START_IGU 0x1C00000 +#define PXP_BAR0_IGU_LENGTH 0x10000 +#define PXP_BAR0_END_IGU (PXP_BAR0_START_IGU + PXP_BAR0_IGU_LENGTH - 1) + +#define PXP_BAR0_START_TSDM 0x1C80000 +#define PXP_BAR0_SDM_LENGTH 0x40000 +#define PXP_BAR0_SDM_RESERVED_LENGTH 0x40000 +#define PXP_BAR0_END_TSDM (PXP_BAR0_START_TSDM + PXP_BAR0_SDM_LENGTH - 1) + +#define PXP_BAR0_START_MSDM 0x1D00000 +#define PXP_BAR0_END_MSDM (PXP_BAR0_START_MSDM + PXP_BAR0_SDM_LENGTH - 1) + +#define PXP_BAR0_START_USDM 0x1D80000 +#define PXP_BAR0_END_USDM (PXP_BAR0_START_USDM + PXP_BAR0_SDM_LENGTH - 1) + +#define PXP_BAR0_START_XSDM 0x1E00000 +#define PXP_BAR0_END_XSDM 
(PXP_BAR0_START_XSDM + PXP_BAR0_SDM_LENGTH - 1) + +#define PXP_BAR0_START_YSDM 0x1E80000 +#define PXP_BAR0_END_YSDM (PXP_BAR0_START_YSDM + PXP_BAR0_SDM_LENGTH - 1) + +#define PXP_BAR0_START_PSDM 0x1F00000 +#define PXP_BAR0_END_PSDM (PXP_BAR0_START_PSDM + PXP_BAR0_SDM_LENGTH - 1) + +#define PXP_BAR0_FIRST_INVALID_ADDRESS (PXP_BAR0_END_PSDM + 1) + +/* VF BAR */ +#define PXP_VF_BAR0 0 + +#define PXP_VF_BAR0_START_GRC 0x3E00 +#define PXP_VF_BAR0_GRC_LENGTH 0x200 +#define PXP_VF_BAR0_END_GRC (PXP_VF_BAR0_START_GRC + PXP_VF_BAR0_GRC_LENGTH - 1) + +#define PXP_VF_BAR0_START_IGU 0 +#define PXP_VF_BAR0_IGU_LENGTH 0x3000 +#define PXP_VF_BAR0_END_IGU (PXP_VF_BAR0_START_IGU + PXP_VF_BAR0_IGU_LENGTH - 1) + +#define PXP_VF_BAR0_START_DQ 0x3000 +#define PXP_VF_BAR0_DQ_LENGTH 0x200 +#define PXP_VF_BAR0_DQ_OPAQUE_OFFSET 0 +#define PXP_VF_BAR0_ME_OPAQUE_ADDRESS (PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_OPAQUE_OFFSET) +#define PXP_VF_BAR0_ME_CONCRETE_ADDRESS (PXP_VF_BAR0_ME_OPAQUE_ADDRESS + 4) +#define PXP_VF_BAR0_END_DQ (PXP_VF_BAR0_START_DQ + PXP_VF_BAR0_DQ_LENGTH - 1) + +#define PXP_VF_BAR0_START_TSDM_ZONE_B 0x3200 +#define PXP_VF_BAR0_SDM_LENGTH_ZONE_B 0x200 +#define PXP_VF_BAR0_END_TSDM_ZONE_B (PXP_VF_BAR0_START_TSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1) + +#define PXP_VF_BAR0_START_MSDM_ZONE_B 0x3400 +#define PXP_VF_BAR0_END_MSDM_ZONE_B (PXP_VF_BAR0_START_MSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1) + +#define PXP_VF_BAR0_START_USDM_ZONE_B 0x3600 +#define PXP_VF_BAR0_END_USDM_ZONE_B (PXP_VF_BAR0_START_USDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1) + +#define PXP_VF_BAR0_START_XSDM_ZONE_B 0x3800 +#define PXP_VF_BAR0_END_XSDM_ZONE_B (PXP_VF_BAR0_START_XSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1) + +#define PXP_VF_BAR0_START_YSDM_ZONE_B 0x3a00 +#define PXP_VF_BAR0_END_YSDM_ZONE_B (PXP_VF_BAR0_START_YSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1) + +#define PXP_VF_BAR0_START_PSDM_ZONE_B 0x3c00 +#define PXP_VF_BAR0_END_PSDM_ZONE_B (PXP_VF_BAR0_START_PSDM_ZONE_B + PXP_VF_BAR0_SDM_LENGTH_ZONE_B - 1) + +#define PXP_VF_BAR0_START_SDM_ZONE_A 0x4000 +#define PXP_VF_BAR0_END_SDM_ZONE_A 0x10000 + +#define PXP_VF_BAR0_GRC_WINDOW_LENGTH 32 + +#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN 12 +#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER 1024 + +// ILT Records +#define PXP_NUM_ILT_RECORDS_BB 7600 +#define PXP_NUM_ILT_RECORDS_K2 11000 +#define MAX_NUM_ILT_RECORDS MAX(PXP_NUM_ILT_RECORDS_BB,PXP_NUM_ILT_RECORDS_K2) + + +// Host Interface +#define PXP_QUEUES_ZONE_MAX_NUM 320 + + + + +/*****************/ +/* PRM CONSTANTS */ +/*****************/ +#define PRM_DMA_PAD_BYTES_NUM 2 +/*****************/ +/* SDMs CONSTANTS */ +/*****************/ + + +#define SDM_OP_GEN_TRIG_NONE 0 +#define SDM_OP_GEN_TRIG_WAKE_THREAD 1 +#define SDM_OP_GEN_TRIG_AGG_INT 2 +#define SDM_OP_GEN_TRIG_LOADER 4 +#define SDM_OP_GEN_TRIG_INDICATE_ERROR 6 +#define SDM_OP_GEN_TRIG_RELEASE_THREAD 7 + +///////////////////////////////////////////////////////////// +// Completion types +///////////////////////////////////////////////////////////// + +#define SDM_COMP_TYPE_NONE 0 +#define SDM_COMP_TYPE_WAKE_THREAD 1 +#define SDM_COMP_TYPE_AGG_INT 2 +#define SDM_COMP_TYPE_CM 3 // Send direct message to local CM and/or remote CMs. Destinations are defined by vector in CompParams. 
+#define SDM_COMP_TYPE_LOADER 4
+#define SDM_COMP_TYPE_PXP 5 // Send direct message to PXP (like "internal write" command) to write to remote Storm RAM via remote SDM
+#define SDM_COMP_TYPE_INDICATE_ERROR 6 // Indicate error per thread
+#define SDM_COMP_TYPE_RELEASE_THREAD 7
+#define SDM_COMP_TYPE_RAM 8 // Write to local RAM as a completion
+
+
+/******************/
+/* PBF CONSTANTS  */
+/******************/
+
+/* Number of PBF command queue lines. Each line is 32B. */
+#define PBF_MAX_CMD_LINES 3328
+
+/* Number of BTB blocks. Each block is 256B. */
+#define BTB_MAX_BLOCKS 1440
+
+/*****************/
+/* PRS CONSTANTS */
+/*****************/
+
+#define PRS_GFT_CAM_LINES_NO_MATCH 31
+
+/*
+ * Async data KCQ CQE
+ */
+struct async_data
+{
+	__le32 cid /* Context ID of the connection */;
+	__le16 itid /* Task ID of the task (for an error that happened on a task) */;
+	uint8_t error_code /* error code - relevant only if the opcode indicates an error */;
+	uint8_t fw_debug_param /* internal fw debug parameter */;
+};
+
+
+/*
+ * Interrupt coalescing TimeSet
+ */
+struct coalescing_timeset
+{
+	uint8_t value;
+#define COALESCING_TIMESET_TIMESET_MASK 0x7F /* Interrupt coalescing TimeSet (timeout_ticks = TimeSet shl (TimerRes+1)) */
+#define COALESCING_TIMESET_TIMESET_SHIFT 0
+#define COALESCING_TIMESET_VALID_MASK 0x1 /* Only if this flag is set, timeset will take effect */
+#define COALESCING_TIMESET_VALID_SHIFT 7
+};
+
+
+struct common_queue_zone
+{
+	__le16 ring_drv_data_consumer;
+	__le16 reserved;
+};
+
+
+/*
+ * ETH Rx producers data
+ */
+struct eth_rx_prod_data
+{
+	__le16 bd_prod /* BD producer. */;
+	__le16 cqe_prod /* CQE producer. */;
+};
+
+
+struct regpair
+{
+	__le32 lo /* low word for reg-pair */;
+	__le32 hi /* high word for reg-pair */;
+};
+
+/*
+ * Event Ring VF-PF Channel data
+ */
+struct vf_pf_channel_eqe_data
+{
+	struct regpair msg_addr /* VF-PF message address */;
+};
+
+struct iscsi_eqe_data
+{
+	__le32 cid /* Context ID of the connection */;
+	__le16 conn_id /* Task ID of the task (for an error that happened on a task) */;
+	uint8_t error_code /* error code - relevant only if the opcode indicates an error */;
+	uint8_t error_pdu_opcode_reserved;
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_MASK 0x3F /* Opcode of the processed PDU on which the error happened - updated for specific error codes; by default 0xFF */
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_SHIFT 0
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_VALID_MASK 0x1 /* Indicates to the driver whether the error_pdu_opcode field holds a valid value */
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_VALID_SHIFT 6
+#define ISCSI_EQE_DATA_RESERVED0_MASK 0x1
+#define ISCSI_EQE_DATA_RESERVED0_SHIFT 7
+};
+
+/*
+ * Event Ring malicious VF data
+ */
+struct malicious_vf_eqe_data
+{
+	uint8_t vfId /* Malicious VF ID */;
+	uint8_t errId /* Malicious VF error */;
+	__le16 reserved[3];
+};
+
+/*
+ * Event Ring initial cleanup data
+ */
+struct initial_cleanup_eqe_data
+{
+	uint8_t vfId /* VF ID */;
+	uint8_t reserved[7];
+};
+
+/*
+ * Event Data Union
+ */
+union event_ring_data
+{
+	uint8_t bytes[8] /* Byte Array */;
+	struct vf_pf_channel_eqe_data vf_pf_channel /* VF-PF Channel data */;
+	struct iscsi_eqe_data iscsi_info /* Dedicated fields for iSCSI data */;
+	struct regpair roceHandle /* Dedicated field for RoCE affiliated asynchronous error */;
+	struct malicious_vf_eqe_data malicious_vf /* Malicious VF data */;
+	struct initial_cleanup_eqe_data vf_init_cleanup /* VF Initial Cleanup data */;
+	struct regpair iwarp_handle /* Host handle for the
Async Completions */; +}; + + +/* + * Event Ring Entry + */ +struct event_ring_entry +{ + uint8_t protocol_id /* Event Protocol ID */; + uint8_t opcode /* Event Opcode */; + __le16 reserved0 /* Reserved */; + __le16 echo /* Echo value from ramrod data on the host */; + uint8_t fw_return_code /* FW return code for SP ramrods */; + uint8_t flags; +#define EVENT_RING_ENTRY_ASYNC_MASK 0x1 /* 0: synchronous EQE - a completion of SP message. 1: asynchronous EQE */ +#define EVENT_RING_ENTRY_ASYNC_SHIFT 0 +#define EVENT_RING_ENTRY_RESERVED1_MASK 0x7F +#define EVENT_RING_ENTRY_RESERVED1_SHIFT 1 + union event_ring_data data; +}; + + + + + +/* + * Multi function mode + */ +enum mf_mode +{ + ERROR_MODE /* Unsupported mode */, + MF_OVLAN /* Multi function based on outer VLAN */, + MF_NPAR /* Multi function based on MAC address (NIC partitioning) */, + MAX_MF_MODE +}; + + +/* + * Per-protocol connection types + */ +enum protocol_type +{ + PROTOCOLID_ISCSI /* iSCSI */, + PROTOCOLID_FCOE /* FCoE */, + PROTOCOLID_ROCE /* RoCE */, + PROTOCOLID_CORE /* Core (light L2, slow path core) */, + PROTOCOLID_ETH /* Ethernet */, + PROTOCOLID_IWARP /* iWARP */, + PROTOCOLID_TOE /* TOE */, + PROTOCOLID_PREROCE /* Pre (tapeout) RoCE */, + PROTOCOLID_COMMON /* ProtocolCommon */, + PROTOCOLID_TCP /* TCP */, + MAX_PROTOCOL_TYPE +}; + + +/* + * Ustorm Queue Zone + */ +struct ustorm_eth_queue_zone +{ + struct coalescing_timeset int_coalescing_timeset /* Rx interrupt coalescing TimeSet */; + uint8_t reserved[3]; +}; + + +struct ustorm_queue_zone +{ + struct ustorm_eth_queue_zone eth; + struct common_queue_zone common; +}; + + + +/* + * status block structure + */ +struct cau_pi_entry +{ + __le32 prod; +#define CAU_PI_ENTRY_PROD_VAL_MASK 0xFFFF /* A per protocol indexPROD value. */ +#define CAU_PI_ENTRY_PROD_VAL_SHIFT 0 +#define CAU_PI_ENTRY_PI_TIMESET_MASK 0x7F /* This value determines the TimeSet that the PI is associated with */ +#define CAU_PI_ENTRY_PI_TIMESET_SHIFT 16 +#define CAU_PI_ENTRY_FSM_SEL_MASK 0x1 /* Select the FSM within the SB */ +#define CAU_PI_ENTRY_FSM_SEL_SHIFT 23 +#define CAU_PI_ENTRY_RESERVED_MASK 0xFF /* Select the FSM within the SB */ +#define CAU_PI_ENTRY_RESERVED_SHIFT 24 +}; + + +/* + * status block structure + */ +struct cau_sb_entry +{ + __le32 data; +#define CAU_SB_ENTRY_SB_PROD_MASK 0xFFFFFF /* The SB PROD index which is sent to the IGU. */ +#define CAU_SB_ENTRY_SB_PROD_SHIFT 0 +#define CAU_SB_ENTRY_STATE0_MASK 0xF /* RX state */ +#define CAU_SB_ENTRY_STATE0_SHIFT 24 +#define CAU_SB_ENTRY_STATE1_MASK 0xF /* TX state */ +#define CAU_SB_ENTRY_STATE1_SHIFT 28 + __le32 params; +#define CAU_SB_ENTRY_SB_TIMESET0_MASK 0x7F /* Indicates the RX TimeSet that this SB is associated with. */ +#define CAU_SB_ENTRY_SB_TIMESET0_SHIFT 0 +#define CAU_SB_ENTRY_SB_TIMESET1_MASK 0x7F /* Indicates the TX TimeSet that this SB is associated with. 
*/ +#define CAU_SB_ENTRY_SB_TIMESET1_SHIFT 7 +#define CAU_SB_ENTRY_TIMER_RES0_MASK 0x3 /* This value will determine the RX FSM timer resolution in ticks */ +#define CAU_SB_ENTRY_TIMER_RES0_SHIFT 14 +#define CAU_SB_ENTRY_TIMER_RES1_MASK 0x3 /* This value will determine the TX FSM timer resolution in ticks */ +#define CAU_SB_ENTRY_TIMER_RES1_SHIFT 16 +#define CAU_SB_ENTRY_VF_NUMBER_MASK 0xFF +#define CAU_SB_ENTRY_VF_NUMBER_SHIFT 18 +#define CAU_SB_ENTRY_VF_VALID_MASK 0x1 +#define CAU_SB_ENTRY_VF_VALID_SHIFT 26 +#define CAU_SB_ENTRY_PF_NUMBER_MASK 0xF +#define CAU_SB_ENTRY_PF_NUMBER_SHIFT 27 +#define CAU_SB_ENTRY_TPH_MASK 0x1 /* If set then indicates that the TPH STAG is equal to the SB number. Otherwise the STAG will be equal to all ones. */ +#define CAU_SB_ENTRY_TPH_SHIFT 31 +}; + + +/* + * core doorbell data + */ +struct core_db_data +{ + uint8_t params; +#define CORE_DB_DATA_DEST_MASK 0x3 /* destination of doorbell (use enum db_dest) */ +#define CORE_DB_DATA_DEST_SHIFT 0 +#define CORE_DB_DATA_AGG_CMD_MASK 0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */ +#define CORE_DB_DATA_AGG_CMD_SHIFT 2 +#define CORE_DB_DATA_BYPASS_EN_MASK 0x1 /* enable QM bypass */ +#define CORE_DB_DATA_BYPASS_EN_SHIFT 4 +#define CORE_DB_DATA_RESERVED_MASK 0x1 +#define CORE_DB_DATA_RESERVED_SHIFT 5 +#define CORE_DB_DATA_AGG_VAL_SEL_MASK 0x3 /* aggregative value selection */ +#define CORE_DB_DATA_AGG_VAL_SEL_SHIFT 6 + uint8_t agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */; + __le16 spq_prod; +}; + + +/* + * Enum of doorbell aggregative command selection + */ +enum db_agg_cmd_sel +{ + DB_AGG_CMD_NOP /* No operation */, + DB_AGG_CMD_SET /* Set the value */, + DB_AGG_CMD_ADD /* Add the value */, + DB_AGG_CMD_MAX /* Set max of current and new value */, + MAX_DB_AGG_CMD_SEL +}; + + +/* + * Enum of doorbell destination + */ +enum db_dest +{ + DB_DEST_XCM /* TX doorbell to XCM */, + DB_DEST_UCM /* RX doorbell to UCM */, + DB_DEST_TCM /* RX doorbell to TCM */, + DB_NUM_DESTINATIONS, + MAX_DB_DEST +}; + + +/* + * Enum of doorbell DPM types + */ +enum db_dpm_type +{ + DPM_LEGACY /* Legacy DPM- to Xstorm RAM */, + DPM_ROCE /* RoCE DPM- to NIG */, + DPM_L2_INLINE /* L2 DPM inline- to PBF, with packet data on doorbell */, + DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */, + MAX_DB_DPM_TYPE +}; + + +/* + * Structure for doorbell data, in L2 DPM mode, for the first doorbell in a DPM burst + */ +struct db_l2_dpm_data +{ + __le16 icid /* internal CID */; + __le16 bd_prod /* bd producer value to update */; + __le32 params; +#define DB_L2_DPM_DATA_SIZE_MASK 0x3F /* Size in QWORD-s of the DPM burst */ +#define DB_L2_DPM_DATA_SIZE_SHIFT 0 +#define DB_L2_DPM_DATA_DPM_TYPE_MASK 0x3 /* Type of DPM transaction (DPM_L2_INLINE or DPM_L2_BD) (use enum db_dpm_type) */ +#define DB_L2_DPM_DATA_DPM_TYPE_SHIFT 6 +#define DB_L2_DPM_DATA_NUM_BDS_MASK 0xFF /* number of BD-s */ +#define DB_L2_DPM_DATA_NUM_BDS_SHIFT 8 +#define DB_L2_DPM_DATA_PKT_SIZE_MASK 0x7FF /* size of the packet to be transmitted in bytes */ +#define DB_L2_DPM_DATA_PKT_SIZE_SHIFT 16 +#define DB_L2_DPM_DATA_RESERVED0_MASK 0x1 +#define DB_L2_DPM_DATA_RESERVED0_SHIFT 27 +#define DB_L2_DPM_DATA_SGE_NUM_MASK 0x7 /* In DPM_L2_BD mode: the number of SGE-s */ +#define DB_L2_DPM_DATA_SGE_NUM_SHIFT 28 +#define DB_L2_DPM_DATA_RESERVED1_MASK 0x1 +#define DB_L2_DPM_DATA_RESERVED1_SHIFT 31 +}; + + +/* + * Structure for SGE in a DPM doorbell of type DPM_L2_BD + */ +struct db_l2_dpm_sge +{ + struct regpair addr /* Single continuous 
buffer */;
+	__le16 nbytes /* Number of bytes in this BD. */;
+	__le16 bitfields;
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_MASK 0x1FF /* The TPH STAG index value */
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_SHIFT 0
+#define DB_L2_DPM_SGE_RESERVED0_MASK 0x3
+#define DB_L2_DPM_SGE_RESERVED0_SHIFT 9
+#define DB_L2_DPM_SGE_ST_VALID_MASK 0x1 /* Indicate if ST hint is requested or not */
+#define DB_L2_DPM_SGE_ST_VALID_SHIFT 11
+#define DB_L2_DPM_SGE_RESERVED1_MASK 0xF
+#define DB_L2_DPM_SGE_RESERVED1_SHIFT 12
+	__le32 reserved2;
+};
+
+
+/*
+ * Structure for doorbell address, in legacy mode
+ */
+struct db_legacy_addr
+{
+	__le32 addr;
+#define DB_LEGACY_ADDR_RESERVED0_MASK 0x3
+#define DB_LEGACY_ADDR_RESERVED0_SHIFT 0
+#define DB_LEGACY_ADDR_DEMS_MASK 0x7 /* doorbell extraction mode specifier - 0 if not used */
+#define DB_LEGACY_ADDR_DEMS_SHIFT 2
+#define DB_LEGACY_ADDR_ICID_MASK 0x7FFFFFF /* internal CID */
+#define DB_LEGACY_ADDR_ICID_SHIFT 5
+};
+
+
+/*
+ * Structure for doorbell address, in PWM mode
+ */
+struct db_pwm_addr
+{
+	__le32 addr;
+#define DB_PWM_ADDR_RESERVED0_MASK 0x7
+#define DB_PWM_ADDR_RESERVED0_SHIFT 0
+#define DB_PWM_ADDR_OFFSET_MASK 0x7F /* Offset in PWM address space */
+#define DB_PWM_ADDR_OFFSET_SHIFT 3
+#define DB_PWM_ADDR_WID_MASK 0x3 /* Window ID */
+#define DB_PWM_ADDR_WID_SHIFT 10
+#define DB_PWM_ADDR_DPI_MASK 0xFFFF /* Doorbell page ID */
+#define DB_PWM_ADDR_DPI_SHIFT 12
+#define DB_PWM_ADDR_RESERVED1_MASK 0xF
+#define DB_PWM_ADDR_RESERVED1_SHIFT 28
+};
+
+
+/*
+ * Parameters to RoCE firmware, passed in EDPM doorbell
+ */
+struct db_roce_dpm_params
+{
+	__le32 params;
+#define DB_ROCE_DPM_PARAMS_SIZE_MASK 0x3F /* Size in QWORD-s of the DPM burst */
+#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT 0
+#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK 0x3 /* Type of DPM transaction (DPM_ROCE) (use enum db_dpm_type) */
+#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT 6
+#define DB_ROCE_DPM_PARAMS_OPCODE_MASK 0xFF /* opcode for RoCE operation */
+#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT 8
+#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK 0x7FF /* the size of the WQE payload in bytes */
+#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT 16
+#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK 0x1
+#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT 27
+#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK 0x1 /* RoCE completion flag */
+#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_ROCE_DPM_PARAMS_S_FLG_MASK 0x1 /* RoCE S flag */
+#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT 29
+#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK 0x3
+#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT 30
+};
+
+/*
+ * Structure for doorbell data, in RoCE DPM mode, for the first doorbell in a DPM burst
+ */
+struct db_roce_dpm_data
+{
+	__le16 icid /* internal CID */;
+	__le16 prod_val /* aggregated value to update */;
+	struct db_roce_dpm_params params /* parameters passed to RoCE firmware */;
+};
+
+
+
+/*
+ * IGU interrupt command
+ */
+enum igu_int_cmd
+{
+	IGU_INT_ENABLE=0,
+	IGU_INT_DISABLE=1,
+	IGU_INT_NOP=2,
+	IGU_INT_NOP2=3,
+	MAX_IGU_INT_CMD
+};
+
+
+/*
+ * IGU producer or consumer update command
+ */
+struct igu_prod_cons_update
+{
+	__le32 sb_id_and_flags;
+#define IGU_PROD_CONS_UPDATE_SB_INDEX_MASK 0xFFFFFF
+#define IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT 0
+#define IGU_PROD_CONS_UPDATE_UPDATE_FLAG_MASK 0x1
+#define IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT 24
+#define IGU_PROD_CONS_UPDATE_ENABLE_INT_MASK 0x3 /* interrupt enable/disable/nop (use enum igu_int_cmd) */
+#define IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT 25
+#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_MASK 0x1 /* (use enum igu_seg_access) */
+#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT 27
+#define IGU_PROD_CONS_UPDATE_TIMER_MASK_MASK 0x1
+#define IGU_PROD_CONS_UPDATE_TIMER_MASK_SHIFT 28
+#define IGU_PROD_CONS_UPDATE_RESERVED0_MASK 0x3
+#define IGU_PROD_CONS_UPDATE_RESERVED0_SHIFT 29
+#define IGU_PROD_CONS_UPDATE_COMMAND_TYPE_MASK 0x1 /* must always be cleared (use enum command_type_bit) */
+#define IGU_PROD_CONS_UPDATE_COMMAND_TYPE_SHIFT 31
+	__le32 reserved1;
+};
+
+
+/*
+ * IGU segments access for default status block only
+ */
+enum igu_seg_access
+{
+	IGU_SEG_ACCESS_REG=0,
+	IGU_SEG_ACCESS_ATTN=1,
+	MAX_IGU_SEG_ACCESS
+};
+
+
+/*
+ * Enumeration for L3 type field of parsing_and_err_flags_union. L3Type: 0 - unknown (not IP), 1 - IPv4, 2 - IPv6 (this field can be filled according to the last-ethertype)
+ */
+enum l3_type
+{
+	e_l3Type_unknown,
+	e_l3Type_ipv4,
+	e_l3Type_ipv6,
+	MAX_L3_TYPE
+};
+
+
+/*
+ * Enumeration for l4Protocol field of parsing_and_err_flags_union. L4-protocol: 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment and it's not the first fragment, the protocol-type should be set to none.
+ */
+enum l4_protocol
+{
+	e_l4Protocol_none,
+	e_l4Protocol_tcp,
+	e_l4Protocol_udp,
+	MAX_L4_PROTOCOL
+};
+
+
+/*
+ * Parsing and error flags field.
+ */
+struct parsing_and_err_flags
+{
+	__le16 flags;
+#define PARSING_AND_ERR_FLAGS_L3TYPE_MASK 0x3 /* L3Type: 0 - unknown (not IP), 1 - IPv4, 2 - IPv6 (this field can be filled according to the last-ethertype) (use enum l3_type) */
+#define PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT 0
+#define PARSING_AND_ERR_FLAGS_L4PROTOCOL_MASK 0x3 /* L4-protocol: 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment and it's not the first fragment, the protocol-type should be set to none. (use enum l4_protocol) */
+#define PARSING_AND_ERR_FLAGS_L4PROTOCOL_SHIFT 2
+#define PARSING_AND_ERR_FLAGS_IPV4FRAG_MASK 0x1 /* Set if the packet is an IPv4 fragment. */
+#define PARSING_AND_ERR_FLAGS_IPV4FRAG_SHIFT 4
+#define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_MASK 0x1 /* Set if VLAN tag exists. Invalid if tunnel type is IP GRE or IP GENEVE. */
+#define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_SHIFT 5
+#define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK 0x1 /* Set if L4 checksum was calculated. */
+#define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT 6
+#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_MASK 0x1 /* Set for PTP packet. */
+#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_SHIFT 7
+#define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_MASK 0x1 /* Set if PTP timestamp recorded. */
+#define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_SHIFT 8
+#define PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK 0x1 /* Set if either version-mismatch or hdr-len-error or ipv4-cksm is set or ipv6 ver mismatch */
+#define PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT 9
+#define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK 0x1 /* Set if L4 checksum validation failed. Valid only if L4 checksum was calculated. */
+#define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT 10
+#define PARSING_AND_ERR_FLAGS_TUNNELEXIST_MASK 0x1 /* Set if GRE/VXLAN/GENEVE tunnel detected. */
+#define PARSING_AND_ERR_FLAGS_TUNNELEXIST_SHIFT 11
+#define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_MASK 0x1 /* Set if VLAN tag exists in tunnel header.
*/ +#define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_SHIFT 12 +#define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_MASK 0x1 /* Set if either tunnel-ipv4-version-mismatch or tunnel-ipv4-hdr-len-error or tunnel-ipv4-cksm is set or tunneling ipv6 ver mismatch */ +#define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_SHIFT 13 +#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_MASK 0x1 /* Set if GRE or VXLAN/GENEVE UDP checksum was calculated. */ +#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_SHIFT 14 +#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_MASK 0x1 /* Set if tunnel L4 checksum validation failed. Valid only if tunnel L4 checksum was calculated. */ +#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_SHIFT 15 +}; + + +/* + * Pb context + */ +struct pb_context +{ + __le32 crc[4]; +}; + + +/* + * Concrete Function ID. + */ +struct pxp_concrete_fid +{ + __le16 fid; +#define PXP_CONCRETE_FID_PFID_MASK 0xF /* Parent PFID */ +#define PXP_CONCRETE_FID_PFID_SHIFT 0 +#define PXP_CONCRETE_FID_PORT_MASK 0x3 /* port number */ +#define PXP_CONCRETE_FID_PORT_SHIFT 4 +#define PXP_CONCRETE_FID_PATH_MASK 0x1 /* path number */ +#define PXP_CONCRETE_FID_PATH_SHIFT 6 +#define PXP_CONCRETE_FID_VFVALID_MASK 0x1 +#define PXP_CONCRETE_FID_VFVALID_SHIFT 7 +#define PXP_CONCRETE_FID_VFID_MASK 0xFF +#define PXP_CONCRETE_FID_VFID_SHIFT 8 +}; + + +/* + * Concrete Function ID. + */ +struct pxp_pretend_concrete_fid +{ + __le16 fid; +#define PXP_PRETEND_CONCRETE_FID_PFID_MASK 0xF /* Parent PFID */ +#define PXP_PRETEND_CONCRETE_FID_PFID_SHIFT 0 +#define PXP_PRETEND_CONCRETE_FID_RESERVED_MASK 0x7 /* port number. Only when part of ME register. */ +#define PXP_PRETEND_CONCRETE_FID_RESERVED_SHIFT 4 +#define PXP_PRETEND_CONCRETE_FID_VFVALID_MASK 0x1 +#define PXP_PRETEND_CONCRETE_FID_VFVALID_SHIFT 7 +#define PXP_PRETEND_CONCRETE_FID_VFID_MASK 0xFF +#define PXP_PRETEND_CONCRETE_FID_VFID_SHIFT 8 +}; + +/* + * Function ID. + */ +union pxp_pretend_fid +{ + struct pxp_pretend_concrete_fid concrete_fid; + __le16 opaque_fid; +}; + +/* + * Pxp Pretend Command Register. + */ +struct pxp_pretend_cmd +{ + union pxp_pretend_fid fid; + __le16 control; +#define PXP_PRETEND_CMD_PATH_MASK 0x1 +#define PXP_PRETEND_CMD_PATH_SHIFT 0 +#define PXP_PRETEND_CMD_USE_PORT_MASK 0x1 +#define PXP_PRETEND_CMD_USE_PORT_SHIFT 1 +#define PXP_PRETEND_CMD_PORT_MASK 0x3 +#define PXP_PRETEND_CMD_PORT_SHIFT 2 +#define PXP_PRETEND_CMD_RESERVED0_MASK 0xF +#define PXP_PRETEND_CMD_RESERVED0_SHIFT 4 +#define PXP_PRETEND_CMD_RESERVED1_MASK 0xF +#define PXP_PRETEND_CMD_RESERVED1_SHIFT 8 +#define PXP_PRETEND_CMD_PRETEND_PATH_MASK 0x1 /* is pretend mode? */ +#define PXP_PRETEND_CMD_PRETEND_PATH_SHIFT 12 +#define PXP_PRETEND_CMD_PRETEND_PORT_MASK 0x1 /* is pretend mode? */ +#define PXP_PRETEND_CMD_PRETEND_PORT_SHIFT 13 +#define PXP_PRETEND_CMD_PRETEND_FUNCTION_MASK 0x1 /* is pretend mode? */ +#define PXP_PRETEND_CMD_PRETEND_FUNCTION_SHIFT 14 +#define PXP_PRETEND_CMD_IS_CONCRETE_MASK 0x1 /* is fid concrete? */ +#define PXP_PRETEND_CMD_IS_CONCRETE_SHIFT 15 +}; + + + + +/* + * PTT Record in PXP Admin Window. + */ +struct pxp_ptt_entry +{ + __le32 offset; +#define PXP_PTT_ENTRY_OFFSET_MASK 0x7FFFFF +#define PXP_PTT_ENTRY_OFFSET_SHIFT 0 +#define PXP_PTT_ENTRY_RESERVED0_MASK 0x1FF +#define PXP_PTT_ENTRY_RESERVED0_SHIFT 23 + struct pxp_pretend_cmd pretend; +}; + + +/* + * VF Zone A Permission Register. 
+ */ +struct pxp_vf_zone_a_permission +{ + __le32 control; +#define PXP_VF_ZONE_A_PERMISSION_VFID_MASK 0xFF +#define PXP_VF_ZONE_A_PERMISSION_VFID_SHIFT 0 +#define PXP_VF_ZONE_A_PERMISSION_VALID_MASK 0x1 +#define PXP_VF_ZONE_A_PERMISSION_VALID_SHIFT 8 +#define PXP_VF_ZONE_A_PERMISSION_RESERVED0_MASK 0x7F +#define PXP_VF_ZONE_A_PERMISSION_RESERVED0_SHIFT 9 +#define PXP_VF_ZONE_A_PERMISSION_RESERVED1_MASK 0xFFFF +#define PXP_VF_ZONE_A_PERMISSION_RESERVED1_SHIFT 16 +}; + + +/* + * Rdif context + */ +struct rdif_task_context +{ + __le32 initialRefTag; + __le16 appTagValue; + __le16 appTagMask; + uint8_t flags0; +#define RDIF_TASK_CONTEXT_IGNOREAPPTAG_MASK 0x1 +#define RDIF_TASK_CONTEXT_IGNOREAPPTAG_SHIFT 0 +#define RDIF_TASK_CONTEXT_INITIALREFTAGVALID_MASK 0x1 +#define RDIF_TASK_CONTEXT_INITIALREFTAGVALID_SHIFT 1 +#define RDIF_TASK_CONTEXT_HOSTGUARDTYPE_MASK 0x1 /* 0 = IP checksum, 1 = CRC */ +#define RDIF_TASK_CONTEXT_HOSTGUARDTYPE_SHIFT 2 +#define RDIF_TASK_CONTEXT_SETERRORWITHEOP_MASK 0x1 +#define RDIF_TASK_CONTEXT_SETERRORWITHEOP_SHIFT 3 +#define RDIF_TASK_CONTEXT_PROTECTIONTYPE_MASK 0x3 /* 1/2/3 - Protection Type */ +#define RDIF_TASK_CONTEXT_PROTECTIONTYPE_SHIFT 4 +#define RDIF_TASK_CONTEXT_CRC_SEED_MASK 0x1 /* 0=0x0000, 1=0xffff */ +#define RDIF_TASK_CONTEXT_CRC_SEED_SHIFT 6 +#define RDIF_TASK_CONTEXT_KEEPREFTAGCONST_MASK 0x1 /* Keep reference tag constant */ +#define RDIF_TASK_CONTEXT_KEEPREFTAGCONST_SHIFT 7 + uint8_t partialDifData[7]; + __le16 partialCrcValue; + __le16 partialChecksumValue; + __le32 offsetInIO; + __le16 flags1; +#define RDIF_TASK_CONTEXT_VALIDATEGUARD_MASK 0x1 +#define RDIF_TASK_CONTEXT_VALIDATEGUARD_SHIFT 0 +#define RDIF_TASK_CONTEXT_VALIDATEAPPTAG_MASK 0x1 +#define RDIF_TASK_CONTEXT_VALIDATEAPPTAG_SHIFT 1 +#define RDIF_TASK_CONTEXT_VALIDATEREFTAG_MASK 0x1 +#define RDIF_TASK_CONTEXT_VALIDATEREFTAG_SHIFT 2 +#define RDIF_TASK_CONTEXT_FORWARDGUARD_MASK 0x1 +#define RDIF_TASK_CONTEXT_FORWARDGUARD_SHIFT 3 +#define RDIF_TASK_CONTEXT_FORWARDAPPTAG_MASK 0x1 +#define RDIF_TASK_CONTEXT_FORWARDAPPTAG_SHIFT 4 +#define RDIF_TASK_CONTEXT_FORWARDREFTAG_MASK 0x1 +#define RDIF_TASK_CONTEXT_FORWARDREFTAG_SHIFT 5 +#define RDIF_TASK_CONTEXT_INTERVALSIZE_MASK 0x7 /* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */ +#define RDIF_TASK_CONTEXT_INTERVALSIZE_SHIFT 6 +#define RDIF_TASK_CONTEXT_HOSTINTERFACE_MASK 0x3 /* 0=None, 1=DIF, 2=DIX */ +#define RDIF_TASK_CONTEXT_HOSTINTERFACE_SHIFT 9 +#define RDIF_TASK_CONTEXT_DIFBEFOREDATA_MASK 0x1 /* DIF tag right at the beginning of DIF interval */ +#define RDIF_TASK_CONTEXT_DIFBEFOREDATA_SHIFT 11 +#define RDIF_TASK_CONTEXT_RESERVED0_MASK 0x1 +#define RDIF_TASK_CONTEXT_RESERVED0_SHIFT 12 +#define RDIF_TASK_CONTEXT_NETWORKINTERFACE_MASK 0x1 /* 0=None, 1=DIF */ +#define RDIF_TASK_CONTEXT_NETWORKINTERFACE_SHIFT 13 +#define RDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_MASK 0x1 /* Forward application tag with mask */ +#define RDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_SHIFT 14 +#define RDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_MASK 0x1 /* Forward reference tag with mask */ +#define RDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_SHIFT 15 + __le16 state; +#define RDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFT_MASK 0xF +#define RDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFT_SHIFT 0 +#define RDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFT_MASK 0xF +#define RDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFT_SHIFT 4 +#define RDIF_TASK_CONTEXT_ERRORINIO_MASK 0x1 +#define RDIF_TASK_CONTEXT_ERRORINIO_SHIFT 8 +#define RDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_MASK 0x1 +#define RDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_SHIFT 9 +#define 
RDIF_TASK_CONTEXT_REFTAGMASK_MASK 0xF /* mask for reference tag handling */
+#define RDIF_TASK_CONTEXT_REFTAGMASK_SHIFT 10
+#define RDIF_TASK_CONTEXT_RESERVED1_MASK 0x3
+#define RDIF_TASK_CONTEXT_RESERVED1_SHIFT 14
+	__le32 reserved2;
+};
+
+
+
+/*
+ * RSS hash type
+ */
+enum rss_hash_type
+{
+	RSS_HASH_TYPE_DEFAULT=0,
+	RSS_HASH_TYPE_IPV4=1,
+	RSS_HASH_TYPE_TCP_IPV4=2,
+	RSS_HASH_TYPE_IPV6=3,
+	RSS_HASH_TYPE_TCP_IPV6=4,
+	RSS_HASH_TYPE_UDP_IPV4=5,
+	RSS_HASH_TYPE_UDP_IPV6=6,
+	MAX_RSS_HASH_TYPE
+};
+
+
+/*
+ * status block structure
+ */
+struct status_block
+{
+	__le16 pi_array[PIS_PER_SB];
+	__le32 sb_num;
+#define STATUS_BLOCK_SB_NUM_MASK 0x1FF
+#define STATUS_BLOCK_SB_NUM_SHIFT 0
+#define STATUS_BLOCK_ZERO_PAD_MASK 0x7F
+#define STATUS_BLOCK_ZERO_PAD_SHIFT 9
+#define STATUS_BLOCK_ZERO_PAD2_MASK 0xFFFF
+#define STATUS_BLOCK_ZERO_PAD2_SHIFT 16
+	__le32 prod_index;
+#define STATUS_BLOCK_PROD_INDEX_MASK 0xFFFFFF
+#define STATUS_BLOCK_PROD_INDEX_SHIFT 0
+#define STATUS_BLOCK_ZERO_PAD3_MASK 0xFF
+#define STATUS_BLOCK_ZERO_PAD3_SHIFT 24
+};
+
+
+/*
+ * Tdif context
+ */
+struct tdif_task_context
+{
+	__le32 initialRefTag;
+	__le16 appTagValue;
+	__le16 appTagMask;
+	__le16 partialCrcValueB;
+	__le16 partialChecksumValueB;
+	__le16 stateB;
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTB_MASK 0xF
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTB_SHIFT 0
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTB_MASK 0xF
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTB_SHIFT 4
+#define TDIF_TASK_CONTEXT_ERRORINIOB_MASK 0x1
+#define TDIF_TASK_CONTEXT_ERRORINIOB_SHIFT 8
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_MASK 0x1
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_SHIFT 9
+#define TDIF_TASK_CONTEXT_RESERVED0_MASK 0x3F
+#define TDIF_TASK_CONTEXT_RESERVED0_SHIFT 10
+	uint8_t reserved1;
+	uint8_t flags0;
+#define TDIF_TASK_CONTEXT_IGNOREAPPTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_IGNOREAPPTAG_SHIFT 0
+#define TDIF_TASK_CONTEXT_INITIALREFTAGVALID_MASK 0x1
+#define TDIF_TASK_CONTEXT_INITIALREFTAGVALID_SHIFT 1
+#define TDIF_TASK_CONTEXT_HOSTGUARDTYPE_MASK 0x1 /* 0 = IP checksum, 1 = CRC */
+#define TDIF_TASK_CONTEXT_HOSTGUARDTYPE_SHIFT 2
+#define TDIF_TASK_CONTEXT_SETERRORWITHEOP_MASK 0x1
+#define TDIF_TASK_CONTEXT_SETERRORWITHEOP_SHIFT 3
+#define TDIF_TASK_CONTEXT_PROTECTIONTYPE_MASK 0x3 /* 1/2/3 - Protection Type */
+#define TDIF_TASK_CONTEXT_PROTECTIONTYPE_SHIFT 4
+#define TDIF_TASK_CONTEXT_CRC_SEED_MASK 0x1 /* 0=0x0000, 1=0xffff */
+#define TDIF_TASK_CONTEXT_CRC_SEED_SHIFT 6
+#define TDIF_TASK_CONTEXT_RESERVED2_MASK 0x1
+#define TDIF_TASK_CONTEXT_RESERVED2_SHIFT 7
+	__le32 flags1;
+#define TDIF_TASK_CONTEXT_VALIDATEGUARD_MASK 0x1
+#define TDIF_TASK_CONTEXT_VALIDATEGUARD_SHIFT 0
+#define TDIF_TASK_CONTEXT_VALIDATEAPPTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_VALIDATEAPPTAG_SHIFT 1
+#define TDIF_TASK_CONTEXT_VALIDATEREFTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_VALIDATEREFTAG_SHIFT 2
+#define TDIF_TASK_CONTEXT_FORWARDGUARD_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDGUARD_SHIFT 3
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAG_SHIFT 4
+#define TDIF_TASK_CONTEXT_FORWARDREFTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDREFTAG_SHIFT 5
+#define TDIF_TASK_CONTEXT_INTERVALSIZE_MASK 0x7 /* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define TDIF_TASK_CONTEXT_INTERVALSIZE_SHIFT 6
+#define TDIF_TASK_CONTEXT_HOSTINTERFACE_MASK 0x3 /* 0=None, 1=DIF, 2=DIX */
+#define TDIF_TASK_CONTEXT_HOSTINTERFACE_SHIFT 9
+#define TDIF_TASK_CONTEXT_DIFBEFOREDATA_MASK 0x1 /* DIF tag right at the beginning of DIF interval */
+#define TDIF_TASK_CONTEXT_DIFBEFOREDATA_SHIFT 11
+#define TDIF_TASK_CONTEXT_RESERVED3_MASK 0x1 /* reserved */
+#define TDIF_TASK_CONTEXT_RESERVED3_SHIFT 12
+#define TDIF_TASK_CONTEXT_NETWORKINTERFACE_MASK 0x1 /* 0=None, 1=DIF */
+#define TDIF_TASK_CONTEXT_NETWORKINTERFACE_SHIFT 13
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTA_MASK 0xF
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTA_SHIFT 14
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTA_MASK 0xF
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTA_SHIFT 18
+#define TDIF_TASK_CONTEXT_ERRORINIOA_MASK 0x1
+#define TDIF_TASK_CONTEXT_ERRORINIOA_SHIFT 22
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOWA_MASK 0x1
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOWA_SHIFT 23
+#define TDIF_TASK_CONTEXT_REFTAGMASK_MASK 0xF /* mask for reference tag handling */
+#define TDIF_TASK_CONTEXT_REFTAGMASK_SHIFT 24
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_MASK 0x1 /* Forward application tag with mask */
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_SHIFT 28
+#define TDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_MASK 0x1 /* Forward reference tag with mask */
+#define TDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_SHIFT 29
+#define TDIF_TASK_CONTEXT_KEEPREFTAGCONST_MASK 0x1 /* Keep reference tag constant */
+#define TDIF_TASK_CONTEXT_KEEPREFTAGCONST_SHIFT 30
+#define TDIF_TASK_CONTEXT_RESERVED4_MASK 0x1
+#define TDIF_TASK_CONTEXT_RESERVED4_SHIFT 31
+	__le32 offsetInIOB;
+	__le16 partialCrcValueA;
+	__le16 partialChecksumValueA;
+	__le32 offsetInIOA;
+	uint8_t partialDifDataA[8];
+	uint8_t partialDifDataB[8];
+};
+
+
+/*
+ * Timers context
+ */
+struct timers_context
+{
+	__le32 logical_client_0;
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK 0xFFFFFFF /* Expiration time of logical client 0 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT 0
+#define TIMERS_CONTEXT_VALIDLC0_MASK 0x1 /* Valid bit of logical client 0 */
+#define TIMERS_CONTEXT_VALIDLC0_SHIFT 28
+#define TIMERS_CONTEXT_ACTIVELC0_MASK 0x1 /* Active bit of logical client 0 */
+#define TIMERS_CONTEXT_ACTIVELC0_SHIFT 29
+#define TIMERS_CONTEXT_RESERVED0_MASK 0x3
+#define TIMERS_CONTEXT_RESERVED0_SHIFT 30
+	__le32 logical_client_1;
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK 0xFFFFFFF /* Expiration time of logical client 1 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT 0
+#define TIMERS_CONTEXT_VALIDLC1_MASK 0x1 /* Valid bit of logical client 1 */
+#define TIMERS_CONTEXT_VALIDLC1_SHIFT 28
+#define TIMERS_CONTEXT_ACTIVELC1_MASK 0x1 /* Active bit of logical client 1 */
+#define TIMERS_CONTEXT_ACTIVELC1_SHIFT 29
+#define TIMERS_CONTEXT_RESERVED1_MASK 0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT 30
+	__le32 logical_client_2;
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK 0xFFFFFFF /* Expiration time of logical client 2 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT 0
+#define TIMERS_CONTEXT_VALIDLC2_MASK 0x1 /* Valid bit of logical client 2 */
+#define TIMERS_CONTEXT_VALIDLC2_SHIFT 28
+#define TIMERS_CONTEXT_ACTIVELC2_MASK 0x1 /* Active bit of logical client 2 */
+#define TIMERS_CONTEXT_ACTIVELC2_SHIFT 29
+#define TIMERS_CONTEXT_RESERVED2_MASK 0x3
+#define TIMERS_CONTEXT_RESERVED2_SHIFT 30
+	__le32 host_expiration_fields;
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK 0xFFFFFFF /* Expiration time on host (closest one) */
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK 0x1 /* Valid bit of host expiration */
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
+#define TIMERS_CONTEXT_RESERVED3_MASK 0x7
+#define
TIMERS_CONTEXT_RESERVED3_SHIFT 29 +}; + + +/* + * Enum for next_protocol field of tunnel_parsing_flags + */ +enum tunnel_next_protocol +{ + e_unknown=0, + e_l2=1, + e_ipv4=2, + e_ipv6=3, + MAX_TUNNEL_NEXT_PROTOCOL +}; + +#endif /* __COMMON_HSI__ */ diff --git a/providers/qedr/qelr_hsi.h b/providers/qedr/qelr_hsi.h new file mode 100644 index 0000000..8eaf183 --- /dev/null +++ b/providers/qedr/qelr_hsi.h @@ -0,0 +1,67 @@ +/* + * Copyright (c) 2015-2016 QLogic Corporation + * + * This software is available to you under a choice of one of two + * licenses. You may choose to be licensed under the terms of the GNU + * General Public License (GPL) Version 2, available from the file + * COPYING in the main directory of this source tree, or the + * OpenIB.org BSD license below: + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and /or other materials + * provided with the distribution. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +#ifndef __QED_HSI_ROCE__ +#define __QED_HSI_ROCE__ +/********************************/ +/* Add include to common target */ +/********************************/ +#include "common_hsi.h" + +/************************************************************************/ +/* Add include to common roce target for both eCore and protocol roce driver */ +/************************************************************************/ +#include "roce_common.h" +/************************************************************************/ +/* Add include to qed hsi rdma target for both roce and iwarp qed driver */ +/************************************************************************/ +#include "qelr_hsi_rdma.h" + +/* Affiliated asynchronous events / errors enumeration */ +enum roce_async_events_type +{ + ROCE_ASYNC_EVENT_NONE, + ROCE_ASYNC_EVENT_COMM_EST, + ROCE_ASYNC_EVENT_SQ_DRAINED, + ROCE_ASYNC_EVENT_SRQ_LIMIT, + ROCE_ASYNC_EVENT_LAST_WQE_REACHED, + ROCE_ASYNC_EVENT_CQ_ERR, + ROCE_ASYNC_EVENT_LOCAL_INVALID_REQUEST_ERR, + ROCE_ASYNC_EVENT_LOCAL_CATASTROPHIC_ERR, + ROCE_ASYNC_EVENT_LOCAL_ACCESS_ERR, + ROCE_ASYNC_EVENT_QP_CATASTROPHIC_ERR, + ROCE_ASYNC_EVENT_CQ_OVERFLOW_ERR, + ROCE_ASYNC_EVENT_SRQ_EMPTY, + MAX_ROCE_ASYNC_EVENTS_TYPE +}; + +#endif /* __QED_HSI_ROCE__ */ diff --git a/providers/qedr/qelr_hsi_rdma.h b/providers/qedr/qelr_hsi_rdma.h new file mode 100644 index 0000000..c18ce86 --- /dev/null +++ b/providers/qedr/qelr_hsi_rdma.h @@ -0,0 +1,914 @@ +/* + * Copyright (c) 2015-2016 QLogic Corporation + * + * This software is available to you under a choice of one of two + * licenses. 
You may choose to be licensed under the terms of the GNU + * General Public License (GPL) Version 2, available from the file + * COPYING in the main directory of this source tree, or the + * OpenIB.org BSD license below: + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and /or other materials + * provided with the distribution. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +#ifndef __QED_HSI_RDMA__ +#define __QED_HSI_RDMA__ +/************************************************************************/ +/* Add include to common rdma target for both eCore and protocol rdma driver */ +/************************************************************************/ +#include "rdma_common.h" + +/* + * rdma completion notification queue element + */ +struct rdma_cnqe +{ + struct regpair cq_handle; +}; + + +struct rdma_cqe_responder +{ + struct regpair srq_wr_id; + struct regpair qp_handle; + __le32 imm_data_or_inv_r_Key /* immediate data in case imm_flg is set, or invalidated r_key in case inv_flg is set */; + __le32 length; + __le32 imm_data_hi /* High bytes of immediate data in case imm_flg is set in iWARP only */; + __le16 rq_cons /* Valid only when status is WORK_REQUEST_FLUSHED_ERR. Indicates an aggregative flush on all posted RQ WQEs until the reported rq_cons. */; + uint8_t flags; +#define RDMA_CQE_RESPONDER_TOGGLE_BIT_MASK 0x1 /* indicates a valid completion written by FW. FW toggles this bit each time it finishes producing all PBL entries */ +#define RDMA_CQE_RESPONDER_TOGGLE_BIT_SHIFT 0 +#define RDMA_CQE_RESPONDER_TYPE_MASK 0x3 /* (use enum rdma_cqe_type) */ +#define RDMA_CQE_RESPONDER_TYPE_SHIFT 1 +#define RDMA_CQE_RESPONDER_INV_FLG_MASK 0x1 /* r_key invalidated indicator */ +#define RDMA_CQE_RESPONDER_INV_FLG_SHIFT 3 +#define RDMA_CQE_RESPONDER_IMM_FLG_MASK 0x1 /* immediate data indicator */ +#define RDMA_CQE_RESPONDER_IMM_FLG_SHIFT 4 +#define RDMA_CQE_RESPONDER_RDMA_FLG_MASK 0x1 /* 1=this CQE relates to an RDMA Write. 0=Send. */ +#define RDMA_CQE_RESPONDER_RDMA_FLG_SHIFT 5 +#define RDMA_CQE_RESPONDER_RESERVED2_MASK 0x3 +#define RDMA_CQE_RESPONDER_RESERVED2_SHIFT 6 + uint8_t status; +};
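The *_MASK/*_SHIFT pairs used throughout these headers are meant to be consumed through token-pasting field accessors. A minimal sketch of such helpers, in the usual qed style; the GET_FIELD/SET_FIELD names here are assumptions of this sketch, not definitions carried by this patch:

/* Assumed helpers, mirroring the NAME##_MASK / NAME##_SHIFT convention. */
#define GET_FIELD(value, name) \
	(((value) >> name##_SHIFT) & name##_MASK)
#define SET_FIELD(value, name, val) \
	((value) = ((value) & ~(name##_MASK << name##_SHIFT)) | \
		   (((val) & name##_MASK) << name##_SHIFT))

/* e.g. extracting the CQE type from the responder flags above:
 *	int type = GET_FIELD(resp->flags, RDMA_CQE_RESPONDER_TYPE);
 */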
+ +struct rdma_cqe_requester +{ + __le16 sq_cons; + __le16 reserved0; + __le32 reserved1; + struct regpair qp_handle; + struct regpair reserved2; + __le32 reserved3; + __le16 reserved4; + uint8_t flags; +#define RDMA_CQE_REQUESTER_TOGGLE_BIT_MASK 0x1 /* indicates a valid completion written by FW. FW toggles this bit each time it finishes producing all PBL entries */ +#define RDMA_CQE_REQUESTER_TOGGLE_BIT_SHIFT 0 +#define RDMA_CQE_REQUESTER_TYPE_MASK 0x3 /* (use enum rdma_cqe_type) */ +#define RDMA_CQE_REQUESTER_TYPE_SHIFT 1 +#define RDMA_CQE_REQUESTER_RESERVED5_MASK 0x1F +#define RDMA_CQE_REQUESTER_RESERVED5_SHIFT 3 + uint8_t status; +}; + +struct rdma_cqe_common +{ + struct regpair reserved0; + struct regpair qp_handle; + __le16 reserved1[7]; + uint8_t flags; +#define RDMA_CQE_COMMON_TOGGLE_BIT_MASK 0x1 /* indicates a valid completion written by FW. FW toggles this bit each time it finishes producing all PBL entries */ +#define RDMA_CQE_COMMON_TOGGLE_BIT_SHIFT 0 +#define RDMA_CQE_COMMON_TYPE_MASK 0x3 /* (use enum rdma_cqe_type) */ +#define RDMA_CQE_COMMON_TYPE_SHIFT 1 +#define RDMA_CQE_COMMON_RESERVED2_MASK 0x1F +#define RDMA_CQE_COMMON_RESERVED2_SHIFT 3 + uint8_t status; +}; + +/* + * rdma completion queue element + */ +union rdma_cqe +{ + struct rdma_cqe_responder resp; + struct rdma_cqe_requester req; + struct rdma_cqe_common cmn; +}; + + + + +/* + * CQE requester status enumeration + */ +enum rdma_cqe_requester_status_enum +{ + RDMA_CQE_REQ_STS_OK, + RDMA_CQE_REQ_STS_BAD_RESPONSE_ERR, + RDMA_CQE_REQ_STS_LOCAL_LENGTH_ERR, + RDMA_CQE_REQ_STS_LOCAL_QP_OPERATION_ERR, + RDMA_CQE_REQ_STS_LOCAL_PROTECTION_ERR, + RDMA_CQE_REQ_STS_MEMORY_MGT_OPERATION_ERR, + RDMA_CQE_REQ_STS_REMOTE_INVALID_REQUEST_ERR, + RDMA_CQE_REQ_STS_REMOTE_ACCESS_ERR, + RDMA_CQE_REQ_STS_REMOTE_OPERATION_ERR, + RDMA_CQE_REQ_STS_RNR_NAK_RETRY_CNT_ERR, + RDMA_CQE_REQ_STS_TRANSPORT_RETRY_CNT_ERR, + RDMA_CQE_REQ_STS_WORK_REQUEST_FLUSHED_ERR, + MAX_RDMA_CQE_REQUESTER_STATUS_ENUM +}; + + + +/* + * CQE responder status enumeration + */ +enum rdma_cqe_responder_status_enum +{ + RDMA_CQE_RESP_STS_OK, + RDMA_CQE_RESP_STS_LOCAL_ACCESS_ERR, + RDMA_CQE_RESP_STS_LOCAL_LENGTH_ERR, + RDMA_CQE_RESP_STS_LOCAL_QP_OPERATION_ERR, + RDMA_CQE_RESP_STS_LOCAL_PROTECTION_ERR, + RDMA_CQE_RESP_STS_MEMORY_MGT_OPERATION_ERR, + RDMA_CQE_RESP_STS_REMOTE_INVALID_REQUEST_ERR, + RDMA_CQE_RESP_STS_WORK_REQUEST_FLUSHED_ERR, + MAX_RDMA_CQE_RESPONDER_STATUS_ENUM +}; + + +/* + * CQE type enumeration + */ +enum rdma_cqe_type +{ + RDMA_CQE_TYPE_REQUESTER, + RDMA_CQE_TYPE_RESPONDER_RQ, + RDMA_CQE_TYPE_RESPONDER_SRQ, + RDMA_CQE_TYPE_INVALID, + MAX_RDMA_CQE_TYPE +};
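Putting the CQE pieces together: a consumer first compares the toggle bit against the phase it expects for the current pass over the ring, and only then trusts the entry; the type field then selects the union member. A rough sketch, reusing the GET_FIELD helper sketched earlier, with the ring and phase bookkeeping assumed:

static int try_consume_cqe(union rdma_cqe *cqe, uint8_t expected_toggle)
{
	/* Entry not yet produced by FW for this pass over the ring. */
	if (GET_FIELD(cqe->cmn.flags, RDMA_CQE_COMMON_TOGGLE_BIT) !=
	    expected_toggle)
		return 0;

	switch (GET_FIELD(cqe->cmn.flags, RDMA_CQE_COMMON_TYPE)) {
	case RDMA_CQE_TYPE_REQUESTER:
		/* cqe->req.sq_cons: SQ consumer after this completion */
		break;
	case RDMA_CQE_TYPE_RESPONDER_RQ:
	case RDMA_CQE_TYPE_RESPONDER_SRQ:
		/* cqe->resp: length, imm/inv data, status */
		break;
	default:
		break;
	}
	return 1;
}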
+ + +/* + * DIF Block size options + */ +enum rdma_dif_block_size +{ + RDMA_DIF_BLOCK_512=0, + RDMA_DIF_BLOCK_4096=1, + MAX_RDMA_DIF_BLOCK_SIZE +}; + + +/* + * DIF CRC initial value + */ +enum rdma_dif_crc_seed +{ + RDMA_DIF_CRC_SEED_0000=0, + RDMA_DIF_CRC_SEED_FFFF=1, + MAX_RDMA_DIF_CRC_SEED +}; + + +/* + * RDMA DIF Error Result Structure + */ +struct rdma_dif_error_result +{ + __le32 error_intervals /* Total number of error intervals in the IO. */; + __le32 dif_error_1st_interval /* Number of the first interval that contained error. Set to 0xFFFFFFFF if error occurred in the Runt Block. */; + uint8_t flags; +#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_CRC_MASK 0x1 /* CRC error occurred. */ +#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_CRC_SHIFT 0 +#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_APP_TAG_MASK 0x1 /* App Tag error occurred. */ +#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_APP_TAG_SHIFT 1 +#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_REF_TAG_MASK 0x1 /* Ref Tag error occurred. */ +#define RDMA_DIF_ERROR_RESULT_DIF_ERROR_TYPE_REF_TAG_SHIFT 2 +#define RDMA_DIF_ERROR_RESULT_RESERVED0_MASK 0xF +#define RDMA_DIF_ERROR_RESULT_RESERVED0_SHIFT 3 +#define RDMA_DIF_ERROR_RESULT_TOGGLE_BIT_MASK 0x1 /* Used to indicate the structure is valid. Toggles each time an invalidate region is performed. */ +#define RDMA_DIF_ERROR_RESULT_TOGGLE_BIT_SHIFT 7 + uint8_t reserved1[55] /* Pad to 64 bytes to ensure efficient word line writing. */; +}; + + +/* + * DIF IO direction + */ +enum rdma_dif_io_direction_flg +{ + RDMA_DIF_DIR_RX=0, + RDMA_DIF_DIR_TX=1, + MAX_RDMA_DIF_IO_DIRECTION_FLG +}; + + +/* + * RDMA DIF Runt Result Structure + */ +struct rdma_dif_runt_result +{ + __le16 guard_tag /* CRC result of received IO. */; + __le16 reserved[3]; +}; + + +/* + * memory window type enumeration + */ +enum rdma_mw_type +{ + RDMA_MW_TYPE_1, + RDMA_MW_TYPE_2A, + MAX_RDMA_MW_TYPE +}; + + +struct rdma_rq_sge +{ + struct regpair addr; + __le32 length; + __le32 flags; +#define RDMA_RQ_SGE_L_KEY_MASK 0x3FFFFFF /* key of memory relating to this RQ */ +#define RDMA_RQ_SGE_L_KEY_SHIFT 0 +#define RDMA_RQ_SGE_NUM_SGES_MASK 0x7 /* first SGE - number of SGEs in this RQ WQE. Other SGEs - should be set to 0 */ +#define RDMA_RQ_SGE_NUM_SGES_SHIFT 26 +#define RDMA_RQ_SGE_RESERVED0_MASK 0x7 +#define RDMA_RQ_SGE_RESERVED0_SHIFT 29 +}; + + +struct rdma_sq_atomic_wqe +{ + __le32 reserved1; + __le32 length /* Total data length (8 bytes for Atomic) */; + __le32 xrc_srq /* Valid only when XRC is set for the QP */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_ATOMIC_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_ATOMIC_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_ATOMIC_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_ATOMIC_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_ATOMIC_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_ATOMIC_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_ATOMIC_WQE_SE_FLG_MASK 0x1 /* Don't care for atomic wqe */ +#define RDMA_SQ_ATOMIC_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_ATOMIC_WQE_INLINE_FLG_MASK 0x1 /* Should be 0 for atomic wqe */ +#define RDMA_SQ_ATOMIC_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_ATOMIC_WQE_DIF_ON_HOST_FLG_MASK 0x1 /* Should be 0 for atomic wqe */ +#define RDMA_SQ_ATOMIC_WQE_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_ATOMIC_WQE_RESERVED0_MASK 0x3 +#define RDMA_SQ_ATOMIC_WQE_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks including SGE */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; + struct regpair remote_va /* remote virtual address */; + __le32 r_key /* Remote key */; + __le32 reserved2; + struct regpair cmp_data /* Data to compare in case of ATOMIC_CMP_AND_SWAP */; + struct regpair swap_data /* Swap or add data */; +}; + + +/* + * First element (16 bytes) of atomic wqe + */ +struct rdma_sq_atomic_wqe_1st +{ + __le32 reserved1; + __le32 length /* Total data length (8 bytes for Atomic) */; + __le32 xrc_srq /* Valid only when XRC is set for the QP */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_ATOMIC_WQE_1ST_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_ATOMIC_WQE_1ST_COMP_FLG_SHIFT 0 +#define RDMA_SQ_ATOMIC_WQE_1ST_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be
completed before start processing this WQE */ +#define RDMA_SQ_ATOMIC_WQE_1ST_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_ATOMIC_WQE_1ST_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_ATOMIC_WQE_1ST_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_ATOMIC_WQE_1ST_SE_FLG_MASK 0x1 /* Don't care for atomic wqe */ +#define RDMA_SQ_ATOMIC_WQE_1ST_SE_FLG_SHIFT 3 +#define RDMA_SQ_ATOMIC_WQE_1ST_INLINE_FLG_MASK 0x1 /* Should be 0 for atomic wqe */ +#define RDMA_SQ_ATOMIC_WQE_1ST_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_ATOMIC_WQE_1ST_RESERVED0_MASK 0x7 +#define RDMA_SQ_ATOMIC_WQE_1ST_RESERVED0_SHIFT 5 + uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs. Set to number of SGEs + 1. */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +}; + + +/* + * Second element (16 bytes) of atomic wqe + */ +struct rdma_sq_atomic_wqe_2nd +{ + struct regpair remote_va /* remote virtual address */; + __le32 r_key /* Remote key */; + __le32 reserved2; +}; + + +/* + * Third element (16 bytes) of atomic wqe + */ +struct rdma_sq_atomic_wqe_3rd +{ + struct regpair cmp_data /* Data to compare in case of ATOMIC_CMP_AND_SWAP */; + struct regpair swap_data /* Swap or add data */; +}; + + +struct rdma_sq_bind_wqe +{ + struct regpair addr; + __le32 l_key; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_BIND_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_BIND_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_BIND_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_BIND_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_BIND_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_BIND_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_BIND_WQE_SE_FLG_MASK 0x1 /* Don't care for bind wqe */ +#define RDMA_SQ_BIND_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_BIND_WQE_INLINE_FLG_MASK 0x1 /* Should be 0 for bind wqe */ +#define RDMA_SQ_BIND_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_BIND_WQE_RESERVED0_MASK 0x7 +#define RDMA_SQ_BIND_WQE_RESERVED0_SHIFT 5 + uint8_t wqe_size /* Size of WQE in 16B chunks */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; + uint8_t bind_ctrl; +#define RDMA_SQ_BIND_WQE_ZERO_BASED_MASK 0x1 /* zero based indication */ +#define RDMA_SQ_BIND_WQE_ZERO_BASED_SHIFT 0 +#define RDMA_SQ_BIND_WQE_MW_TYPE_MASK 0x1 /* (use enum rdma_mw_type) */ +#define RDMA_SQ_BIND_WQE_MW_TYPE_SHIFT 1 +#define RDMA_SQ_BIND_WQE_RESERVED1_MASK 0x3F +#define RDMA_SQ_BIND_WQE_RESERVED1_SHIFT 2 + uint8_t access_ctrl; +#define RDMA_SQ_BIND_WQE_REMOTE_READ_MASK 0x1 +#define RDMA_SQ_BIND_WQE_REMOTE_READ_SHIFT 0 +#define RDMA_SQ_BIND_WQE_REMOTE_WRITE_MASK 0x1 +#define RDMA_SQ_BIND_WQE_REMOTE_WRITE_SHIFT 1 +#define RDMA_SQ_BIND_WQE_ENABLE_ATOMIC_MASK 0x1 +#define RDMA_SQ_BIND_WQE_ENABLE_ATOMIC_SHIFT 2 +#define RDMA_SQ_BIND_WQE_LOCAL_READ_MASK 0x1 +#define RDMA_SQ_BIND_WQE_LOCAL_READ_SHIFT 3 +#define RDMA_SQ_BIND_WQE_LOCAL_WRITE_MASK 0x1 +#define RDMA_SQ_BIND_WQE_LOCAL_WRITE_SHIFT 4 +#define RDMA_SQ_BIND_WQE_RESERVED2_MASK 0x7 +#define RDMA_SQ_BIND_WQE_RESERVED2_SHIFT 5 + uint8_t reserved3; + uint8_t length_hi /* upper 8 bits of the registered MW length */; + __le32 length_lo /* lower 32 bits of the registered MW length */; + __le32 parent_l_key /* l_key of the parent MR */; + __le32 reserved4; +}; + + +/* + * First element (16
bytes) of bind wqe + */ +struct rdma_sq_bind_wqe_1st +{ + struct regpair addr; + __le32 l_key; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_BIND_WQE_1ST_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_BIND_WQE_1ST_COMP_FLG_SHIFT 0 +#define RDMA_SQ_BIND_WQE_1ST_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_BIND_WQE_1ST_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_BIND_WQE_1ST_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_BIND_WQE_1ST_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_BIND_WQE_1ST_SE_FLG_MASK 0x1 /* Don't care for bind wqe */ +#define RDMA_SQ_BIND_WQE_1ST_SE_FLG_SHIFT 3 +#define RDMA_SQ_BIND_WQE_1ST_INLINE_FLG_MASK 0x1 /* Should be 0 for bind wqe */ +#define RDMA_SQ_BIND_WQE_1ST_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_BIND_WQE_1ST_RESERVED0_MASK 0x7 +#define RDMA_SQ_BIND_WQE_1ST_RESERVED0_SHIFT 5 + uint8_t wqe_size /* Size of WQE in 16B chunks */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +}; + + +/* + * Second element (16 bytes) of bind wqe + */ +struct rdma_sq_bind_wqe_2nd +{ + uint8_t bind_ctrl; +#define RDMA_SQ_BIND_WQE_2ND_ZERO_BASED_MASK 0x1 /* zero based indication */ +#define RDMA_SQ_BIND_WQE_2ND_ZERO_BASED_SHIFT 0 +#define RDMA_SQ_BIND_WQE_2ND_MW_TYPE_MASK 0x1 /* (use enum rdma_mw_type) */ +#define RDMA_SQ_BIND_WQE_2ND_MW_TYPE_SHIFT 1 +#define RDMA_SQ_BIND_WQE_2ND_RESERVED1_MASK 0x3F +#define RDMA_SQ_BIND_WQE_2ND_RESERVED1_SHIFT 2 + uint8_t access_ctrl; +#define RDMA_SQ_BIND_WQE_2ND_REMOTE_READ_MASK 0x1 +#define RDMA_SQ_BIND_WQE_2ND_REMOTE_READ_SHIFT 0 +#define RDMA_SQ_BIND_WQE_2ND_REMOTE_WRITE_MASK 0x1 +#define RDMA_SQ_BIND_WQE_2ND_REMOTE_WRITE_SHIFT 1 +#define RDMA_SQ_BIND_WQE_2ND_ENABLE_ATOMIC_MASK 0x1 +#define RDMA_SQ_BIND_WQE_2ND_ENABLE_ATOMIC_SHIFT 2 +#define RDMA_SQ_BIND_WQE_2ND_LOCAL_READ_MASK 0x1 +#define RDMA_SQ_BIND_WQE_2ND_LOCAL_READ_SHIFT 3 +#define RDMA_SQ_BIND_WQE_2ND_LOCAL_WRITE_MASK 0x1 +#define RDMA_SQ_BIND_WQE_2ND_LOCAL_WRITE_SHIFT 4 +#define RDMA_SQ_BIND_WQE_2ND_RESERVED2_MASK 0x7 +#define RDMA_SQ_BIND_WQE_2ND_RESERVED2_SHIFT 5 + uint8_t reserved3; + uint8_t length_hi /* upper 8 bits of the registered MW length */; + __le32 length_lo /* lower 32 bits of the registered MW length */; + __le32 parent_l_key /* l_key of the parent MR */; + __le32 reserved4; +}; + + +/* + * Structure with only the SQ WQE common fields. Size is of one SQ element (16B) + */ +struct rdma_sq_common_wqe +{ + __le32 reserved1[3]; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_COMMON_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_COMMON_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_COMMON_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_COMMON_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_COMMON_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_COMMON_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_COMMON_WQE_SE_FLG_MASK 0x1 /* If set, signal the responder to generate a solicited event on this WQE (only relevant in SENDs and RDMA write with Imm) */ +#define RDMA_SQ_COMMON_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_COMMON_WQE_INLINE_FLG_MASK 0x1 /* if set, indicates inline data is following this WQE instead of SGEs (only relevant in SENDs and RDMA writes) */ +#define RDMA_SQ_COMMON_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_COMMON_WQE_RESERVED0_MASK 0x7 +#define RDMA_SQ_COMMON_WQE_RESERVED0_SHIFT 5 + uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to number of SGEs + 1. In case of inline data: set to the whole number of 16B which contain the inline data + 1. */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +};
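Since every SQ element is 16 bytes and wqe_size counts elements, producing or walking the queue reduces to simple index arithmetic over the common header above. A minimal sketch; the sq_ring type and its fields are hypothetical scaffolding for illustration, not part of this patch:

#define SQ_ELEMENT_SIZE 16	/* one SQ slot, per the comment above */

struct sq_ring {		/* hypothetical ring state */
	void	*base;		/* start of the SQ buffer */
	uint32_t prod;		/* producer index, in 16B elements */
	uint32_t num_elements;	/* ring size, assumed a power of two */
};

static void *sq_slot(struct sq_ring *sq, uint32_t idx)
{
	return (char *)sq->base +
	       (idx & (sq->num_elements - 1)) * SQ_ELEMENT_SIZE;
}

static void sq_advance(struct sq_ring *sq, struct rdma_sq_common_wqe *wqe)
{
	sq->prod += wqe->wqe_size;	/* wqe_size is in 16B chunks */
}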
+ + +struct rdma_sq_fmr_wqe +{ + struct regpair addr; + __le32 l_key; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_FMR_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_FMR_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_FMR_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_FMR_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_FMR_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_FMR_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_FMR_WQE_SE_FLG_MASK 0x1 /* Don't care for FMR wqe */ +#define RDMA_SQ_FMR_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_FMR_WQE_INLINE_FLG_MASK 0x1 /* Should be 0 for FMR wqe */ +#define RDMA_SQ_FMR_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_FMR_WQE_DIF_ON_HOST_FLG_MASK 0x1 /* If set, indicates host memory of this WQE is DIF protected. */ +#define RDMA_SQ_FMR_WQE_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_FMR_WQE_RESERVED0_MASK 0x3 +#define RDMA_SQ_FMR_WQE_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; + uint8_t fmr_ctrl; +#define RDMA_SQ_FMR_WQE_PAGE_SIZE_LOG_MASK 0x1F /* 0 is 4k, 1 is 8k...
*/ +#define RDMA_SQ_FMR_WQE_PAGE_SIZE_LOG_SHIFT 0 +#define RDMA_SQ_FMR_WQE_ZERO_BASED_MASK 0x1 /* zero based indication */ +#define RDMA_SQ_FMR_WQE_ZERO_BASED_SHIFT 5 +#define RDMA_SQ_FMR_WQE_BIND_EN_MASK 0x1 /* indication whether bind is enabled for this MR */ +#define RDMA_SQ_FMR_WQE_BIND_EN_SHIFT 6 +#define RDMA_SQ_FMR_WQE_RESERVED1_MASK 0x1 +#define RDMA_SQ_FMR_WQE_RESERVED1_SHIFT 7 + uint8_t access_ctrl; +#define RDMA_SQ_FMR_WQE_REMOTE_READ_MASK 0x1 +#define RDMA_SQ_FMR_WQE_REMOTE_READ_SHIFT 0 +#define RDMA_SQ_FMR_WQE_REMOTE_WRITE_MASK 0x1 +#define RDMA_SQ_FMR_WQE_REMOTE_WRITE_SHIFT 1 +#define RDMA_SQ_FMR_WQE_ENABLE_ATOMIC_MASK 0x1 +#define RDMA_SQ_FMR_WQE_ENABLE_ATOMIC_SHIFT 2 +#define RDMA_SQ_FMR_WQE_LOCAL_READ_MASK 0x1 +#define RDMA_SQ_FMR_WQE_LOCAL_READ_SHIFT 3 +#define RDMA_SQ_FMR_WQE_LOCAL_WRITE_MASK 0x1 +#define RDMA_SQ_FMR_WQE_LOCAL_WRITE_SHIFT 4 +#define RDMA_SQ_FMR_WQE_RESERVED2_MASK 0x7 +#define RDMA_SQ_FMR_WQE_RESERVED2_SHIFT 5 + uint8_t reserved3; + uint8_t length_hi /* upper 8 bits of the registered MR length */; + __le32 length_lo /* lower 32 bits of the registered MR length. In case of DIF the length is specified including the DIF guards. */; + struct regpair pbl_addr /* Address of PBL */; + __le32 dif_base_ref_tag /* Ref tag of the first DIF Block. */; + __le16 dif_app_tag /* App tag of all DIF Blocks. */; + __le16 dif_app_tag_mask /* Bitmask for verifying dif_app_tag. */; + __le16 dif_runt_crc_value /* In TX IO, in case the runt_valid_flg is set, this value is used to validate the last Block in the IO. */; + __le16 dif_flags; +#define RDMA_SQ_FMR_WQE_DIF_IO_DIRECTION_FLG_MASK 0x1 /* 0=RX, 1=TX (use enum rdma_dif_io_direction_flg) */ +#define RDMA_SQ_FMR_WQE_DIF_IO_DIRECTION_FLG_SHIFT 0 +#define RDMA_SQ_FMR_WQE_DIF_BLOCK_SIZE_MASK 0x1 /* DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */ +#define RDMA_SQ_FMR_WQE_DIF_BLOCK_SIZE_SHIFT 1 +#define RDMA_SQ_FMR_WQE_DIF_RUNT_VALID_FLG_MASK 0x1 /* In TX IO, indicates the runt_value field is valid. In RX IO, indicates the calculated runt value is to be placed on host buffer. */ +#define RDMA_SQ_FMR_WQE_DIF_RUNT_VALID_FLG_SHIFT 2 +#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_CRC_GUARD_MASK 0x1 /* In TX IO, indicates CRC of each DIF guard tag is checked. */ +#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_CRC_GUARD_SHIFT 3 +#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_REF_TAG_MASK 0x1 /* In TX IO, indicates Ref tag of each DIF guard tag is checked. */ +#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_REF_TAG_SHIFT 4 +#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_APP_TAG_MASK 0x1 /* In TX IO, indicates App tag of each DIF guard tag is checked. */ +#define RDMA_SQ_FMR_WQE_DIF_VALIDATE_APP_TAG_SHIFT 5 +#define RDMA_SQ_FMR_WQE_DIF_CRC_SEED_MASK 0x1 /* DIF CRC Seed to use. 
0=0x0000 1=0xFFFF (use enum rdma_dif_crc_seed) */ +#define RDMA_SQ_FMR_WQE_DIF_CRC_SEED_SHIFT 6 +#define RDMA_SQ_FMR_WQE_RESERVED4_MASK 0x1FF +#define RDMA_SQ_FMR_WQE_RESERVED4_SHIFT 7 + __le32 Reserved5; +}; + + +/* + * First element (16 bytes) of fmr wqe + */ +struct rdma_sq_fmr_wqe_1st +{ + struct regpair addr; + __le32 l_key; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_FMR_WQE_1ST_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_FMR_WQE_1ST_COMP_FLG_SHIFT 0 +#define RDMA_SQ_FMR_WQE_1ST_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_FMR_WQE_1ST_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_FMR_WQE_1ST_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_FMR_WQE_1ST_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_FMR_WQE_1ST_SE_FLG_MASK 0x1 /* Don't care for FMR wqe */ +#define RDMA_SQ_FMR_WQE_1ST_SE_FLG_SHIFT 3 +#define RDMA_SQ_FMR_WQE_1ST_INLINE_FLG_MASK 0x1 /* Should be 0 for FMR wqe */ +#define RDMA_SQ_FMR_WQE_1ST_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_FMR_WQE_1ST_DIF_ON_HOST_FLG_MASK 0x1 /* If set, indicates host memory of this WQE is DIF protected. */ +#define RDMA_SQ_FMR_WQE_1ST_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_FMR_WQE_1ST_RESERVED0_MASK 0x3 +#define RDMA_SQ_FMR_WQE_1ST_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +}; + + +/* + * Second element (16 bytes) of fmr wqe + */ +struct rdma_sq_fmr_wqe_2nd +{ + uint8_t fmr_ctrl; +#define RDMA_SQ_FMR_WQE_2ND_PAGE_SIZE_LOG_MASK 0x1F /* 0 is 4k, 1 is 8k... */ +#define RDMA_SQ_FMR_WQE_2ND_PAGE_SIZE_LOG_SHIFT 0 +#define RDMA_SQ_FMR_WQE_2ND_ZERO_BASED_MASK 0x1 /* zero based indication */ +#define RDMA_SQ_FMR_WQE_2ND_ZERO_BASED_SHIFT 5 +#define RDMA_SQ_FMR_WQE_2ND_BIND_EN_MASK 0x1 /* indication whether bind is enabled for this MR */ +#define RDMA_SQ_FMR_WQE_2ND_BIND_EN_SHIFT 6 +#define RDMA_SQ_FMR_WQE_2ND_RESERVED1_MASK 0x1 +#define RDMA_SQ_FMR_WQE_2ND_RESERVED1_SHIFT 7 + uint8_t access_ctrl; +#define RDMA_SQ_FMR_WQE_2ND_REMOTE_READ_MASK 0x1 +#define RDMA_SQ_FMR_WQE_2ND_REMOTE_READ_SHIFT 0 +#define RDMA_SQ_FMR_WQE_2ND_REMOTE_WRITE_MASK 0x1 +#define RDMA_SQ_FMR_WQE_2ND_REMOTE_WRITE_SHIFT 1 +#define RDMA_SQ_FMR_WQE_2ND_ENABLE_ATOMIC_MASK 0x1 +#define RDMA_SQ_FMR_WQE_2ND_ENABLE_ATOMIC_SHIFT 2 +#define RDMA_SQ_FMR_WQE_2ND_LOCAL_READ_MASK 0x1 +#define RDMA_SQ_FMR_WQE_2ND_LOCAL_READ_SHIFT 3 +#define RDMA_SQ_FMR_WQE_2ND_LOCAL_WRITE_MASK 0x1 +#define RDMA_SQ_FMR_WQE_2ND_LOCAL_WRITE_SHIFT 4 +#define RDMA_SQ_FMR_WQE_2ND_RESERVED2_MASK 0x7 +#define RDMA_SQ_FMR_WQE_2ND_RESERVED2_SHIFT 5 + uint8_t reserved3; + uint8_t length_hi /* upper 8 bits of the registered MR length */; + __le32 length_lo /* lower 32 bits of the registered MR length. In case of zero based MR, will hold FBO */; + struct regpair pbl_addr /* Address of PBL */; +}; + + +/* + * Third element (16 bytes) of fmr wqe + */ +struct rdma_sq_fmr_wqe_3rd +{ + __le32 dif_base_ref_tag /* Ref tag of the first DIF Block. */; + __le16 dif_app_tag /* App tag of all DIF Blocks. */; + __le16 dif_app_tag_mask /* Bitmask for verifying dif_app_tag. */; + __le16 dif_runt_crc_value /* In TX IO, in case the runt_valid_flg is set, this value is used to validate the last Block in the IO.
*/; + __le16 dif_flags; +#define RDMA_SQ_FMR_WQE_3RD_DIF_IO_DIRECTION_FLG_MASK 0x1 /* 0=RX, 1=TX (use enum rdma_dif_io_direction_flg) */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_IO_DIRECTION_FLG_SHIFT 0 +#define RDMA_SQ_FMR_WQE_3RD_DIF_BLOCK_SIZE_MASK 0x1 /* DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_BLOCK_SIZE_SHIFT 1 +#define RDMA_SQ_FMR_WQE_3RD_DIF_RUNT_VALID_FLG_MASK 0x1 /* In TX IO, indicates the runt_value field is valid. In RX IO, indicates the calculated runt value is to be placed on host buffer. */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_RUNT_VALID_FLG_SHIFT 2 +#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_CRC_GUARD_MASK 0x1 /* In TX IO, indicates CRC of each DIF guard tag is checked. */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_CRC_GUARD_SHIFT 3 +#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_REF_TAG_MASK 0x1 /* In TX IO, indicates Ref tag of each DIF guard tag is checked. */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_REF_TAG_SHIFT 4 +#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_APP_TAG_MASK 0x1 /* In TX IO, indicates App tag of each DIF guard tag is checked. */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_VALIDATE_APP_TAG_SHIFT 5 +#define RDMA_SQ_FMR_WQE_3RD_DIF_CRC_SEED_MASK 0x1 /* DIF CRC Seed to use. 0=0x0000 1=0xFFFF (use enum rdma_dif_crc_seed) */ +#define RDMA_SQ_FMR_WQE_3RD_DIF_CRC_SEED_SHIFT 6 +#define RDMA_SQ_FMR_WQE_3RD_RESERVED4_MASK 0x1FF +#define RDMA_SQ_FMR_WQE_3RD_RESERVED4_SHIFT 7 + __le32 Reserved5; +}; + + +struct rdma_sq_local_inv_wqe +{ + struct regpair reserved; + __le32 inv_l_key /* The invalidate local key */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_LOCAL_INV_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_LOCAL_INV_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_LOCAL_INV_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_LOCAL_INV_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_LOCAL_INV_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_LOCAL_INV_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_LOCAL_INV_WQE_SE_FLG_MASK 0x1 /* Don't care for local invalidate wqe */ +#define RDMA_SQ_LOCAL_INV_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_LOCAL_INV_WQE_INLINE_FLG_MASK 0x1 /* Should be 0 for local invalidate wqe */ +#define RDMA_SQ_LOCAL_INV_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_LOCAL_INV_WQE_DIF_ON_HOST_FLG_MASK 0x1 /* If set, indicates host memory of this WQE is DIF protected. */ +#define RDMA_SQ_LOCAL_INV_WQE_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_LOCAL_INV_WQE_RESERVED0_MASK 0x3 +#define RDMA_SQ_LOCAL_INV_WQE_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +}; + + +struct rdma_sq_rdma_wqe +{ + __le32 imm_data /* The immediate data in case of RDMA_WITH_IMM */; + __le32 length /* Total data length. If DIF on host is enabled, length does NOT include DIF guards.
*/; + __le32 xrc_srq /* Valid only when XRC is set for the QP */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_RDMA_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_RDMA_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_RDMA_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_RDMA_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_RDMA_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_RDMA_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_RDMA_WQE_SE_FLG_MASK 0x1 /* If set, signal the responder to generate a solicited event on this WQE */ +#define RDMA_SQ_RDMA_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_RDMA_WQE_INLINE_FLG_MASK 0x1 /* if set, indicates inline data is following this WQE instead of SGEs. Applicable for RDMA_WR or RDMA_WR_WITH_IMM. Should be 0 for RDMA_RD */ +#define RDMA_SQ_RDMA_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_RDMA_WQE_DIF_ON_HOST_FLG_MASK 0x1 /* If set, indicates host memory of this WQE is DIF protected. */ +#define RDMA_SQ_RDMA_WQE_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_RDMA_WQE_RESERVED0_MASK 0x3 +#define RDMA_SQ_RDMA_WQE_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to number of SGEs + 1. In case of inline data: set to the whole number of 16B which contain the inline data + 1. */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; + struct regpair remote_va /* Remote virtual address */; + __le32 r_key /* Remote key */; + uint8_t dif_flags; +#define RDMA_SQ_RDMA_WQE_DIF_BLOCK_SIZE_MASK 0x1 /* if dif_on_host_flg set: DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */ +#define RDMA_SQ_RDMA_WQE_DIF_BLOCK_SIZE_SHIFT 0 +#define RDMA_SQ_RDMA_WQE_DIF_FIRST_RDMA_IN_IO_FLG_MASK 0x1 /* if dif_on_host_flg set: WQE executes first RDMA on related IO. */ +#define RDMA_SQ_RDMA_WQE_DIF_FIRST_RDMA_IN_IO_FLG_SHIFT 1 +#define RDMA_SQ_RDMA_WQE_DIF_LAST_RDMA_IN_IO_FLG_MASK 0x1 /* if dif_on_host_flg set: WQE executes last RDMA on related IO.
*/ +#define RDMA_SQ_RDMA_WQE_DIF_LAST_RDMA_IN_IO_FLG_SHIFT 2 +#define RDMA_SQ_RDMA_WQE_RESERVED1_MASK 0x1F +#define RDMA_SQ_RDMA_WQE_RESERVED1_SHIFT 3 + uint8_t reserved2[3]; +}; + + +/* + * First element (16 bytes) of rdma wqe + */ +struct rdma_sq_rdma_wqe_1st +{ + __le32 imm_data /* The immediate data in case of RDMA_WITH_IMM */; + __le32 length /* Total data length */; + __le32 xrc_srq /* Valid only when XRC is set for the QP */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_RDMA_WQE_1ST_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_RDMA_WQE_1ST_COMP_FLG_SHIFT 0 +#define RDMA_SQ_RDMA_WQE_1ST_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_RDMA_WQE_1ST_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_RDMA_WQE_1ST_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_RDMA_WQE_1ST_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_RDMA_WQE_1ST_SE_FLG_MASK 0x1 /* If set, signal the responder to generate a solicited event on this WQE */ +#define RDMA_SQ_RDMA_WQE_1ST_SE_FLG_SHIFT 3 +#define RDMA_SQ_RDMA_WQE_1ST_INLINE_FLG_MASK 0x1 /* if set, indicates inline data is following this WQE instead of SGEs. Applicable for RDMA_WR or RDMA_WR_WITH_IMM. Should be 0 for RDMA_RD */ +#define RDMA_SQ_RDMA_WQE_1ST_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_RDMA_WQE_1ST_DIF_ON_HOST_FLG_MASK 0x1 /* If set, indicates host memory of this WQE is DIF protected. */ +#define RDMA_SQ_RDMA_WQE_1ST_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_RDMA_WQE_1ST_RESERVED0_MASK 0x3 +#define RDMA_SQ_RDMA_WQE_1ST_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to number of SGEs + 1. In case of inline data: set to the whole number of 16B which contain the inline data + 1. */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +}; + + +/* + * Second element (16 bytes) of rdma wqe + */ +struct rdma_sq_rdma_wqe_2nd +{ + struct regpair remote_va /* Remote virtual address */; + __le32 r_key /* Remote key */; + uint8_t dif_flags; +#define RDMA_SQ_RDMA_WQE_2ND_DIF_BLOCK_SIZE_MASK 0x1 /* if dif_on_host_flg set: DIF block size. 0=512B 1=4096B (use enum rdma_dif_block_size) */ +#define RDMA_SQ_RDMA_WQE_2ND_DIF_BLOCK_SIZE_SHIFT 0 +#define RDMA_SQ_RDMA_WQE_2ND_DIF_FIRST_SEGMENT_FLG_MASK 0x1 /* if dif_on_host_flg set: WQE executes first DIF on related MR. */ +#define RDMA_SQ_RDMA_WQE_2ND_DIF_FIRST_SEGMENT_FLG_SHIFT 1 +#define RDMA_SQ_RDMA_WQE_2ND_DIF_LAST_SEGMENT_FLG_MASK 0x1 /* if dif_on_host_flg set: WQE executes last DIF on related MR.
*/ +#define RDMA_SQ_RDMA_WQE_2ND_DIF_LAST_SEGMENT_FLG_SHIFT 2 +#define RDMA_SQ_RDMA_WQE_2ND_RESERVED1_MASK 0x1F +#define RDMA_SQ_RDMA_WQE_2ND_RESERVED1_SHIFT 3 + uint8_t reserved2[3]; +}; + + +/* + * SQ WQE req type enumeration + */ +enum rdma_sq_req_type +{ + RDMA_SQ_REQ_TYPE_SEND, + RDMA_SQ_REQ_TYPE_SEND_WITH_IMM, + RDMA_SQ_REQ_TYPE_SEND_WITH_INVALIDATE, + RDMA_SQ_REQ_TYPE_RDMA_WR, + RDMA_SQ_REQ_TYPE_RDMA_WR_WITH_IMM, + RDMA_SQ_REQ_TYPE_RDMA_RD, + RDMA_SQ_REQ_TYPE_ATOMIC_CMP_AND_SWAP, + RDMA_SQ_REQ_TYPE_ATOMIC_ADD, + RDMA_SQ_REQ_TYPE_LOCAL_INVALIDATE, + RDMA_SQ_REQ_TYPE_FAST_MR, + RDMA_SQ_REQ_TYPE_BIND, + RDMA_SQ_REQ_TYPE_INVALID, + MAX_RDMA_SQ_REQ_TYPE +};
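To see how the split *_1st/*_2nd element structs and the req type enum fit together, here is a hedged sketch of filling an RDMA write across two consecutive SQ slots. sq_ring/sq_slot are the hypothetical helpers from the earlier sketch, regpair is assumed to be the usual lo/hi pair of little-endian words, <string.h>/<endian.h> are assumed included, and the trailing SGE elements are elided:

static void build_rdma_write(struct sq_ring *sq, uint64_t remote_va,
			     uint32_t r_key, uint32_t length,
			     uint8_t num_sges)
{
	struct rdma_sq_rdma_wqe_1st *fst = sq_slot(sq, sq->prod);
	struct rdma_sq_rdma_wqe_2nd *snd = sq_slot(sq, sq->prod + 1);

	memset(fst, 0, sizeof(*fst));
	fst->req_type = RDMA_SQ_REQ_TYPE_RDMA_WR;
	fst->length = htole32(length);
	/* two 16B header elements plus one element per SGE */
	fst->wqe_size = 2 + num_sges;

	memset(snd, 0, sizeof(*snd));
	snd->remote_va.lo = htole32((uint32_t)remote_va);
	snd->remote_va.hi = htole32((uint32_t)(remote_va >> 32));
	snd->r_key = htole32(r_key);

	sq->prod += 2;	/* rdma_sq_sge elements would follow here */
}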
+ + +struct rdma_sq_send_wqe +{ + __le32 inv_key_or_imm_data /* the r_key to invalidate in case of SEND_WITH_INVALIDATE, or the immediate data in case of SEND_WITH_IMM */; + __le32 length /* Total data length */; + __le32 xrc_srq /* Valid only when XRC is set for the QP */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_SEND_WQE_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_SEND_WQE_COMP_FLG_SHIFT 0 +#define RDMA_SQ_SEND_WQE_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_SEND_WQE_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_SEND_WQE_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_SEND_WQE_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_SEND_WQE_SE_FLG_MASK 0x1 /* If set, signal the responder to generate a solicited event on this WQE */ +#define RDMA_SQ_SEND_WQE_SE_FLG_SHIFT 3 +#define RDMA_SQ_SEND_WQE_INLINE_FLG_MASK 0x1 /* if set, indicates inline data is following this WQE instead of SGEs */ +#define RDMA_SQ_SEND_WQE_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_SEND_WQE_DIF_ON_HOST_FLG_MASK 0x1 /* Should be 0 for send wqe */ +#define RDMA_SQ_SEND_WQE_DIF_ON_HOST_FLG_SHIFT 5 +#define RDMA_SQ_SEND_WQE_RESERVED0_MASK 0x3 +#define RDMA_SQ_SEND_WQE_RESERVED0_SHIFT 6 + uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to number of SGEs + 1. In case of inline data: set to the whole number of 16B which contain the inline data + 1. */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; + __le32 reserved1[4]; +}; + + +struct rdma_sq_send_wqe_1st +{ + __le32 inv_key_or_imm_data /* the r_key to invalidate in case of SEND_WITH_INVALIDATE, or the immediate data in case of SEND_WITH_IMM */; + __le32 length /* Total data length */; + __le32 xrc_srq /* Valid only when XRC is set for the QP */; + uint8_t req_type /* Type of WQE */; + uint8_t flags; +#define RDMA_SQ_SEND_WQE_1ST_COMP_FLG_MASK 0x1 /* If set, completion will be generated when the WQE is completed */ +#define RDMA_SQ_SEND_WQE_1ST_COMP_FLG_SHIFT 0 +#define RDMA_SQ_SEND_WQE_1ST_RD_FENCE_FLG_MASK 0x1 /* If set, all pending RDMA read or Atomic operations will be completed before start processing this WQE */ +#define RDMA_SQ_SEND_WQE_1ST_RD_FENCE_FLG_SHIFT 1 +#define RDMA_SQ_SEND_WQE_1ST_INV_FENCE_FLG_MASK 0x1 /* If set, all pending operations will be completed before start processing this WQE */ +#define RDMA_SQ_SEND_WQE_1ST_INV_FENCE_FLG_SHIFT 2 +#define RDMA_SQ_SEND_WQE_1ST_SE_FLG_MASK 0x1 /* If set, signal the responder to generate a solicited event on this WQE */ +#define RDMA_SQ_SEND_WQE_1ST_SE_FLG_SHIFT 3 +#define RDMA_SQ_SEND_WQE_1ST_INLINE_FLG_MASK 0x1 /* if set, indicates inline data is following this WQE instead of SGEs */ +#define RDMA_SQ_SEND_WQE_1ST_INLINE_FLG_SHIFT 4 +#define RDMA_SQ_SEND_WQE_1ST_RESERVED0_MASK 0x7 +#define RDMA_SQ_SEND_WQE_1ST_RESERVED0_SHIFT 5 + uint8_t wqe_size /* Size of WQE in 16B chunks including all SGEs or inline data. In case there are SGEs: set to number of SGEs + 1. In case of inline data: set to the whole number of 16B which contain the inline data + 1. */; + uint8_t prev_wqe_size /* Previous WQE size in 16B chunks */; +}; + + +struct rdma_sq_send_wqe_2st +{ + __le32 reserved1[4]; +};
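The inline sizing rule repeated in the wqe_size comments above (one element for the header plus the whole number of 16B chunks that hold the inline payload) is a plain round-up; a small sketch:

/* wqe_size for an inline send: header element + payload rounded up
 * to whole 16B chunks, per the wqe_size comments above. */
static uint8_t inline_send_wqe_size(uint32_t data_len)
{
	return 1 + (data_len + 15) / 16;
}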
+ + +struct rdma_sq_sge +{ + __le32 length /* Total length of the send. If DIF on host is enabled, SGE length includes the DIF guards. */; + struct regpair addr; + __le32 l_key; +}; + + +struct rdma_srq_wqe_header +{ + struct regpair wr_id; + uint8_t num_sges /* number of SGEs in WQE */; + uint8_t reserved2[7]; +}; + +struct rdma_srq_sge +{ + struct regpair addr; + __le32 length; + __le32 l_key; +}; + +/* + * rdma srq element + */ +union rdma_srq_elm +{ + struct rdma_srq_wqe_header header; + struct rdma_srq_sge sge; +}; + + + + +/* + * Rdma doorbell data for flags update + */ +struct rdma_pwm_flags_data +{ + __le16 icid /* internal CID */; + uint8_t agg_flags /* aggregative flags */; + uint8_t reserved; +}; + + +/* + * Rdma doorbell data for SQ and RQ + */ +struct rdma_pwm_val16_data +{ + __le16 icid /* internal CID */; + __le16 value /* aggregated value to update */; +}; + + +union rdma_pwm_val16_data_union +{ + struct rdma_pwm_val16_data as_struct /* Parameters field */; + __le32 as_dword; +}; + + +/* + * Rdma doorbell data for CQ + */ +struct rdma_pwm_val32_data +{ + __le16 icid /* internal CID */; + uint8_t agg_flags /* bit for every DQ counter flags in CM context that DQ can increment */; + uint8_t params; +#define RDMA_PWM_VAL32_DATA_AGG_CMD_MASK 0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */ +#define RDMA_PWM_VAL32_DATA_AGG_CMD_SHIFT 0 +#define RDMA_PWM_VAL32_DATA_BYPASS_EN_MASK 0x1 /* enable QM bypass */ +#define RDMA_PWM_VAL32_DATA_BYPASS_EN_SHIFT 2 +#define RDMA_PWM_VAL32_DATA_RESERVED_MASK 0x1F +#define RDMA_PWM_VAL32_DATA_RESERVED_SHIFT 3 + __le32 value /* aggregated value to update */; +}; + + +union rdma_pwm_val32_data_union +{ + struct rdma_pwm_val32_data as_struct /* Parameters field */; + struct regpair as_repair; +}; + +#endif /* __QED_HSI_RDMA__ */
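As an illustration of the doorbell records above: a CQ doorbell is composed as an rdma_pwm_val32_data and handed to the device as one aligned 64-bit store. The sketch below is hedged; the doorbell address, the enum db_agg_cmd_sel value, and the memory barriers it omits all live outside these headers:

static void ring_cq_doorbell(volatile uint64_t *db_addr, uint16_t icid,
			     uint32_t agg_value)
{
	union rdma_pwm_val32_data_union db;
	uint64_t raw;

	memset(&db, 0, sizeof(db));
	db.as_struct.icid = htole16(icid);
	/* agg command value comes from enum db_agg_cmd_sel (elsewhere) */
	SET_FIELD(db.as_struct.params, RDMA_PWM_VAL32_DATA_AGG_CMD, 0);
	db.as_struct.value = htole32(agg_value);

	memcpy(&raw, &db, sizeof(raw));
	*db_addr = raw;	/* single 64-bit MMIO write; barriers omitted */
}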
diff --git a/providers/qedr/rdma_common.h b/providers/qedr/rdma_common.h new file mode 100644 index 0000000..0c25793 --- /dev/null +++ b/providers/qedr/rdma_common.h @@ -0,0 +1,74 @@ +/* + * Copyright (c) 2015-2016 QLogic Corporation + * + * This software is available to you under a choice of one of two + * licenses. You may choose to be licensed under the terms of the GNU + * General Public License (GPL) Version 2, available from the file + * COPYING in the main directory of this source tree, or the + * OpenIB.org BSD license below: + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and /or other materials + * provided with the distribution. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +#ifndef __RDMA_COMMON__ +#define __RDMA_COMMON__ +/************************/ +/* RDMA FW CONSTANTS */ +/************************/ + +#define RDMA_RESERVED_LKEY (0) //Reserved lkey +#define RDMA_RING_PAGE_SIZE (0x1000) //4KB pages + +#define RDMA_MAX_SGE_PER_SQ_WQE (4) //max number of SGEs in a single request +#define RDMA_MAX_SGE_PER_RQ_WQE (4) //max number of SGEs in a single request + +#define RDMA_MAX_DATA_SIZE_IN_WQE (0x7FFFFFFF) //max size of data in single request + +#define RDMA_REQ_RD_ATOMIC_ELM_SIZE (0x50) +#define RDMA_RESP_RD_ATOMIC_ELM_SIZE (0x20) + +#define RDMA_MAX_CQS (64*1024) +#define RDMA_MAX_TIDS (128*1024-1) +#define RDMA_MAX_PDS (64*1024) + +#define RDMA_NUM_STATISTIC_COUNTERS MAX_NUM_VPORTS +#define RDMA_NUM_STATISTIC_COUNTERS_K2 MAX_NUM_VPORTS_K2 +#define RDMA_NUM_STATISTIC_COUNTERS_BB MAX_NUM_VPORTS_BB + +#define RDMA_TASK_TYPE (PROTOCOLID_ROCE) + + +struct rdma_srq_id +{ + __le16 srq_idx /* SRQ index */; + __le16 opaque_fid; +}; + + +struct rdma_srq_producers +{ + __le32 sge_prod /* Current produced sge in SRQ */; + __le32 wqe_prod /* Current produced WQE to SRQ */; +}; + +#endif /* __RDMA_COMMON__ */ diff --git a/providers/qedr/roce_common.h b/providers/qedr/roce_common.h new file mode 100644 index 0000000..b01c2ad --- /dev/null +++ b/providers/qedr/roce_common.h @@ -0,0 +1,50 @@ +/* + * Copyright (c) 2015-2016 QLogic Corporation + * + * This software is available to you under a choice of one of two + * licenses. You may choose to be licensed under the terms of the GNU + * General Public License (GPL) Version 2, available from the file + * COPYING in the main directory of this source tree, or the + * OpenIB.org BSD license below: + * + * Redistribution and use in source and binary forms, with or + * without modification, are permitted provided that the following + * conditions are met: + * + * - Redistributions of source code must retain the above + * copyright notice, this list of conditions and the following + * disclaimer. + * + * - Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following + * disclaimer in the documentation and /or other materials + * provided with the distribution. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +#ifndef __ROCE_COMMON__ +#define __ROCE_COMMON__ +/************************************************************************/ +/* Add include to common rdma target for both eCore and protocol rdma driver */ +/************************************************************************/ +#include "rdma_common.h" +/************************/ +/* ROCE FW CONSTANTS */ +/************************/ + +#define ROCE_REQ_MAX_INLINE_DATA_SIZE (256) //max size of inline data in single request +#define ROCE_REQ_MAX_SINGLE_SQ_WQE_SIZE (288) //Maximum size of single SQ WQE (rdma wqe and inline data) + +#define ROCE_MAX_QPS (32*1024) +#define ROCE_DCQCN_NP_MAX_QPS (64) /* notification point max QPs*/ +#define ROCE_DCQCN_RP_MAX_QPS (64) /* reaction point max QPs*/ + +#endif /* __ROCE_COMMON__ */
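Finally, the FW limits in rdma_common.h and roce_common.h are the numbers a provider would validate against before building WQEs; a trivial sketch of that kind of check:

static int sq_post_params_ok(unsigned int num_sges, uint32_t inline_len)
{
	if (num_sges > RDMA_MAX_SGE_PER_SQ_WQE)
		return 0;
	if (inline_len > ROCE_REQ_MAX_INLINE_DATA_SIZE)
		return 0;
	return 1;
}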