From patchwork Mon Dec 9 22:49:33 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Kirsher, Jeffrey T"
X-Patchwork-Id: 11280561
From: Jeff Kirsher
To: davem@davemloft.net, gregkh@linuxfoundation.org
Cc: Mustafa Ismail, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    nhorman@redhat.com, sassmann@redhat.com, jgg@ziepe.ca, parav@mellanox.com,
    Shiraz Saleem, Jeff Kirsher
Subject: [PATCH v3 18/20] RDMA/irdma: Add ABI definitions
Date: Mon, 9 Dec 2019 14:49:33 -0800
Message-Id: <20191209224935.1780117-19-jeffrey.t.kirsher@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191209224935.1780117-1-jeffrey.t.kirsher@intel.com>
References: <20191209224935.1780117-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Mustafa Ismail

Add ABI definitions for irdma.

Signed-off-by: Mustafa Ismail
Signed-off-by: Shiraz Saleem
Signed-off-by: Jeff Kirsher
---
 include/uapi/rdma/irdma-abi.h | 131 ++++++++++++++++++++++++++++++++++
 1 file changed, 131 insertions(+)
 create mode 100644 include/uapi/rdma/irdma-abi.h

diff --git a/include/uapi/rdma/irdma-abi.h b/include/uapi/rdma/irdma-abi.h
new file mode 100644
index 000000000000..0875d73e0b75
--- /dev/null
+++ b/include/uapi/rdma/irdma-abi.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB */
+/*
+ * Copyright (c) 2006 - 2019 Intel Corporation. All rights reserved.
+ * Copyright (c) 2005 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Cisco Systems. All rights reserved.
+ * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
+ */
+
+#ifndef IRDMA_ABI_H
+#define IRDMA_ABI_H
+
+#include <linux/types.h>
+
+/* irdma must support legacy GEN_1 i40iw kernel
+ * and user-space whose last ABI ver is 5
+ */
+#define IRDMA_ABI_VER 6
+
+enum irdma_memreg_type {
+	IW_MEMREG_TYPE_MEM  = 0,
+	IW_MEMREG_TYPE_QP   = 1,
+	IW_MEMREG_TYPE_CQ   = 2,
+	IW_MEMREG_TYPE_RSVD = 3,
+	IW_MEMREG_TYPE_MW   = 4,
+};
+
+struct irdma_alloc_ucontext_req {
+	__u32 rsvd32;
+	__u8 userspace_ver;
+	__u8 rsvd8[3];
+};
+
+struct i40iw_alloc_ucontext_req {
+	__u32 rsvd32;
+	__u8 userspace_ver;
+	__u8 rsvd8[3];
+};
+
+struct irdma_alloc_ucontext_resp {
+	__aligned_u64 feature_flags;
+	__u32 max_hw_wq_frags;
+	__u32 max_hw_read_sges;
+	__u32 max_hw_inline;
+	__u32 max_hw_rq_quanta;
+	__u32 max_hw_wq_quanta;
+	__u32 min_hw_cq_size;
+	__u32 max_hw_cq_size;
+	__u32 rsvd1[7];
+	__u16 max_hw_sq_chunk;
+	__u16 rsvd2[11];
+	__u8 kernel_ver;
+	__u8 hw_rev;
+	__u8 rsvd3[6];
+};
+
+struct i40iw_alloc_ucontext_resp {
+	__u32 max_pds;
+	__u32 max_qps;
+	__u32 wq_size;		/* size of the WQs (SQ+RQ) in the mmaped area */
+	__u8 kernel_ver;
+	__u8 rsvd[3];
+};
+
+struct irdma_alloc_pd_resp {
+	__u32 pd_id;
+	__u8 rsvd[4];
+};
+
+struct irdma_resize_cq_req {
+	__aligned_u64 user_cq_buffer;
+};
+
+struct irdma_create_cq_req {
+	__aligned_u64 user_cq_buf;
+	__aligned_u64 user_shadow_area;
+};
+
+struct irdma_create_qp_req {
+	__aligned_u64 user_wqe_bufs;
+	__aligned_u64 user_compl_ctx;
+};
+
+struct i40iw_create_qp_req {
+	__aligned_u64 user_wqe_bufs;
+	__aligned_u64 user_compl_ctx;
+};
+
+struct irdma_mem_reg_req {
+	__u16 reg_type;		/* Memory, QP or CQ */
+	__u16 cq_pages;
+	__u16 rq_pages;
+	__u16 sq_pages;
+};
+
+struct irdma_create_cq_resp {
+	__u32 cq_id;
+	__u32 cq_size;
+};
+
+struct irdma_create_qp_resp {
+	__u32 qp_id;
+	__u32 actual_sq_size;
+	__u32 actual_rq_size;
+	__u32 irdma_drv_opt;
+	__u32 qp_caps;
+	__u16 rsvd1;
+	__u8 lsmm;
+	__u8 rsvd2;
+};
+
+struct i40iw_create_qp_resp {
+	__u32 qp_id;
+	__u32 actual_sq_size;
+	__u32 actual_rq_size;
+	__u32 i40iw_drv_opt;
+	__u16 push_idx;
+	__u8 lsmm;
+	__u8 rsvd;
+};
+
+struct irdma_modify_qp_resp {
+	__u16 push_idx;
+	__u16 push_offset;
+	__u8 rsvd[4];
+};
+
+struct irdma_create_ah_resp {
+	__u32 ah_id;
+	__u8 rsvd[4];
+};
+#endif /* IRDMA_ABI_H */
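
For reviewers who want to exercise the new ABI from user space, below is a
minimal, hypothetical sanity check; it is not part of this patch. It assumes
the uapi headers from a tree carrying this patch are installed so that
<rdma/irdma-abi.h> resolves, and the file name abi_check.c plus the build
command are illustrative only. The check verifies at compile time that the
legacy i40iw and new irdma ucontext request structs stay layout-compatible,
which is what lets the driver tell a GEN_1 (ABI version 5) user space apart
from a version 6 one by reading userspace_ver at a common offset.

/* abi_check.c - hypothetical example, not part of the patch.
 * Build (illustrative): gcc -std=c11 -o abi_check abi_check.c
 */
#include <stdio.h>
#include <stddef.h>
#include <rdma/irdma-abi.h>

/* Both request structs must keep the same size and field layout so the
 * kernel can parse either one during alloc_ucontext.
 */
_Static_assert(sizeof(struct irdma_alloc_ucontext_req) ==
	       sizeof(struct i40iw_alloc_ucontext_req),
	       "ucontext request structs must stay the same size");
_Static_assert(offsetof(struct irdma_alloc_ucontext_req, userspace_ver) ==
	       offsetof(struct i40iw_alloc_ucontext_req, userspace_ver),
	       "userspace_ver must sit at the same offset in both ABIs");

int main(void)
{
	struct irdma_alloc_ucontext_req req = {
		/* A current provider reports 6 (IRDMA_ABI_VER); a legacy
		 * GEN_1 provider would still report 5 here.
		 */
		.userspace_ver = IRDMA_ABI_VER,
	};

	printf("irdma ABI version %u, request size %zu bytes\n",
	       req.userspace_ver, sizeof(req));
	return 0;
}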