From patchwork Mon Nov 6 19:48:53 2017
From: Bryan Tan <bryantan@vmware.com>
Date: Mon, 6 Nov 2017 11:48:53 -0800
To: linux-rdma@vger.kernel.org
Subject: [PATCH for-next v1] RDMA/vmw_pvrdma: Add shared receive queue support
Message-ID: <20171106194847.GA5038@bryantan-devbox.prom.eng.vmware.com.prom.eng.vmware.com>

Add the required functions needed to support SRQs. Currently, kernel
clients are not supported. SRQs will only be available in userspace.

Reviewed-by: Adit Ranadive
Reviewed-by: Aditya Sarwade
Reviewed-by: Jorgen Hansen
Reviewed-by: Nitish Bhat
Signed-off-by: Bryan Tan
Reviewed-by: Yuval Shaia
---
v0 -> v1 Changelog:
- Change SRQ functions to be more consistent with the QP functions
- Move check for kernel clients to before resource initialization
- Use refcount_t instead of atomic_t
- Change allocation from (void *) to (struct pvrdma_srq *)
- Remove unnecessary initialization of srq_tbl to NULL
- Check for srq_tbl in SRQ event handler
- Only register SRQ functions with IB/core when the underlying device
  has SRQ capabilities.
---
 drivers/infiniband/hw/vmw_pvrdma/Makefile         |   2 +-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma.h         |  25 ++
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h |  54 ++++
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c    |  59 +++-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c      |  55 +++-
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c     | 319 ++++++++++++++++++++++
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c   |   3 +
 drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h   |  18 ++
 include/uapi/rdma/vmw_pvrdma-abi.h                |   2 +
 9 files changed, 523 insertions(+), 14 deletions(-)
 create mode 100644 drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c

diff --git a/drivers/infiniband/hw/vmw_pvrdma/Makefile b/drivers/infiniband/hw/vmw_pvrdma/Makefile
index 0194ed1..2f52e0a 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/Makefile
+++ b/drivers/infiniband/hw/vmw_pvrdma/Makefile
@@ -1,3 +1,3 @@
 obj-$(CONFIG_INFINIBAND_VMWARE_PVRDMA) += vmw_pvrdma.o
 
-vmw_pvrdma-y := pvrdma_cmd.o pvrdma_cq.o pvrdma_doorbell.o pvrdma_main.o pvrdma_misc.o pvrdma_mr.o pvrdma_qp.o pvrdma_verbs.o
+vmw_pvrdma-y := pvrdma_cmd.o pvrdma_cq.o pvrdma_doorbell.o pvrdma_main.o pvrdma_misc.o pvrdma_mr.o pvrdma_qp.o pvrdma_srq.o pvrdma_verbs.o
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
index 984aa34..63bc2ef 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
@@ -162,6 +162,22 @@ struct pvrdma_ah {
 	struct pvrdma_av av;
 };
 
+struct pvrdma_srq {
+	struct ib_srq ibsrq;
+	int offset;
+	spinlock_t lock; /* SRQ lock. */
+	int wqe_cnt;
+	int wqe_size;
+	int max_gs;
+	struct ib_umem *umem;
+	struct pvrdma_ring_state *ring;
+	struct pvrdma_page_dir pdir;
+	u32 srq_handle;
+	int npages;
+	refcount_t refcnt;
+	wait_queue_head_t wait;
+};
+
 struct pvrdma_qp {
 	struct ib_qp ibqp;
 	u32 qp_handle;
@@ -171,6 +187,7 @@ struct pvrdma_qp {
 	struct ib_umem *rumem;
 	struct ib_umem *sumem;
 	struct pvrdma_page_dir pdir;
+	struct pvrdma_srq *srq;
 	int npages;
 	int npages_send;
 	int npages_recv;
@@ -210,6 +227,8 @@ struct pvrdma_dev {
 	struct pvrdma_page_dir cq_pdir;
 	struct pvrdma_cq **cq_tbl;
 	spinlock_t cq_tbl_lock;
+	struct pvrdma_srq **srq_tbl;
+	spinlock_t srq_tbl_lock;
 	struct pvrdma_qp **qp_tbl;
 	spinlock_t qp_tbl_lock;
 	struct pvrdma_uar_table uar_table;
@@ -221,6 +240,7 @@ struct pvrdma_dev {
 	bool ib_active;
 	atomic_t num_qps;
 	atomic_t num_cqs;
+	atomic_t num_srqs;
 	atomic_t num_pds;
 	atomic_t num_ahs;
 
@@ -256,6 +276,11 @@ static inline struct pvrdma_cq *to_vcq(struct ib_cq *ibcq)
 	return container_of(ibcq, struct pvrdma_cq, ibcq);
 }
 
+static inline struct pvrdma_srq *to_vsrq(struct ib_srq *ibsrq)
+{
+	return container_of(ibsrq, struct pvrdma_srq, ibsrq);
+}
+
 static inline struct pvrdma_user_mr *to_vmr(struct ib_mr *ibmr)
 {
 	return container_of(ibmr, struct pvrdma_user_mr, ibmr);
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h
index df0a6b5..6fd5a8f 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h
@@ -339,6 +339,10 @@ enum {
 	PVRDMA_CMD_DESTROY_UC,
 	PVRDMA_CMD_CREATE_BIND,
 	PVRDMA_CMD_DESTROY_BIND,
+	PVRDMA_CMD_CREATE_SRQ,
+	PVRDMA_CMD_MODIFY_SRQ,
+	PVRDMA_CMD_QUERY_SRQ,
+	PVRDMA_CMD_DESTROY_SRQ,
 	PVRDMA_CMD_MAX,
 };
 
@@ -361,6 +365,10 @@ enum {
 	PVRDMA_CMD_DESTROY_UC_RESP_NOOP,
 	PVRDMA_CMD_CREATE_BIND_RESP_NOOP,
 	PVRDMA_CMD_DESTROY_BIND_RESP_NOOP,
+	PVRDMA_CMD_CREATE_SRQ_RESP,
+	PVRDMA_CMD_MODIFY_SRQ_RESP,
+	PVRDMA_CMD_QUERY_SRQ_RESP,
+	PVRDMA_CMD_DESTROY_SRQ_RESP,
 	PVRDMA_CMD_MAX_RESP,
 };
 
@@ -495,6 +503,46 @@ struct pvrdma_cmd_destroy_cq {
 	u8 reserved[4];
 };
 
+struct pvrdma_cmd_create_srq {
+	struct pvrdma_cmd_hdr hdr;
+	u64 pdir_dma;
+	u32 pd_handle;
+	u32 nchunks;
+	struct pvrdma_srq_attr attrs;
+	u8 srq_type;
+	u8 reserved[7];
+};
+
+struct pvrdma_cmd_create_srq_resp {
+	struct pvrdma_cmd_resp_hdr hdr;
+	u32 srqn;
+	u8 reserved[4];
+};
+
+struct pvrdma_cmd_modify_srq {
+	struct pvrdma_cmd_hdr hdr;
+	u32 srq_handle;
+	u32 attr_mask;
+	struct pvrdma_srq_attr attrs;
+};
+
+struct pvrdma_cmd_query_srq {
+	struct pvrdma_cmd_hdr hdr;
+	u32 srq_handle;
+	u8 reserved[4];
+};
+
+struct pvrdma_cmd_query_srq_resp {
+	struct pvrdma_cmd_resp_hdr hdr;
+	struct pvrdma_srq_attr attrs;
+};
+
+struct pvrdma_cmd_destroy_srq {
+	struct pvrdma_cmd_hdr hdr;
+	u32 srq_handle;
+	u8 reserved[4];
+};
+
 struct pvrdma_cmd_create_qp {
 	struct pvrdma_cmd_hdr hdr;
 	u64 pdir_dma;
@@ -594,6 +642,10 @@ struct pvrdma_cmd_destroy_bind {
 	struct pvrdma_cmd_destroy_qp destroy_qp;
 	struct pvrdma_cmd_create_bind create_bind;
 	struct pvrdma_cmd_destroy_bind destroy_bind;
+	struct pvrdma_cmd_create_srq create_srq;
+	struct pvrdma_cmd_modify_srq modify_srq;
+	struct pvrdma_cmd_query_srq query_srq;
+	struct pvrdma_cmd_destroy_srq destroy_srq;
 };
 
 union pvrdma_cmd_resp {
@@ -608,6 +660,8 @@ struct pvrdma_cmd_destroy_bind {
 	struct pvrdma_cmd_create_qp_resp create_qp_resp;
 	struct pvrdma_cmd_query_qp_resp query_qp_resp;
 	struct pvrdma_cmd_destroy_qp_resp destroy_qp_resp;
+	struct pvrdma_cmd_create_srq_resp create_srq_resp;
+	struct pvrdma_cmd_query_srq_resp query_srq_resp;
 };
 
 #endif /* __PVRDMA_DEV_API_H__ */
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
index 6ce709a..1f4e187 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
@@ -118,6 +118,7 @@ static int pvrdma_init_device(struct pvrdma_dev *dev)
 	spin_lock_init(&dev->cmd_lock);
 	sema_init(&dev->cmd_sema, 1);
 	atomic_set(&dev->num_qps, 0);
+	atomic_set(&dev->num_srqs, 0);
 	atomic_set(&dev->num_cqs, 0);
 	atomic_set(&dev->num_pds, 0);
 	atomic_set(&dev->num_ahs, 0);
@@ -254,9 +255,32 @@ static int pvrdma_register_device(struct pvrdma_dev *dev)
 		goto err_cq_free;
 	spin_lock_init(&dev->qp_tbl_lock);
 
+	/* Check if SRQ is supported by backend */
+	if (dev->dsr->caps.max_srq) {
+		dev->ib_dev.uverbs_cmd_mask |=
+			(1ull << IB_USER_VERBS_CMD_CREATE_SRQ) |
+			(1ull << IB_USER_VERBS_CMD_MODIFY_SRQ) |
+			(1ull << IB_USER_VERBS_CMD_QUERY_SRQ) |
+			(1ull << IB_USER_VERBS_CMD_DESTROY_SRQ) |
+			(1ull << IB_USER_VERBS_CMD_POST_SRQ_RECV);
+
+		dev->ib_dev.create_srq = pvrdma_create_srq;
+		dev->ib_dev.modify_srq = pvrdma_modify_srq;
+		dev->ib_dev.query_srq = pvrdma_query_srq;
+		dev->ib_dev.destroy_srq = pvrdma_destroy_srq;
+		dev->ib_dev.post_srq_recv = pvrdma_post_srq_recv;
+
+		dev->srq_tbl = kcalloc(dev->dsr->caps.max_srq,
+				       sizeof(struct pvrdma_srq *),
+				       GFP_KERNEL);
+		if (!dev->srq_tbl)
+			goto err_qp_free;
+	}
+	spin_lock_init(&dev->srq_tbl_lock);
+
 	ret = ib_register_device(&dev->ib_dev, NULL);
 	if (ret)
-		goto err_qp_free;
+		goto err_srq_free;
 
 	for (i = 0; i < ARRAY_SIZE(pvrdma_class_attributes); ++i) {
 		ret = device_create_file(&dev->ib_dev.dev,
@@ -271,6 +295,8 @@ static int pvrdma_register_device(struct pvrdma_dev *dev)
 
 err_class:
 	ib_unregister_device(&dev->ib_dev);
+err_srq_free:
+	kfree(dev->srq_tbl);
 err_qp_free:
 	kfree(dev->qp_tbl);
 err_cq_free:
@@ -353,6 +379,35 @@ static void pvrdma_cq_event(struct pvrdma_dev *dev, u32 cqn, int type)
 	}
 }
 
+static void pvrdma_srq_event(struct pvrdma_dev *dev, u32 srqn, int type)
+{
+	struct pvrdma_srq *srq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->srq_tbl_lock, flags);
+	if (dev->srq_tbl)
+		srq = dev->srq_tbl[srqn % dev->dsr->caps.max_srq];
+	else
+		srq = NULL;
+	if (srq)
+		refcount_inc(&srq->refcnt);
+	spin_unlock_irqrestore(&dev->srq_tbl_lock, flags);
+
+	if (srq && srq->ibsrq.event_handler) {
+		struct ib_srq *ibsrq = &srq->ibsrq;
+		struct ib_event e;
+
+		e.device = ibsrq->device;
+		e.element.srq = ibsrq;
+		e.event = type; /* 1:1 mapping for now. */
+		ibsrq->event_handler(&e, ibsrq->srq_context);
+	}
+	if (srq) {
+		if (refcount_dec_and_test(&srq->refcnt))
+			wake_up(&srq->wait);
+	}
+}
+
 static void pvrdma_dispatch_event(struct pvrdma_dev *dev, int port,
 				  enum ib_event_type event)
 {
@@ -423,6 +478,7 @@ static irqreturn_t pvrdma_intr1_handler(int irq, void *dev_id)
 
 		case PVRDMA_EVENT_SRQ_ERR:
 		case PVRDMA_EVENT_SRQ_LIMIT_REACHED:
+			pvrdma_srq_event(dev, eqe->info, eqe->type);
 			break;
 
 		case PVRDMA_EVENT_PORT_ACTIVE:
@@ -1059,6 +1115,7 @@ static void pvrdma_pci_remove(struct pci_dev *pdev)
 	iounmap(dev->regs);
 	kfree(dev->sgid_tbl);
 	kfree(dev->cq_tbl);
+	kfree(dev->srq_tbl);
 	kfree(dev->qp_tbl);
 	pvrdma_uar_table_cleanup(dev);
 	iounmap(dev->driver_uar.map);
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
index ed34d5a..10420a1 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
@@ -198,6 +198,7 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 	struct pvrdma_create_qp ucmd;
 	unsigned long flags;
 	int ret;
+	bool is_srq = !!init_attr->srq;
 
 	if (init_attr->create_flags) {
 		dev_warn(&dev->pdev->dev,
@@ -214,6 +215,12 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 		return ERR_PTR(-EINVAL);
 	}
 
+	if (is_srq && !dev->dsr->caps.max_srq) {
+		dev_warn(&dev->pdev->dev,
+			 "SRQs not supported by device\n");
+		return ERR_PTR(-EINVAL);
+	}
+
 	if (!atomic_add_unless(&dev->num_qps, 1, dev->dsr->caps.max_qp))
 		return ERR_PTR(-ENOMEM);
 
@@ -252,26 +259,36 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 			goto err_qp;
 		}
 
-		/* set qp->sq.wqe_cnt, shift, buf_size.. */
-		qp->rumem = ib_umem_get(pd->uobject->context,
-					ucmd.rbuf_addr,
-					ucmd.rbuf_size, 0, 0);
-		if (IS_ERR(qp->rumem)) {
-			ret = PTR_ERR(qp->rumem);
-			goto err_qp;
+		if (!is_srq) {
+			/* set qp->sq.wqe_cnt, shift, buf_size.. */
+			qp->rumem = ib_umem_get(pd->uobject->context,
+						ucmd.rbuf_addr,
+						ucmd.rbuf_size, 0, 0);
+			if (IS_ERR(qp->rumem)) {
+				ret = PTR_ERR(qp->rumem);
+				goto err_qp;
+			}
+			qp->srq = NULL;
+		} else {
+			qp->rumem = NULL;
+			qp->srq = to_vsrq(init_attr->srq);
 		}
 
 		qp->sumem = ib_umem_get(pd->uobject->context,
 					ucmd.sbuf_addr,
 					ucmd.sbuf_size, 0, 0);
 		if (IS_ERR(qp->sumem)) {
-			ib_umem_release(qp->rumem);
+			if (!is_srq)
+				ib_umem_release(qp->rumem);
 			ret = PTR_ERR(qp->sumem);
 			goto err_qp;
 		}
 
 		qp->npages_send = ib_umem_page_count(qp->sumem);
-		qp->npages_recv = ib_umem_page_count(qp->rumem);
+		if (!is_srq)
+			qp->npages_recv = ib_umem_page_count(qp->rumem);
+		else
+			qp->npages_recv = 0;
 		qp->npages = qp->npages_send + qp->npages_recv;
 	} else {
 		qp->is_kernel = true;
@@ -312,12 +329,14 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 
 		if (!qp->is_kernel) {
 			pvrdma_page_dir_insert_umem(&qp->pdir, qp->sumem, 0);
-			pvrdma_page_dir_insert_umem(&qp->pdir, qp->rumem,
-						    qp->npages_send);
+			if (!is_srq)
+				pvrdma_page_dir_insert_umem(&qp->pdir,
+							    qp->rumem,
+							    qp->npages_send);
 		} else {
 			/* Ring state is always the first page. */
 			qp->sq.ring = qp->pdir.pages[0];
-			qp->rq.ring = &qp->sq.ring[1];
+			qp->rq.ring = is_srq ? NULL : &qp->sq.ring[1];
 		}
 		break;
 	default:
@@ -333,6 +352,10 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 	cmd->pd_handle = to_vpd(pd)->pd_handle;
 	cmd->send_cq_handle = to_vcq(init_attr->send_cq)->cq_handle;
 	cmd->recv_cq_handle = to_vcq(init_attr->recv_cq)->cq_handle;
+	if (is_srq)
+		cmd->srq_handle = to_vsrq(init_attr->srq)->srq_handle;
+	else
+		cmd->srq_handle = 0;
 	cmd->max_send_wr = init_attr->cap.max_send_wr;
 	cmd->max_recv_wr = init_attr->cap.max_recv_wr;
 	cmd->max_send_sge = init_attr->cap.max_send_sge;
@@ -340,6 +363,8 @@ struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 	cmd->max_inline_data = init_attr->cap.max_inline_data;
 	cmd->sq_sig_all = (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) ? 1 : 0;
 	cmd->qp_type = ib_qp_type_to_pvrdma(init_attr->qp_type);
+	cmd->is_srq = is_srq;
+	cmd->lkey = 0;
 	cmd->access_flags = IB_ACCESS_LOCAL_WRITE;
 	cmd->total_chunks = qp->npages;
 	cmd->send_chunks = qp->npages_send - PVRDMA_QP_NUM_HEADER_PAGES;
@@ -815,6 +840,12 @@ int pvrdma_post_recv(struct ib_qp *ibqp, struct ib_recv_wr *wr,
 		return -EINVAL;
 	}
 
+	if (qp->srq) {
+		dev_warn(&dev->pdev->dev, "QP associated with SRQ\n");
+		*bad_wr = wr;
+		return -EINVAL;
+	}
+
 	spin_lock_irqsave(&qp->rq.lock, flags);
 
 	while (wr) {
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
new file mode 100644
index 0000000..826ccb8
--- /dev/null
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
@@ -0,0 +1,319 @@
+/*
+ * Copyright (c) 2016-2017 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of EITHER the GNU General Public License
+ * version 2 as published by the Free Software Foundation or the BSD
+ * 2-Clause License. This program is distributed in the hope that it
+ * will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED
+ * WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
+ * See the GNU General Public License version 2 for more details at
+ * http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program available in the file COPYING in the main
+ * directory of this source tree.
+ *
+ * The BSD 2-Clause License
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ *   copyright notice, this list of conditions and the following
+ *   disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ *   copyright notice, this list of conditions and the following
+ *   disclaimer in the documentation and/or other materials
+ *   provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <asm/page.h>
+#include <linux/io.h>
+#include <linux/wait.h>
+#include <rdma/ib_addr.h>
+#include <rdma/ib_smi.h>
+#include <rdma/ib_user_verbs.h>
+
+#include "pvrdma.h"
+
+int pvrdma_post_srq_recv(struct ib_srq *ibsrq, struct ib_recv_wr *wr,
+			 struct ib_recv_wr **bad_wr)
+{
+	/* No support for kernel clients. */
+	return -EOPNOTSUPP;
+}
+
+/**
+ * pvrdma_query_srq - query shared receive queue
+ * @ibsrq: the shared receive queue to query
+ * @srq_attr: attributes to query and return to client
+ *
+ * @return: 0 for success, otherwise returns an errno.
+ */
+int pvrdma_query_srq(struct ib_srq *ibsrq, struct ib_srq_attr *srq_attr)
+{
+	struct pvrdma_dev *dev = to_vdev(ibsrq->device);
+	struct pvrdma_srq *srq = to_vsrq(ibsrq);
+	union pvrdma_cmd_req req;
+	union pvrdma_cmd_resp rsp;
+	struct pvrdma_cmd_query_srq *cmd = &req.query_srq;
+	struct pvrdma_cmd_query_srq_resp *resp = &rsp.query_srq_resp;
+	int ret;
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->hdr.cmd = PVRDMA_CMD_QUERY_SRQ;
+	cmd->srq_handle = srq->srq_handle;
+
+	ret = pvrdma_cmd_post(dev, &req, &rsp, PVRDMA_CMD_QUERY_SRQ_RESP);
+	if (ret < 0) {
+		dev_warn(&dev->pdev->dev,
+			 "could not query shared receive queue, error: %d\n",
+			 ret);
+		return -EINVAL;
+	}
+
+	srq_attr->srq_limit = resp->attrs.srq_limit;
+	srq_attr->max_wr = resp->attrs.max_wr;
+	srq_attr->max_sge = resp->attrs.max_sge;
+
+	return 0;
+}
+
+/**
+ * pvrdma_create_srq - create shared receive queue
+ * @pd: protection domain
+ * @init_attr: shared receive queue attributes
+ * @udata: user data
+ *
+ * @return: the ib_srq pointer on success, otherwise returns an errno.
+ */
+struct ib_srq *pvrdma_create_srq(struct ib_pd *pd,
+				 struct ib_srq_init_attr *init_attr,
+				 struct ib_udata *udata)
+{
+	struct pvrdma_srq *srq = NULL;
+	struct pvrdma_dev *dev = to_vdev(pd->device);
+	union pvrdma_cmd_req req;
+	union pvrdma_cmd_resp rsp;
+	struct pvrdma_cmd_create_srq *cmd = &req.create_srq;
+	struct pvrdma_cmd_create_srq_resp *resp = &rsp.create_srq_resp;
+	struct pvrdma_create_srq ucmd;
+	unsigned long flags;
+	int ret;
+
+	if (!(pd->uobject && udata)) {
+		/* No support for kernel clients. */
+		dev_warn(&dev->pdev->dev,
+			 "no shared receive queue support for kernel client\n");
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	if (init_attr->srq_type != IB_SRQT_BASIC) {
+		dev_warn(&dev->pdev->dev,
+			 "shared receive queue type %d not supported\n",
+			 init_attr->srq_type);
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (init_attr->attr.max_wr > dev->dsr->caps.max_srq_wr ||
+	    init_attr->attr.max_sge > dev->dsr->caps.max_srq_sge) {
+		dev_warn(&dev->pdev->dev,
+			 "shared receive queue size invalid\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (!atomic_add_unless(&dev->num_srqs, 1, dev->dsr->caps.max_srq))
+		return ERR_PTR(-ENOMEM);
+
+	srq = kmalloc(sizeof(*srq), GFP_KERNEL);
+	if (!srq) {
+		ret = -ENOMEM;
+		goto err_srq;
+	}
+
+	spin_lock_init(&srq->lock);
+	refcount_set(&srq->refcnt, 1);
+	init_waitqueue_head(&srq->wait);
+
+	dev_dbg(&dev->pdev->dev,
+		"create shared receive queue from user space\n");
+
+	if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
+		ret = -EFAULT;
+		goto err_srq;
+	}
+
+	srq->umem = ib_umem_get(pd->uobject->context,
+				ucmd.buf_addr,
+				ucmd.buf_size, 0, 0);
+	if (IS_ERR(srq->umem)) {
+		ret = PTR_ERR(srq->umem);
+		goto err_srq;
+	}
+
+	srq->npages = ib_umem_page_count(srq->umem);
+
+	if (srq->npages < 0 || srq->npages > PVRDMA_PAGE_DIR_MAX_PAGES) {
+		dev_warn(&dev->pdev->dev,
+			 "overflow pages in shared receive queue\n");
+		ret = -EINVAL;
+		goto err_umem;
+	}
+
+	ret = pvrdma_page_dir_init(dev, &srq->pdir, srq->npages, false);
+	if (ret) {
+		dev_warn(&dev->pdev->dev,
+			 "could not allocate page directory\n");
+		goto err_umem;
+	}
+
+	pvrdma_page_dir_insert_umem(&srq->pdir, srq->umem, 0);
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->hdr.cmd = PVRDMA_CMD_CREATE_SRQ;
+	cmd->srq_type = init_attr->srq_type;
+	cmd->nchunks = srq->npages;
+	cmd->pd_handle = to_vpd(pd)->pd_handle;
+	cmd->attrs.max_wr = init_attr->attr.max_wr;
+	cmd->attrs.max_sge = init_attr->attr.max_sge;
+	cmd->attrs.srq_limit = init_attr->attr.srq_limit;
+	cmd->pdir_dma = srq->pdir.dir_dma;
+
+	ret = pvrdma_cmd_post(dev, &req, &rsp, PVRDMA_CMD_CREATE_SRQ_RESP);
+	if (ret < 0) {
+		dev_warn(&dev->pdev->dev,
+			 "could not create shared receive queue, error: %d\n",
+			 ret);
+		goto err_page_dir;
+	}
+
+	srq->srq_handle = resp->srqn;
+	spin_lock_irqsave(&dev->srq_tbl_lock, flags);
+	dev->srq_tbl[srq->srq_handle % dev->dsr->caps.max_srq] = srq;
+	spin_unlock_irqrestore(&dev->srq_tbl_lock, flags);
+
+	/* Copy udata back. */
+	if (ib_copy_to_udata(udata, &srq->srq_handle, sizeof(__u32))) {
+		dev_warn(&dev->pdev->dev, "failed to copy back udata\n");
+		pvrdma_destroy_srq(&srq->ibsrq);
+		return ERR_PTR(-EINVAL);
+	}
+
+	return &srq->ibsrq;
+
+err_page_dir:
+	pvrdma_page_dir_cleanup(dev, &srq->pdir);
+err_umem:
+	ib_umem_release(srq->umem);
+err_srq:
+	kfree(srq);
+	atomic_dec(&dev->num_srqs);
+
+	return ERR_PTR(ret);
+}
+
+static void pvrdma_free_srq(struct pvrdma_dev *dev, struct pvrdma_srq *srq)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->srq_tbl_lock, flags);
+	dev->srq_tbl[srq->srq_handle] = NULL;
+	spin_unlock_irqrestore(&dev->srq_tbl_lock, flags);
+
+	refcount_dec(&srq->refcnt);
+	wait_event(srq->wait, !refcount_read(&srq->refcnt));
+
+	/* There is no support for kernel clients, so this is safe. */
+	ib_umem_release(srq->umem);
+
+	pvrdma_page_dir_cleanup(dev, &srq->pdir);
+
+	kfree(srq);
+
+	atomic_dec(&dev->num_srqs);
+}
+
+/**
+ * pvrdma_destroy_srq - destroy shared receive queue
+ * @srq: the shared receive queue to destroy
+ *
+ * @return: 0 for success.
+ */
+int pvrdma_destroy_srq(struct ib_srq *srq)
+{
+	struct pvrdma_srq *vsrq = to_vsrq(srq);
+	union pvrdma_cmd_req req;
+	struct pvrdma_cmd_destroy_srq *cmd = &req.destroy_srq;
+	struct pvrdma_dev *dev = to_vdev(srq->device);
+	int ret;
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->hdr.cmd = PVRDMA_CMD_DESTROY_SRQ;
+	cmd->srq_handle = vsrq->srq_handle;
+
+	ret = pvrdma_cmd_post(dev, &req, NULL, 0);
+	if (ret < 0)
+		dev_warn(&dev->pdev->dev,
+			 "destroy shared receive queue failed, error: %d\n",
+			 ret);
+
+	pvrdma_free_srq(dev, vsrq);
+
+	return 0;
+}
+
+/**
+ * pvrdma_modify_srq - modify shared receive queue attributes
+ * @ibsrq: the shared receive queue to modify
+ * @attr: the shared receive queue's new attributes
+ * @attr_mask: attributes mask
+ * @udata: user data
+ *
+ * @returns 0 on success, otherwise returns an errno.
+ */
+int pvrdma_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
+		      enum ib_srq_attr_mask attr_mask, struct ib_udata *udata)
+{
+	struct pvrdma_srq *vsrq = to_vsrq(ibsrq);
+	union pvrdma_cmd_req req;
+	struct pvrdma_cmd_modify_srq *cmd = &req.modify_srq;
+	struct pvrdma_dev *dev = to_vdev(ibsrq->device);
+	int ret;
+
+	/* Only support SRQ limit. */
+	if (!(attr_mask & IB_SRQ_LIMIT))
+		return -EINVAL;
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->hdr.cmd = PVRDMA_CMD_MODIFY_SRQ;
+	cmd->srq_handle = vsrq->srq_handle;
+	cmd->attrs.srq_limit = attr->srq_limit;
+	cmd->attr_mask = attr_mask;
+
+	ret = pvrdma_cmd_post(dev, &req, NULL, 0);
+	if (ret < 0) {
+		dev_warn(&dev->pdev->dev,
+			 "could not modify shared receive queue, error: %d\n",
+			 ret);
+
+		return -EINVAL;
+	}
+
+	return ret;
+}
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index 48776f5..16b9661 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -85,6 +85,9 @@ int pvrdma_query_device(struct ib_device *ibdev,
 	props->max_sge = dev->dsr->caps.max_sge;
 	props->max_sge_rd = PVRDMA_GET_CAP(dev, dev->dsr->caps.max_sge,
 					   dev->dsr->caps.max_sge_rd);
+	props->max_srq = dev->dsr->caps.max_srq;
+	props->max_srq_wr = dev->dsr->caps.max_srq_wr;
+	props->max_srq_sge = dev->dsr->caps.max_srq_sge;
 	props->max_cq = dev->dsr->caps.max_cq;
 	props->max_cqe = dev->dsr->caps.max_cqe;
 	props->max_mr = dev->dsr->caps.max_mr;
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
index 002a9b0..b7b2572 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
@@ -324,6 +324,13 @@ enum pvrdma_mw_type {
 	PVRDMA_MW_TYPE_2 = 2,
 };
 
+struct pvrdma_srq_attr {
+	u32 max_wr;
+	u32 max_sge;
+	u32 srq_limit;
+	u32 reserved;
+};
+
 struct pvrdma_qp_attr {
 	enum pvrdma_qp_state qp_state;
 	enum pvrdma_qp_state cur_qp_state;
@@ -420,6 +427,17 @@ int pvrdma_resize_cq(struct ib_cq *ibcq, int entries,
 struct ib_ah *pvrdma_create_ah(struct ib_pd *pd, struct rdma_ah_attr *ah_attr,
 			       struct ib_udata *udata);
 int pvrdma_destroy_ah(struct ib_ah *ah);
+
+struct ib_srq *pvrdma_create_srq(struct ib_pd *pd,
+				 struct ib_srq_init_attr *init_attr,
+				 struct ib_udata *udata);
+int pvrdma_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
+		      enum ib_srq_attr_mask attr_mask, struct ib_udata *udata);
+int pvrdma_query_srq(struct ib_srq *srq, struct ib_srq_attr *srq_attr);
+int pvrdma_destroy_srq(struct ib_srq *srq);
+int pvrdma_post_srq_recv(struct ib_srq *ibsrq, struct ib_recv_wr *wr,
+			 struct ib_recv_wr **bad_wr);
+
 struct ib_qp *pvrdma_create_qp(struct ib_pd *pd,
 			       struct ib_qp_init_attr *init_attr,
 			       struct ib_udata *udata);
diff --git a/include/uapi/rdma/vmw_pvrdma-abi.h b/include/uapi/rdma/vmw_pvrdma-abi.h
index c6569b0..846c6f4 100644
--- a/include/uapi/rdma/vmw_pvrdma-abi.h
+++ b/include/uapi/rdma/vmw_pvrdma-abi.h
@@ -158,6 +158,8 @@ struct pvrdma_resize_cq {
 
 struct pvrdma_create_srq {
 	__u64 buf_addr;
+	__u32 buf_size;
+	__u32 reserved;
};

struct pvrdma_create_srq_resp {
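
Since the patch exposes SRQs only to userspace, a consumer would exercise these
verbs through the standard libibverbs API rather than the kernel entry points.
The following is a minimal sketch, not part of the patch; the device choice
(the first one enumerated) and the queue sizes are illustrative, and it assumes
a machine with an RDMA device (e.g. a VM with a PVRDMA adapter) and libibverbs
installed (link with -libverbs):

```c
/* Illustrative userspace SRQ consumer; sizes are arbitrary and must stay
 * within the device caps (here: caps.max_srq_wr / caps.max_srq_sge). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **dev_list;
	struct ibv_context *ctx;
	struct ibv_pd *pd;
	struct ibv_srq *srq;
	struct ibv_srq_init_attr srq_init_attr = {
		.attr = {
			.max_wr  = 64,	/* receive WRs shared by the QPs */
			.max_sge = 1,
		},
	};

	dev_list = ibv_get_device_list(NULL);
	if (!dev_list || !dev_list[0]) {
		fprintf(stderr, "no RDMA device found\n");
		return 1;
	}

	ctx = ibv_open_device(dev_list[0]);
	pd = ibv_alloc_pd(ctx);

	/* Reaches pvrdma_create_srq() via IB_USER_VERBS_CMD_CREATE_SRQ. */
	srq = ibv_create_srq(pd, &srq_init_attr);
	if (!srq) {
		perror("ibv_create_srq");
		return 1;
	}

	/* QPs created with ibv_qp_init_attr.srq = srq now pull receive
	 * buffers from this shared queue; receives are posted with
	 * ibv_post_srq_recv() instead of ibv_post_recv(). */

	ibv_destroy_srq(srq);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(dev_list);
	return 0;
}
```

This matches the restriction in the patch: the in-kernel pvrdma_post_srq_recv()
returns -EOPNOTSUPP, so only userspace consumers like the above can use the SRQ.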