From patchwork Thu May 16 18:27:06 2019
X-Patchwork-Submitter: Adit Ranadive
X-Patchwork-Id: 10946987
X-Patchwork-Delegate: jgg@ziepe.ca
From: Adit Ranadive
To: jgg@mellanox.com, dledford@redhat.com
Cc: Bryan Tan, linux-rdma@vger.kernel.org, Pv-drivers, Adit Ranadive
Subject: [PATCH rdma-core] vmw_pvrdma: Use resource ids from physical device if available
Date: Thu, 16 May 2019 18:27:06 +0000
Message-ID: <1558031213-14219-1-git-send-email-aditr@vmware.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Bryan Tan

This change allows user-space to use physical resource numbers if they are passed
up from the driver. Doing so allows communication with physical non-ESX
endpoints (such as a bare-metal Linux machine or an SR-IOV-enabled VM).

This is accomplished by separating the concept of the QP number from the
QP handle. Previously, the two were the same, as the QP number was
exposed to the guest and also used to reference a virtual QP in the
device backend. With physical resource numbers exposed, the QP number
given to the guest is the QP number assigned to the physical HCA's QP,
while the QP handle is still the internal handle used to reference a
virtual QP. Regardless of whether the device is exposing physical ids,
the driver will still try to pick up the QP handle from the backend if
possible.

The MR keys exposed to the guest when the physical resource ids feature
is turned on are likewise now the MR keys created by the physical HCA,
instead of virtual MR keys.

The ABI has been updated to allow the return of the QP handle to the
guest library. The ABI version has been bumped because of this
non-compatible change.

Reviewed-by: Jorgen Hansen
Signed-off-by: Adit Ranadive
Signed-off-by: Bryan Tan
---
 kernel-headers/rdma/vmw_pvrdma-abi.h | 11 ++++++++++-
 providers/vmw_pvrdma/pvrdma-abi.h    |  4 +++-
 providers/vmw_pvrdma/pvrdma.h        |  1 +
 providers/vmw_pvrdma/pvrdma_main.c   |  4 ++--
 providers/vmw_pvrdma/qp.c            | 31 ++++++++++++++++++-----------
 5 files changed, 36 insertions(+), 15 deletions(-)

---
Github PR: https://github.com/linux-rdma/rdma-core/pull/531
---

diff --git a/kernel-headers/rdma/vmw_pvrdma-abi.h b/kernel-headers/rdma/vmw_pvrdma-abi.h
index 6e73f0274e41..8c388d623e5c 100644
--- a/kernel-headers/rdma/vmw_pvrdma-abi.h
+++ b/kernel-headers/rdma/vmw_pvrdma-abi.h
@@ -49,7 +49,11 @@
 
 #include <linux/types.h>
 
-#define PVRDMA_UVERBS_ABI_VERSION	3	/* ABI Version. */
+#define PVRDMA_UVERBS_MIN_ABI_VERSION	3
+#define PVRDMA_UVERBS_MAX_ABI_VERSION	4
+
+#define PVRDMA_UVERBS_NO_QP_HANDLE_ABI_VERSION	3
+
 #define PVRDMA_UAR_HANDLE_MASK	0x00FFFFFF	/* Bottom 24 bits. */
 #define PVRDMA_UAR_QP_OFFSET	0		/* QP doorbell. */
 #define PVRDMA_UAR_QP_SEND	(1 << 30)	/* Send bit. */
@@ -179,6 +183,11 @@ struct pvrdma_create_qp {
 	__aligned_u64 qp_addr;
 };
 
+struct pvrdma_create_qp_resp {
+	__u32 qpn;
+	__u32 qp_handle;
+};
+
 /* PVRDMA masked atomic compare and swap */
 struct pvrdma_ex_cmp_swap {
 	__aligned_u64 swap_val;
diff --git a/providers/vmw_pvrdma/pvrdma-abi.h b/providers/vmw_pvrdma/pvrdma-abi.h
index 77db9ddd1bb7..9775925f8555 100644
--- a/providers/vmw_pvrdma/pvrdma-abi.h
+++ b/providers/vmw_pvrdma/pvrdma-abi.h
@@ -54,8 +54,10 @@ DECLARE_DRV_CMD(user_pvrdma_alloc_pd, IB_USER_VERBS_CMD_ALLOC_PD,
 		empty, pvrdma_alloc_pd_resp);
 DECLARE_DRV_CMD(user_pvrdma_create_cq, IB_USER_VERBS_CMD_CREATE_CQ,
 		pvrdma_create_cq, pvrdma_create_cq_resp);
-DECLARE_DRV_CMD(user_pvrdma_create_qp, IB_USER_VERBS_CMD_CREATE_QP,
+DECLARE_DRV_CMD(user_pvrdma_create_qp_v3, IB_USER_VERBS_CMD_CREATE_QP,
 		pvrdma_create_qp, empty);
+DECLARE_DRV_CMD(user_pvrdma_create_qp, IB_USER_VERBS_CMD_CREATE_QP,
+		pvrdma_create_qp, pvrdma_create_qp_resp);
 DECLARE_DRV_CMD(user_pvrdma_create_srq, IB_USER_VERBS_CMD_CREATE_SRQ,
 		pvrdma_create_srq, pvrdma_create_srq_resp);
 DECLARE_DRV_CMD(user_pvrdma_alloc_ucontext, IB_USER_VERBS_CMD_GET_CONTEXT,
diff --git a/providers/vmw_pvrdma/pvrdma.h b/providers/vmw_pvrdma/pvrdma.h
index ebd50ce1c3cd..b67c07e94f90 100644
--- a/providers/vmw_pvrdma/pvrdma.h
+++ b/providers/vmw_pvrdma/pvrdma.h
@@ -170,6 +170,7 @@ struct pvrdma_qp {
 	struct pvrdma_wq sq;
 	struct pvrdma_wq rq;
 	int is_srq;
+	uint32_t qp_handle;
 };
 
 struct pvrdma_ah {
diff --git a/providers/vmw_pvrdma/pvrdma_main.c b/providers/vmw_pvrdma/pvrdma_main.c
index 52a2de22d44c..616310ae45c5 100644
--- a/providers/vmw_pvrdma/pvrdma_main.c
+++ b/providers/vmw_pvrdma/pvrdma_main.c
@@ -201,8 +201,8 @@ static const struct verbs_match_ent hca_table[] = {
 
 static const struct verbs_device_ops pvrdma_dev_ops = {
 	.name = "pvrdma",
-	.match_min_abi_version = PVRDMA_UVERBS_ABI_VERSION,
-	.match_max_abi_version = PVRDMA_UVERBS_ABI_VERSION,
+	.match_min_abi_version = PVRDMA_UVERBS_MIN_ABI_VERSION,
+	.match_max_abi_version = PVRDMA_UVERBS_MAX_ABI_VERSION,
 	.match_table = hca_table,
 	.alloc_device = pvrdma_device_alloc,
 	.uninit_device = pvrdma_uninit_device,
diff --git a/providers/vmw_pvrdma/qp.c b/providers/vmw_pvrdma/qp.c
index ef429db93a43..a173d441df0d 100644
--- a/providers/vmw_pvrdma/qp.c
+++ b/providers/vmw_pvrdma/qp.c
@@ -211,9 +211,9 @@ struct ibv_qp *pvrdma_create_qp(struct ibv_pd *pd,
 {
 	struct pvrdma_device *dev = to_vdev(pd->context->device);
 	struct user_pvrdma_create_qp cmd;
-	struct ib_uverbs_create_qp_resp resp;
+	struct user_pvrdma_create_qp_resp resp;
+	struct user_pvrdma_create_qp_v3_resp resp_v3;
 	struct pvrdma_qp *qp;
-	int ret;
 	int is_srq = !!(attr->srq);
 
 	attr->cap.max_send_sge = max_t(uint32_t, 1U, attr->cap.max_send_sge);
@@ -282,14 +282,23 @@ struct ibv_qp *pvrdma_create_qp(struct ibv_pd *pd,
 	cmd.rbuf_size = qp->rbuf.length;
 	cmd.qp_addr = (uintptr_t) qp;
 
-	ret = ibv_cmd_create_qp(pd, &qp->ibv_qp, attr,
-				&cmd.ibv_cmd, sizeof(cmd),
-				&resp, sizeof(resp));
+	if (dev->abi_version <= PVRDMA_UVERBS_NO_QP_HANDLE_ABI_VERSION) {
+		if (ibv_cmd_create_qp(pd, &qp->ibv_qp, attr, &cmd.ibv_cmd,
+				      sizeof(cmd), &resp_v3.ibv_resp,
+				      sizeof(resp_v3.ibv_resp)))
+			goto err_free;
 
-	if (ret)
-		goto err_free;
+		qp->qp_handle = qp->ibv_qp.qp_num;
+	} else {
+		if (ibv_cmd_create_qp(pd, &qp->ibv_qp, attr, &cmd.ibv_cmd,
+				      sizeof(cmd), &resp.ibv_resp,
+				      sizeof(resp)))
+			goto err_free;
+
+		qp->qp_handle = resp.drv_payload.qp_handle;
+	}
 
-	to_vctx(pd->context)->qp_tbl[qp->ibv_qp.qp_num & 0xFFFF] = qp;
+	to_vctx(pd->context)->qp_tbl[qp->qp_handle & 0xFFFF] = qp;
 
 	/* If set, each WR submitted to the SQ generate a completion entry */
 	if (attr->sq_sig_all)
@@ -414,7 +423,7 @@ int pvrdma_destroy_qp(struct ibv_qp *ibqp)
 	free(qp->rq.wrid);
 	pvrdma_free_buf(&qp->rbuf);
 	pvrdma_free_buf(&qp->sbuf);
-	ctx->qp_tbl[ibqp->qp_num & 0xFFFF] = NULL;
+	ctx->qp_tbl[qp->qp_handle & 0xFFFF] = NULL;
 
 	free(qp);
 
 	return 0;
@@ -547,7 +556,7 @@ out:
 	if (nreq) {
 		udma_to_device_barrier();
 		pvrdma_write_uar_qp(ctx->uar,
-				    PVRDMA_UAR_QP_SEND | ibqp->qp_num);
+				    PVRDMA_UAR_QP_SEND | qp->qp_handle);
 	}
 
 	pthread_spin_unlock(&qp->sq.lock);
@@ -630,7 +639,7 @@ int pvrdma_post_recv(struct ibv_qp *ibqp, struct ibv_recv_wr *wr,
 out:
 	if (nreq)
 		pvrdma_write_uar_qp(ctx->uar,
-				    PVRDMA_UAR_QP_RECV | ibqp->qp_num);
+				    PVRDMA_UAR_QP_RECV | qp->qp_handle);
 
 	pthread_spin_unlock(&qp->rq.lock);
 
 	return ret;