From patchwork Wed Jun 15 06:28:03 2011
X-Patchwork-Submitter: "Hefty, Sean"
X-Patchwork-Id: 881222
From: "Hefty, Sean"
To: "linux-rdma (linux-rdma@vger.kernel.org)"
Subject: [PATCH 5/17] rdma/verbs: Cleanup XRC TGT QPs when destroying XRCD
Date: Wed, 15 Jun 2011 06:28:03 +0000
Message-ID: <1828884A29C6694DAF28B7E6B8A82373021012@ORSMSX101.amr.corp.intel.com>
XRC TGT QPs are intended to be shared among multiple users and processes.
Allow an XRC TGT QP to be destroyed either explicitly, through
ib_destroy_qp(), or implicitly, when its XRCD is destroyed.

To support destroying an XRC TGT QP when the XRCD is destroyed, we need
to track TGT QPs with the XRCD.  When the XRCD is destroyed, all tracked
XRC TGT QPs are also cleaned up.

To avoid stale reference issues, if a user is holding a reference on a
TGT QP, we increment a reference count on the XRCD.  The user releases
the reference by calling ib_release_qp().  This removes any access to
the QP from above the verbs layer, but allows the QP to continue to
exist until it is destroyed by the XRCD.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
---
 drivers/infiniband/core/verbs.c |   48 ++++++++++++++++++++++++++++++++++++++-
 include/rdma/ib_verbs.h         |   17 +++++++++++++-
 2 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 52d36c8..5702e02 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -316,6 +316,20 @@ EXPORT_SYMBOL(ib_destroy_srq);
 
 /* Queue pairs */
 
+static void __ib_insert_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
+{
+	mutex_lock(&xrcd->tgt_qp_mutex);
+	list_add(&qp->xrcd_list, &xrcd->tgt_qp_list);
+	mutex_unlock(&xrcd->tgt_qp_mutex);
+}
+
+static void __ib_remove_xrcd_qp(struct ib_xrcd *xrcd, struct ib_qp *qp)
+{
+	mutex_lock(&xrcd->tgt_qp_mutex);
+	list_del(&qp->xrcd_list);
+	mutex_unlock(&xrcd->tgt_qp_mutex);
+}
+
 struct ib_qp *ib_create_qp(struct ib_pd *pd,
 			   struct ib_qp_init_attr *qp_init_attr)
 {
@@ -334,6 +348,7 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
 			qp->srq = NULL;
 			qp->xrcd = qp_init_attr->xrcd;
 			atomic_inc(&qp_init_attr->xrcd->usecnt);
+			__ib_insert_xrcd_qp(qp_init_attr->xrcd, qp);
 		} else {
 			if (qp_init_attr->qp_type == IB_QPT_XRC_INI) {
 				qp->recv_cq = NULL;
@@ -728,7 +743,8 @@ int ib_destroy_qp(struct ib_qp *qp)
 	scq = qp->send_cq;
 	rcq = qp->recv_cq;
 	srq = qp->srq;
-	xrcd = qp->xrcd;
+	if ((xrcd = qp->xrcd))
+		__ib_remove_xrcd_qp(xrcd, qp);
 
 	ret = qp->device->destroy_qp(qp);
 	if (!ret) {
@@ -742,12 +758,30 @@ int ib_destroy_qp(struct ib_qp *qp)
 			atomic_dec(&srq->usecnt);
 		if (xrcd)
 			atomic_dec(&xrcd->usecnt);
+	} else if (xrcd) {
+		__ib_insert_xrcd_qp(xrcd, qp);
 	}
 
 	return ret;
 }
 EXPORT_SYMBOL(ib_destroy_qp);
 
+int ib_release_qp(struct ib_qp *qp)
+{
+	unsigned long flags;
+
+	if (qp->qp_type != IB_QPT_XRC_TGT)
+		return -EINVAL;
+
+	spin_lock_irqsave(&qp->device->event_handler_lock, flags);
+	qp->event_handler = NULL;
+	spin_unlock_irqrestore(&qp->device->event_handler_lock, flags);
+
+	atomic_dec(&qp->xrcd->usecnt);
+	return 0;
+}
+EXPORT_SYMBOL(ib_release_qp);
+
 /* Completion queues */
 
 struct ib_cq *ib_create_cq(struct ib_device *device,
@@ -1061,6 +1095,8 @@ struct ib_xrcd *ib_alloc_xrcd(struct ib_device *device)
 	if (!IS_ERR(xrcd)) {
 		xrcd->device = device;
 		atomic_set(&xrcd->usecnt, 0);
+		mutex_init(&xrcd->tgt_qp_mutex);
+		INIT_LIST_HEAD(&xrcd->tgt_qp_list);
 	}
 
 	return xrcd;
@@ -1069,9 +1105,19 @@ EXPORT_SYMBOL(ib_alloc_xrcd);
 
 int ib_dealloc_xrcd(struct ib_xrcd *xrcd)
 {
+	struct ib_qp *qp;
+	int ret;
+
 	if (atomic_read(&xrcd->usecnt))
 		return -EBUSY;
 
+	while (!list_empty(&xrcd->tgt_qp_list)) {
+		qp = list_entry(xrcd->tgt_qp_list.next, struct ib_qp, xrcd_list);
+		ret = ib_destroy_qp(qp);
+		if (ret)
+			return ret;
+	}
+
 	return xrcd->device->dealloc_xrcd(xrcd);
 }
 EXPORT_SYMBOL(ib_dealloc_xrcd);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 30c67cc..d916a84 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -877,7 +877,10 @@ struct ib_pd {
 
 struct ib_xrcd {
 	struct ib_device       *device;
-	atomic_t		usecnt; /* count all resources */
+	atomic_t		usecnt; /* count all exposed resources */
+
+	struct mutex		tgt_qp_mutex;
+	struct list_head	tgt_qp_list;
 };
 
 struct ib_ah {
@@ -923,6 +926,7 @@ struct ib_qp {
 	struct ib_cq	       *recv_cq;
 	struct ib_srq	       *srq;
 	struct ib_xrcd	       *xrcd; /* XRC TGT QPs only */
+	struct list_head	xrcd_list;
 	struct ib_uobject      *uobject;
 	void                  (*event_handler)(struct ib_event *, void *);
 	void		       *qp_context;
@@ -1479,6 +1483,17 @@ int ib_query_qp(struct ib_qp *qp,
 int ib_destroy_qp(struct ib_qp *qp);
 
 /**
+ * ib_release_qp - Release an external reference to a QP.
+ * @qp: The QP handle to release
+ *
+ * The specified QP handle is released by the caller.  If the QP is
+ * referenced internally, it is not destroyed until all internal
+ * references are released.  After releasing the qp, the caller
+ * can no longer access it and all events on the QP are discarded.
+ */
+int ib_release_qp(struct ib_qp *qp);
+
+/**
  * ib_post_send - Posts a list of work requests to the send queue of
  * the specified QP.
  * @qp: The QP to post the work request on.