From patchwork Wed Oct 14 08:29:48 2015
X-Patchwork-Submitter: Haggai Eran
X-Patchwork-Id: 7391331
From: Haggai Eran
To: Doug Ledford
Cc: linux-rdma@vger.kernel.org, Haggai Eran, Jason Gunthorpe,
 Hal Rosenstock, Sean Hefty, Or Gerlitz, Eli Cohen
Subject: [PATCH 6/6] IB/mad: P_Key change event handler
Date: Wed, 14 Oct 2015 11:29:48 +0300
Message-Id: <1444811388-22486-7-git-send-email-haggaie@mellanox.com>
X-Mailer: git-send-email 1.7.11.2
In-Reply-To: <1444811388-22486-1-git-send-email-haggaie@mellanox.com>
References: <1444811388-22486-1-git-send-email-haggaie@mellanox.com>
Add a device event handler to capture P_Key table change events. For
devices that don't support setting the P_Key index per work request,
update the per-P_Key QP table in the MAD layer, creating QPs as needed.

The code currently doesn't destroy created QPs when their P_Keys are
cleared from the table; this can be added later on.

Signed-off-by: Haggai Eran
---
 drivers/infiniband/core/mad.c      | 51 ++++++++++++++++++++++++++++++++++++--
 drivers/infiniband/core/mad_priv.h |  2 ++
 2 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 02977942574c..a350b4117cb3 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -3153,6 +3153,28 @@ static void srq_event_handler(struct ib_event *event, void *srq_context)
 		event->event, qp_num);
 }
 
+static void device_event_handler(struct ib_event_handler *handler,
+				 struct ib_event *event)
+{
+	struct ib_mad_port_private *port_priv =
+		container_of(handler, struct ib_mad_port_private,
+			     event_handler);
+
+	if (event->element.port_num != port_priv->port_num)
+		return;
+
+	dev_dbg(&port_priv->device->dev, "ib_mad: event %s on port %d\n",
+		ib_event_msg(event->event), event->element.port_num);
+
+	switch (event->event) {
+	case IB_EVENT_PKEY_CHANGE:
+		queue_work(port_priv->wq, &port_priv->pkey_change_work);
+		break;
+	default:
+		break;
+	}
+}
+
 static void init_mad_queue(struct ib_mad_qp_info *qp_info,
 			   struct ib_mad_queue *mad_queue)
 {
@@ -3306,6 +3328,15 @@ static int update_pkey_table(struct ib_mad_qp_info *qp_info)
 	return 0;
 }
 
+static void pkey_change_handler(struct work_struct *work)
+{
+	struct ib_mad_port_private *port_priv =
+		container_of(work, struct ib_mad_port_private,
+			     pkey_change_work);
+
+	update_pkey_table(&port_priv->qp_info[1]);
+}
+
 static void destroy_mad_qp(struct ib_mad_qp_info *qp_info)
 {
 	u16 qp_index;
@@ -3453,6 +3484,17 @@ static int ib_mad_port_open(struct ib_device *device,
 	}
 	INIT_WORK(&port_priv->work, ib_mad_completion_handler);
 
+	if (device->gsi_pkey_index_in_qp) {
+		INIT_WORK(&port_priv->pkey_change_work, pkey_change_handler);
+		INIT_IB_EVENT_HANDLER(&port_priv->event_handler, device,
+				      device_event_handler);
+		ret = ib_register_event_handler(&port_priv->event_handler);
+		if (ret) {
+			dev_err(&device->dev, "Unable to register event handler for ib_mad\n");
+			goto error9;
+		}
+	}
+
 	spin_lock_irqsave(&ib_mad_port_list_lock, flags);
 	list_add_tail(&port_priv->port_list, &ib_mad_port_list);
 	spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
@@ -3460,16 +3502,19 @@ static int ib_mad_port_open(struct ib_device *device,
 	ret = ib_mad_port_start(port_priv);
 	if (ret) {
 		dev_err(&device->dev, "Couldn't start port\n");
-		goto error9;
+		goto error10;
 	}
 
 	return 0;
 
-error9:
+error10:
 	spin_lock_irqsave(&ib_mad_port_list_lock, flags);
 	list_del_init(&port_priv->port_list);
 	spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
+	if (device->gsi_pkey_index_in_qp)
+		ib_unregister_event_handler(&port_priv->event_handler);
+error9:
 	destroy_workqueue(port_priv->wq);
 error8:
 	destroy_mad_qp(&port_priv->qp_info[1]);
@@ -3507,6 +3552,8 @@ static int ib_mad_port_close(struct ib_device *device, int port_num)
 	list_del_init(&port_priv->port_list);
 	spin_unlock_irqrestore(&ib_mad_port_list_lock, flags);
 
+	if (device->gsi_pkey_index_in_qp)
+		ib_unregister_event_handler(&port_priv->event_handler);
 	destroy_workqueue(port_priv->wq);
 	destroy_mad_qp(&port_priv->qp_info[1]);
 	destroy_mad_qp(&port_priv->qp_info[0]);
diff --git a/drivers/infiniband/core/mad_priv.h b/drivers/infiniband/core/mad_priv.h
index 32b9532c7868..ee8003648d8a 100644
--- a/drivers/infiniband/core/mad_priv.h
+++ b/drivers/infiniband/core/mad_priv.h
@@ -211,6 +211,8 @@ struct ib_mad_port_private {
 	struct workqueue_struct *wq;
 	struct work_struct work;
 	struct ib_mad_qp_info qp_info[IB_MAD_QPS_CORE];
+	struct ib_event_handler event_handler;
+	struct work_struct pkey_change_work;
 };
 
 int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr);