From patchwork Tue Apr 19 19:51:28 2016
X-Patchwork-Submitter: Daniel Jurgens
X-Patchwork-Id: 8883031
From: Dan Jurgens
To: selinux@tycho.nsa.gov, linux-security-module@vger.kernel.org,
	linux-rdma@vger.kernel.org
Cc: yevgenyp@mellanox.com, Daniel Jurgens
Subject: [RFC PATCH v3 11/12] ib/core: Enforce Infiniband device SMI security
Date: Tue, 19 Apr 2016 22:51:28 +0300
Message-Id: <1461095489-18732-12-git-send-email-danielj@mellanox.com>
In-Reply-To: <1461095489-18732-1-git-send-email-danielj@mellanox.com>
References: <1461095489-18732-1-git-send-email-danielj@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Daniel Jurgens

During MAD and snoop agent registration for SMI QPs, check that the calling
process has permission to access the SMI. When sending and receiving MADs,
check that the agent still has access to the SMI if it is bound to an SMI QP.
Because the security policy can change, it is possible that permission was
granted when the agent was created but is no longer allowed.

Signed-off-by: Daniel Jurgens
Reviewed-by: Eli Cohen
Reviewed-by: Leon Romanovsky
---
v2: Added patch to series.

v3:
1. Changed ibdev and smi to endport and smp respectively.
   Jason
---
 drivers/infiniband/core/mad.c |   44 ++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 43 insertions(+), 1 deletions(-)

diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index f76a409..d749b74 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -349,6 +349,16 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
 			goto error3;
 	}
 
+	if (qp_type == IB_QPT_SMI) {
+		ret2 = security_ib_end_port_smp(device->name,
+						port_num,
+						&mad_agent_priv->agent);
+		if (ret2) {
+			ret = ERR_PTR(ret2);
+			goto error4;
+		}
+	}
+
 	if (mad_reg_req) {
 		reg_req = kmemdup(mad_reg_req, sizeof *reg_req, GFP_KERNEL);
 		if (!reg_req) {
@@ -535,6 +545,17 @@ struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
 		goto error2;
 	}
 
+	if (qp_type == IB_QPT_SMI) {
+		err = security_ib_end_port_smp(device->name,
+					       port_num,
+					       &mad_snoop_priv->agent);
+
+		if (err) {
+			ret = ERR_PTR(err);
+			goto error3;
+		}
+	}
+
 	/* Now, fill in the various structures */
 	mad_snoop_priv->qp_info = &port_priv->qp_info[qpn];
 	mad_snoop_priv->agent.device = device;
@@ -1248,6 +1269,7 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
 
 	/* Walk list of send WRs and post each on send list */
 	for (; send_buf; send_buf = next_send_buf) {
+		int err = 0;
 
 		mad_send_wr = container_of(send_buf,
 					   struct ib_mad_send_wr_private,
@@ -1255,6 +1277,17 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
 		mad_agent_priv = mad_send_wr->mad_agent_priv;
 		pkey_index = mad_send_wr->send_wr.pkey_index;
 
+		if (mad_agent_priv->agent.qp->qp_type == IB_QPT_SMI)
+			err = security_ib_end_port_smp(
+				mad_agent_priv->agent.device->name,
+				mad_agent_priv->agent.port_num,
+				&mad_agent_priv->agent);
+
+		if (err) {
+			ret = err;
+			goto error;
+		}
+
 		ret = ib_security_ma_pkey_access(mad_agent_priv->agent.device,
 						 mad_agent_priv->agent.port_num,
 						 pkey_index,
@@ -1996,7 +2029,16 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
 	struct ib_mad_send_wr_private *mad_send_wr;
 	struct ib_mad_send_wc mad_send_wc;
 	unsigned long flags;
-	int ret;
+	int ret = 0;
+
+	if (mad_agent_priv->agent.qp->qp_type == IB_QPT_SMI)
+		ret = security_ib_end_port_smp(
+			mad_agent_priv->agent.device->name,
+			mad_agent_priv->agent.port_num,
+			&mad_agent_priv->agent);
+
+	if (ret)
+		goto security_error;
 
 	ret = ib_security_ma_pkey_access(mad_agent_priv->agent.device,
 					 mad_agent_priv->agent.port_num,
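
Both data-path hunks above (in ib_post_send_mad() and ib_mad_complete_recv())
repeat the same pattern: skip the check for non-SMI QPs, otherwise ask
security_ib_end_port_smp(), the wrapper added earlier in this series, for a
fresh decision so that a policy change made after registration is enforced on
the very next send or receive. The sketch below factors that pattern into one
helper purely for illustration; the helper name ib_mad_agent_smp_check() is
hypothetical, and the security_ib_end_port_smp() prototype is inferred from
its call sites in this patch rather than taken from the series' headers.

	/*
	 * Sketch only, not part of this patch: the recheck-at-use pattern
	 * repeated by the two data-path hunks above.  struct
	 * ib_mad_agent_private comes from drivers/infiniband/core/mad_priv.h;
	 * the security_ib_end_port_smp() prototype here is an assumption
	 * based on how this patch calls it.
	 */
	#include <rdma/ib_mad.h>
	#include "mad_priv.h"

	extern int security_ib_end_port_smp(const char *dev_name, u8 port_num,
					    struct ib_mad_agent *agent);

	static int ib_mad_agent_smp_check(struct ib_mad_agent_private *mad_agent_priv)
	{
		/* Only QP0 (SMI) traffic is subject to the end-port SMP check. */
		if (mad_agent_priv->agent.qp->qp_type != IB_QPT_SMI)
			return 0;

		/*
		 * Re-evaluate on every send/receive rather than caching the
		 * registration-time result, so a policy change takes effect
		 * immediately for agents that already exist.
		 */
		return security_ib_end_port_smp(mad_agent_priv->agent.device->name,
						mad_agent_priv->agent.port_num,
						&mad_agent_priv->agent);
	}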