From patchwork Thu Jun 23 19:52:57 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Daniel Jurgens <danielj@mellanox.com>
X-Patchwork-Id: 9195909
From: Dan Jurgens <danielj@mellanox.com>
To: chrisw@sous-sol.org, paul@paul-moore.com, sds@tycho.nsa.gov,
    eparis@parisplace.org, dledford@redhat.com, sean.hefty@intel.com,
    hal.rosenstock@gmail.com
Cc: selinux@tycho.nsa.gov, linux-security-module@vger.kernel.org,
    linux-rdma@vger.kernel.org, yevgenyp@mellanox.com,
    Daniel Jurgens <danielj@mellanox.com>
Subject: [PATCH 11/12] IB/core: Enforce Infiniband device SMI security
Date: Thu, 23 Jun 2016 22:52:57 +0300
Message-Id: <1466711578-64398-12-git-send-email-danielj@mellanox.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1466711578-64398-1-git-send-email-danielj@mellanox.com>
References: <1466711578-64398-1-git-send-email-danielj@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Daniel Jurgens <danielj@mellanox.com>

During MAD and snoop agent registration for SMI QPs, check that the
calling process has permission to access the SMI. When sending and
receiving MADs, check that the agent has access to the SMI if it is on
an SMI QP. Because the security policy can change, it is possible that
permission was granted when the agent was created but is no longer.
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Reviewed-by: Eli Cohen
Reviewed-by: Leon Romanovsky
---
 drivers/infiniband/core/mad.c | 44 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 975b472..684dcac 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -345,6 +345,16 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
 		goto error3;
 	}
 
+	if (qp_type == IB_QPT_SMI) {
+		ret2 = security_ib_end_port_smp(device->name,
+						port_num,
+						&mad_agent_priv->agent);
+		if (ret2) {
+			ret = ERR_PTR(ret2);
+			goto error4;
+		}
+	}
+
 	if (mad_reg_req) {
 		reg_req = kmemdup(mad_reg_req, sizeof *reg_req, GFP_KERNEL);
 		if (!reg_req) {
@@ -531,6 +541,17 @@ struct ib_mad_agent *ib_register_mad_snoop(struct ib_device *device,
 		goto error2;
 	}
 
+	if (qp_type == IB_QPT_SMI) {
+		err = security_ib_end_port_smp(device->name,
+					       port_num,
+					       &mad_snoop_priv->agent);
+
+		if (err) {
+			ret = ERR_PTR(err);
+			goto error3;
+		}
+	}
+
 	/* Now, fill in the various structures */
 	mad_snoop_priv->qp_info = &port_priv->qp_info[qpn];
 	mad_snoop_priv->agent.device = device;
@@ -1244,6 +1265,7 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
 
 	/* Walk list of send WRs and post each on send list */
 	for (; send_buf; send_buf = next_send_buf) {
+		int err = 0;
 
 		mad_send_wr = container_of(send_buf,
 					   struct ib_mad_send_wr_private,
@@ -1251,6 +1273,17 @@ int ib_post_send_mad(struct ib_mad_send_buf *send_buf,
 		mad_agent_priv = mad_send_wr->mad_agent_priv;
 		pkey_index = mad_send_wr->send_wr.pkey_index;
 
+		if (mad_agent_priv->agent.qp->qp_type == IB_QPT_SMI)
+			err = security_ib_end_port_smp(
+				mad_agent_priv->agent.device->name,
+				mad_agent_priv->agent.port_num,
+				&mad_agent_priv->agent);
+
+		if (err) {
+			ret = err;
+			goto error;
+		}
+
 		ret = ib_security_ma_pkey_access(mad_agent_priv->agent.device,
 						 mad_agent_priv->agent.port_num,
 						 pkey_index,
@@ -1992,7 +2025,16 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
 	struct ib_mad_send_wr_private *mad_send_wr;
 	struct ib_mad_send_wc mad_send_wc;
 	unsigned long flags;
-	int ret;
+	int ret = 0;
+
+	if (mad_agent_priv->agent.qp->qp_type == IB_QPT_SMI)
+		ret = security_ib_end_port_smp(
+			mad_agent_priv->agent.device->name,
+			mad_agent_priv->agent.port_num,
+			&mad_agent_priv->agent);
+
+	if (ret)
+		goto security_error;
 
 	ret = ib_security_ma_pkey_access(mad_agent_priv->agent.device,
 					 mad_agent_priv->agent.port_num,
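
Note: the sketch below is not kernel code; it is a minimal userspace C model of
the enforcement pattern this patch adds, namely that SMI permission is checked
once when an agent is registered and re-checked on every MAD send/receive
because the security policy may change after registration. The struct fields,
the policy flag, and the stubbed security_ib_end_port_smp() are simplified
assumptions for illustration only; in the kernel the hook is an LSM call taking
the device name, port number, and MAD agent, as in the diff above.

/* smi_check_sketch.c - illustrative model, not the in-kernel implementation */
#include <stdio.h>
#include <errno.h>

struct ib_mad_agent {          /* reduced stand-in for the kernel struct */
	const char *dev_name;
	int port_num;
	int qp_is_smi;         /* nonzero if the agent sits on an SMI QP */
};

/* Hypothetical stand-in for the LSM hook used by the patch. */
static int smi_allowed = 1;    /* flip to 0 to model a policy change */

static int security_ib_end_port_smp(const char *dev_name, int port_num,
				    struct ib_mad_agent *agent)
{
	(void)agent;
	if (!smi_allowed) {
		fprintf(stderr, "SMI access to %s port %d denied\n",
			dev_name, port_num);
		return -EACCES;
	}
	return 0;
}

/* Registration-time check, mirroring ib_register_mad_agent()/_snoop(). */
static int register_smi_agent(struct ib_mad_agent *agent)
{
	if (agent->qp_is_smi)
		return security_ib_end_port_smp(agent->dev_name,
						agent->port_num, agent);
	return 0;
}

/* Per-send re-check, mirroring the new code in ib_post_send_mad(). */
static int post_send_mad(struct ib_mad_agent *agent)
{
	int err = 0;

	if (agent->qp_is_smi)
		err = security_ib_end_port_smp(agent->dev_name,
					       agent->port_num, agent);
	if (err)
		return err;

	printf("MAD posted on %s port %d\n", agent->dev_name, agent->port_num);
	return 0;
}

int main(void)
{
	struct ib_mad_agent agent = { "mlx4_0", 1, 1 };

	if (register_smi_agent(&agent))
		return 1;

	post_send_mad(&agent);     /* allowed: policy still permits SMI */

	smi_allowed = 0;           /* policy changes after registration */
	post_send_mad(&agent);     /* now denied, which is the point of the patch */

	return 0;
}

The design point the model captures is that the registration-time check alone
is insufficient: since policy can be reloaded at any time, the hook is invoked
again on each send and receive so a revoked permission takes effect immediately.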