From patchwork Mon Nov 30 14:34:21 2015
X-Patchwork-Submitter: "Marciniszyn, Mike"
X-Patchwork-Id: 7726601
Subject: [PATCH 1/2] IB/core: correct issue with sge copyin corrupting wr
From: Mike Marciniszyn
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org
Date: Mon, 30 Nov 2015 09:34:21 -0500
Message-ID: <20151130143421.24060.88406.stgit@phlsvslse11.ph.intel.com>
In-Reply-To: <20151130143326.24060.35941.stgit@phlsvslse11.ph.intel.com>
References: <20151130143326.24060.35941.stgit@phlsvslse11.ph.intel.com>
User-Agent: StGit/0.16
X-Mailing-List: linux-rdma@vger.kernel.org

Commit e622f2f4ad21 ("IB: split struct ib_send_wr") introduced a
regression for HCAs whose user mode post sends go through
ib_uverbs_post_send().

The code did not account for the fact that the first sge is offset by
an operation-dependent length.  The allocation did, but the pointer to
the sge list was computed without that knowledge, so the sge copyin
landed on top of the operation-specific fields of the work request.

Store the operation-dependent length in an automatic variable and use
it when computing the destination of the sge list copy.
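To make the arithmetic concrete, here is a minimal userspace sketch of
the layout bug.  The types are hypothetical stand-ins, not the real
ib_* structures: base_wr plays the role of struct ib_send_wr, and
ud_like_wr plays the role of one of the larger operation-specific wrs
such as struct ib_ud_wr.  The allocation reserves the operation-specific
wr followed by the sge array, so deriving the sg_list offset from the
base struct alone points the copy into the middle of the larger wr:

/*
 * Minimal sketch of the layout bug (hypothetical simplified types).
 */
#include <stdio.h>
#include <stdint.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

struct sge { uint64_t addr; uint32_t length; uint32_t lkey; };

struct base_wr {
	struct base_wr *next;
	struct sge *sg_list;
	int num_sge;
};

struct ud_like_wr {
	struct base_wr wr;	/* embedded base, as after e622f2f4ad21 */
	uint64_t op_field0;	/* operation-specific tail */
	uint64_t op_field1;
};

int main(void)
{
	/* Offset the broken code used: always based on the base wr. */
	size_t bad  = ALIGN(sizeof(struct base_wr), sizeof(struct sge));
	/* Offset the fix uses: the operation-dependent size that also
	 * sized the allocation. */
	size_t good = ALIGN(sizeof(struct ud_like_wr), sizeof(struct sge));

	printf("sg_list offset: broken=%zu fixed=%zu\n", bad, good);
	/*
	 * Whenever the broken offset is smaller, the copy of the sges
	 * starts inside ud_like_wr and overwrites its operation-specific
	 * tail, i.e. the sge copyin corrupts the wr.
	 */
	return 0;
}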
Reviewed-by: Ira Weiny
Signed-off-by: Mike Marciniszyn
---
 drivers/infiniband/core/uverbs_cmd.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 94816ae..4cb8e9d 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -2446,6 +2446,7 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
 	int i, sg_ind;
 	int is_ud;
 	ssize_t ret = -EINVAL;
+	size_t next_size;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
 		return -EFAULT;
@@ -2490,7 +2491,8 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
 				goto out_put;
 			}
 
-			ud = alloc_wr(sizeof(*ud), user_wr->num_sge);
+			next_size = sizeof(*ud);
+			ud = alloc_wr(next_size, user_wr->num_sge);
 			if (!ud) {
 				ret = -ENOMEM;
 				goto out_put;
@@ -2511,7 +2513,8 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
 			   user_wr->opcode == IB_WR_RDMA_READ) {
 			struct ib_rdma_wr *rdma;
 
-			rdma = alloc_wr(sizeof(*rdma), user_wr->num_sge);
+			next_size = sizeof(*rdma);
+			rdma = alloc_wr(next_size, user_wr->num_sge);
 			if (!rdma) {
 				ret = -ENOMEM;
 				goto out_put;
@@ -2525,7 +2528,8 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
 			   user_wr->opcode == IB_WR_ATOMIC_FETCH_AND_ADD) {
 			struct ib_atomic_wr *atomic;
 
-			atomic = alloc_wr(sizeof(*atomic), user_wr->num_sge);
+			next_size = sizeof(*atomic);
+			atomic = alloc_wr(next_size, user_wr->num_sge);
 			if (!atomic) {
 				ret = -ENOMEM;
 				goto out_put;
@@ -2540,7 +2544,8 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
 		} else if (user_wr->opcode == IB_WR_SEND ||
 			   user_wr->opcode == IB_WR_SEND_WITH_IMM ||
 			   user_wr->opcode == IB_WR_SEND_WITH_INV) {
-			next = alloc_wr(sizeof(*next), user_wr->num_sge);
+			next_size = sizeof(*next);
+			next = alloc_wr(next_size, user_wr->num_sge);
 			if (!next) {
 				ret = -ENOMEM;
 				goto out_put;
@@ -2572,7 +2577,7 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
 
 		if (next->num_sge) {
 			next->sg_list = (void *) next +
-				ALIGN(sizeof *next, sizeof (struct ib_sge));
+				ALIGN(next_size, sizeof(struct ib_sge));
 			if (copy_from_user(next->sg_list,
 					   buf + sizeof cmd +
 					   cmd.wr_count * cmd.wqe_size +
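For context on why the allocation side was already correct: the
alloc_wr() helper added by e622f2f4ad21 sizes the buffer from the
operation-dependent wr size plus the sge array.  The sketch below
approximates its shape (reconstructed, not quoted verbatim from the
tree); the fix simply makes the sg_list pointer math use the same
wr_size that alloc_wr() was given.

/* Approximate shape of alloc_wr() in uverbs_cmd.c (reconstruction):
 * one allocation holding the operation-specific wr, padded out to
 * ib_sge alignment, immediately followed by num_sge sge entries.
 */
static void *alloc_wr(size_t wr_size, __u32 num_sge)
{
	return kmalloc(ALIGN(wr_size, sizeof(struct ib_sge)) +
		       num_sge * sizeof(struct ib_sge),
		       GFP_KERNEL);
}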