From patchwork Wed Jul 22 06:55:22 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sagi Grimberg
X-Patchwork-Id: 6839841
From: Sagi Grimberg
To: linux-rdma@vger.kernel.org
Cc: Liran Liss, Oren Duer
Subject: [PATCH WIP 22/43] mlx4: Allocate a private page list in ib_alloc_mr
Date: Wed, 22 Jul 2015 09:55:22 +0300
Message-Id: <1437548143-24893-23-git-send-email-sagig@mellanox.com>
X-Mailer: git-send-email 1.8.4.3
In-Reply-To: <1437548143-24893-1-git-send-email-sagig@mellanox.com>
References: <1437548143-24893-1-git-send-email-sagig@mellanox.com>
Sender: linux-rdma-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-rdma@vger.kernel.org

Signed-off-by: Sagi Grimberg
---
 drivers/infiniband/hw/mlx4/mlx4_ib.h |  5 ++++
 drivers/infiniband/hw/mlx4/mr.c      | 52 +++++++++++++++++++++++++++++++++---
 2 files changed, 54 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index 9220faf..a9a4a7f 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -120,6 +120,11 @@ struct mlx4_ib_mr {
 	struct ib_mr		ibmr;
 	struct mlx4_mr		mmr;
 	struct ib_umem	       *umem;
+	u64			*pl;
+	__be64			*mpl;
+	dma_addr_t		pl_map;
+	u32			npages;
+	u32			max_pages;
 };
 
 struct mlx4_ib_mw {
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 121ee7f..01e16bc 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -271,11 +271,50 @@ release_mpt_entry:
 	return err;
 }
 
+static int
+mlx4_alloc_page_list(struct ib_device *device,
+		     struct mlx4_ib_mr *mr,
+		     int max_entries)
+{
+	int size = max_entries * sizeof (u64);
+
+	mr->pl = kcalloc(max_entries, sizeof(u64), GFP_KERNEL);
+	if (!mr->pl)
+		return -ENOMEM;
+
+	mr->mpl = dma_alloc_coherent(device->dma_device, size,
+				     &mr->pl_map, GFP_KERNEL);
+	if (!mr->mpl)
+		goto err;
+
+	return 0;
+err:
+	kfree(mr->pl);
+
+	return -ENOMEM;
+}
+
+static void
+mlx4_free_page_list(struct mlx4_ib_mr *mr)
+{
+	struct ib_device *device = mr->ibmr.device;
+	int size = mr->max_pages * sizeof(u64);
+
+	kfree(mr->pl);
+	if (mr->mpl)
+		dma_free_coherent(device->dma_device, size,
+				  mr->mpl, mr->pl_map);
+	mr->pl = NULL;
+	mr->mpl = NULL;
+}
+
 int mlx4_ib_dereg_mr(struct ib_mr *ibmr)
 {
 	struct mlx4_ib_mr *mr = to_mmr(ibmr);
 	int ret;
 
+	mlx4_free_page_list(mr);
+
 	ret = mlx4_mr_free(to_mdev(ibmr->device)->dev, &mr->mmr);
 	if (ret)
 		return ret;
@@ -371,18 +410,25 @@ struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd,
 	if (err)
 		goto err_free;
 
+	err = mlx4_alloc_page_list(pd->device, mr, max_entries);
+	if (err)
+		goto err_free_mr;
+
+	mr->max_pages = max_entries;
+
 	err = mlx4_mr_enable(dev->dev, &mr->mmr);
 	if (err)
-		goto err_mr;
+		goto err_free_pl;
 
 	mr->ibmr.rkey = mr->ibmr.lkey = mr->mmr.key;
 	mr->umem = NULL;
 
 	return &mr->ibmr;
 
-err_mr:
+err_free_pl:
+	mlx4_free_page_list(mr);
+err_free_mr:
 	(void) mlx4_mr_free(dev->dev, &mr->mmr);
-
 err_free:
 	kfree(mr);
 	return ERR_PTR(err);
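
For context only, not part of the patch: the new fields keep two copies of the page
list, a CPU-order array (mr->pl) for the driver and a big-endian, DMA-coherent array
(mr->mpl) that the HCA reads. A hypothetical sketch of how a later patch in the series
might append a page to this list follows; the helper name mlx4_set_page is an
assumption, and real MTT entries would also carry a hardware "present" flag that the
sketch omits.

/*
 * Hypothetical sketch (assumes the mlx4_ib_mr fields added above): append
 * one page address to the private page list allocated by
 * mlx4_alloc_page_list().
 */
static int mlx4_set_page(struct mlx4_ib_mr *mr, u64 addr)
{
	if (unlikely(mr->npages == mr->max_pages))
		return -ENOMEM;

	mr->pl[mr->npages] = addr;                 /* CPU-order copy for the driver */
	mr->mpl[mr->npages] = cpu_to_be64(addr);   /* device-order copy for the HCA */
	mr->npages++;

	return 0;
}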