From: Bart Van Assche
Date: Tue, 06 May 2014 14:52:21 +0200
To: Roland Dreier
Cc: Sagi Grimberg, Vu Pham, David Dillow, Sebastian Parschauer, linux-rdma
Subject: [PATCH 3/9] IB/srp: Introduce srp_alloc_fmr_pool()
Message-ID: <5368DB05.4070600@acm.org>
In-Reply-To: <5368DA5B.80609@acm.org>

Introduce the srp_alloc_fmr_pool() function. Only set srp_dev->fmr_max_size
if FMR pool creation succeeds. This change is safe because that variable is
only used after an FMR pool has been created successfully.
Signed-off-by: Bart Van Assche
Cc: Roland Dreier
Cc: David Dillow
Cc: Sagi Grimberg
Cc: Vu Pham
Cc: Sebastian Parschauer
Reviewed-by: Sagi Grimberg
---
 drivers/infiniband/ulp/srp/ib_srp.c | 56 ++++++++++++++++++++++---------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index 8c03371..f41cc8c 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -2792,13 +2792,43 @@ free_host:
 	return NULL;
 }
 
+static void srp_alloc_fmr_pool(struct srp_device *srp_dev)
+{
+	int max_pages_per_mr;
+	struct ib_fmr_pool_param fmr_param;
+	struct ib_fmr_pool *pool;
+
+	srp_dev->fmr_pool = NULL;
+
+	for (max_pages_per_mr = SRP_FMR_SIZE;
+	     max_pages_per_mr >= SRP_FMR_MIN_SIZE;
+	     max_pages_per_mr /= 2) {
+		memset(&fmr_param, 0, sizeof(fmr_param));
+		fmr_param.pool_size = SRP_FMR_POOL_SIZE;
+		fmr_param.dirty_watermark = SRP_FMR_DIRTY_SIZE;
+		fmr_param.cache = 1;
+		fmr_param.max_pages_per_fmr = max_pages_per_mr;
+		fmr_param.page_shift = ilog2(srp_dev->fmr_page_size);
+		fmr_param.access = (IB_ACCESS_LOCAL_WRITE |
+				    IB_ACCESS_REMOTE_WRITE |
+				    IB_ACCESS_REMOTE_READ);
+
+		pool = ib_create_fmr_pool(srp_dev->pd, &fmr_param);
+		if (!IS_ERR(pool)) {
+			srp_dev->fmr_pool = pool;
+			srp_dev->fmr_max_size =
+				srp_dev->fmr_page_size * max_pages_per_mr;
+			break;
+		}
+	}
+}
+
 static void srp_add_one(struct ib_device *device)
 {
 	struct srp_device *srp_dev;
 	struct ib_device_attr *dev_attr;
-	struct ib_fmr_pool_param fmr_param;
 	struct srp_host *host;
-	int max_pages_per_fmr, fmr_page_shift, s, e, p;
+	int fmr_page_shift, s, e, p;
 
 	dev_attr = kmalloc(sizeof *dev_attr, GFP_KERNEL);
 	if (!dev_attr)
@@ -2821,7 +2851,6 @@ static void srp_add_one(struct ib_device *device)
 	fmr_page_shift = max(12, ffs(dev_attr->page_size_cap) - 1);
 	srp_dev->fmr_page_size = 1 << fmr_page_shift;
 	srp_dev->fmr_page_mask = ~((u64) srp_dev->fmr_page_size - 1);
-	srp_dev->fmr_max_size = srp_dev->fmr_page_size * SRP_FMR_SIZE;
 
 	INIT_LIST_HEAD(&srp_dev->dev_list);
 
@@ -2837,26 +2866,7 @@ static void srp_add_one(struct ib_device *device)
 	if (IS_ERR(srp_dev->mr))
 		goto err_pd;
 
-	for (max_pages_per_fmr = SRP_FMR_SIZE;
-	     max_pages_per_fmr >= SRP_FMR_MIN_SIZE;
-	     max_pages_per_fmr /= 2, srp_dev->fmr_max_size /= 2) {
-		memset(&fmr_param, 0, sizeof fmr_param);
-		fmr_param.pool_size = SRP_FMR_POOL_SIZE;
-		fmr_param.dirty_watermark = SRP_FMR_DIRTY_SIZE;
-		fmr_param.cache = 1;
-		fmr_param.max_pages_per_fmr = max_pages_per_fmr;
-		fmr_param.page_shift = fmr_page_shift;
-		fmr_param.access = (IB_ACCESS_LOCAL_WRITE |
-				    IB_ACCESS_REMOTE_WRITE |
-				    IB_ACCESS_REMOTE_READ);
-
-		srp_dev->fmr_pool = ib_create_fmr_pool(srp_dev->pd, &fmr_param);
-		if (!IS_ERR(srp_dev->fmr_pool))
-			break;
-	}
-
-	if (IS_ERR(srp_dev->fmr_pool))
-		srp_dev->fmr_pool = NULL;
+	srp_alloc_fmr_pool(srp_dev);
 
 	if (device->node_type == RDMA_NODE_IB_SWITCH) {
 		s = 0;
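
[Note below the diff, not part of the commit: a minimal stand-alone sketch of
the fallback pattern srp_alloc_fmr_pool() implements -- halve the requested
FMR size until pool creation succeeds, and derive fmr_max_size only from a
size that was actually accepted. All names here (try_create_pool, MAX_PAGES,
MIN_PAGES, PAGE_BYTES) are hypothetical stand-ins, not driver or verbs API
symbols.]

/*
 * User-space sketch of the "halve until it fits" fallback loop.
 * try_create_pool() stands in for ib_create_fmr_pool(); MAX_PAGES and
 * MIN_PAGES stand in for SRP_FMR_SIZE and SRP_FMR_MIN_SIZE.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_PAGES  256
#define MIN_PAGES  32
#define PAGE_BYTES 4096

/* Pretend the provider rejects pools larger than 64 pages per MR. */
static void *try_create_pool(int max_pages)
{
	return max_pages > 64 ? NULL : malloc(1);
}

int main(void)
{
	void *pool = NULL;
	long max_size = 0;	/* set only after creation succeeds */
	int pages;

	for (pages = MAX_PAGES; pages >= MIN_PAGES; pages /= 2) {
		pool = try_create_pool(pages);
		if (pool) {
			/* Like the patch: compute the limit from the
			 * size that actually worked, instead of
			 * initializing it up front and halving it on
			 * every failed attempt. */
			max_size = (long)pages * PAGE_BYTES;
			break;
		}
	}

	if (pool)
		printf("pool: %d pages per MR, max mapping %ld bytes\n",
		       pages, max_size);
	else
		printf("no pool created; caller must cope, as srp_add_one() does\n");

	free(pool);
	return 0;
}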