From patchwork Sat Jan 26 16:59:13 2019
X-Patchwork-Submitter: "Saleem, Shiraz"
X-Patchwork-Id: 10782539
From: Shiraz Saleem
To: dledford@redhat.com, jgg@ziepe.ca, linux-rdma@vger.kernel.org
Cc: "Shiraz, Saleem", Dennis Dalessandro, Mike Marciniszyn
Subject: [PATCH RFC 12/12] RDMA/rdmavt: Adapt to handle non-uniform sizes on umem SGEs
Date: Sat, 26 Jan 2019 10:59:13 -0600
Message-Id: <20190126165913.18272-13-shiraz.saleem@intel.com>
In-Reply-To: <20190126165913.18272-1-shiraz.saleem@intel.com>
References: <20190126165913.18272-1-shiraz.saleem@intel.com>

From: "Shiraz, Saleem"

rdmavt expects a uniform size on all umem SGEs, which is currently
PAGE_SIZE. Adapt to a umem API change which could return non-uniform
sized SGEs due to combining contiguous PAGE_SIZE regions into an SGE.
Unfold the larger SGEs into a list of PAGE_SIZE elements.

Additionally, purge umem->page_shift usage in the driver, as it is only
relevant for ODP MRs. Use the system page size and shift instead.
Cc: Dennis Dalessandro
Cc: Mike Marciniszyn
Signed-off-by: Shiraz, Saleem
Acked-by: Dennis Dalessandro
---
 drivers/infiniband/sw/rdmavt/mr.c | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 8b1c1e8..43a6e71 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -407,25 +407,29 @@ struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	mr->mr.access_flags = mr_access_flags;
 	mr->umem = umem;
 
-	mr->mr.page_shift = umem->page_shift;
+	mr->mr.page_shift = PAGE_SHIFT;
 	m = 0;
 	n = 0;
 	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
-		void *vaddr;
+		int i, chunk_pages;
+		struct page *page = sg_page(sg);
 
-		vaddr = page_address(sg_page(sg));
-		if (!vaddr) {
-			ret = ERR_PTR(-EINVAL);
-			goto bail_inval;
-		}
-		mr->mr.map[m]->segs[n].vaddr = vaddr;
-		mr->mr.map[m]->segs[n].length = BIT(umem->page_shift);
-		trace_rvt_mr_user_seg(&mr->mr, m, n, vaddr,
-				      BIT(umem->page_shift));
-		n++;
-		if (n == RVT_SEGSZ) {
-			m++;
-			n = 0;
+		chunk_pages = sg_dma_len(sg) >> PAGE_SHIFT;
+		for (i = 0; i < chunk_pages; i++) {
+			void *vaddr;
+
+			vaddr = page_address(nth_page(page, i));
+			if (!vaddr) {
+				ret = ERR_PTR(-EINVAL);
+				goto bail_inval;
+			}
+			mr->mr.map[m]->segs[n].vaddr = vaddr;
+			mr->mr.map[m]->segs[n].length = PAGE_SIZE;
+			trace_rvt_mr_user_seg(&mr->mr, m, n, vaddr, PAGE_SIZE);
+			if (++n == RVT_SEGSZ) {
+				m++;
+				n = 0;
+			}
 		}
 	}
 	return &mr->ibmr;
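
As an aside for readers less familiar with the rvt two-level segment map: the
hunk above unfolds each SGE into PAGE_SIZE elements while (m, n) walks maps of
RVT_SEGSZ segments each. Below is a minimal user-space sketch of that indexing
arithmetic only; it is not driver code, RVT_SEGSZ is shrunk for readability,
and the SGE byte lengths are made up for illustration.

/*
 * Stand-alone sketch: split each SGE of len bytes into PAGE_SIZE-sized
 * segments and place them into a two-level map of RVT_SEGSZ entries,
 * mirroring the (m, n) bookkeeping in rvt_reg_user_mr() above.
 */
#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4K pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define RVT_SEGSZ	8			/* reduced from the real value */

int main(void)
{
	/* hypothetical SGE byte lengths, each a multiple of PAGE_SIZE */
	unsigned long sge_len[] = { 4 * PAGE_SIZE, PAGE_SIZE, 16 * PAGE_SIZE };
	unsigned long off = 0;			/* stand-in for the segment address */
	int m = 0, n = 0;

	for (unsigned int i = 0; i < sizeof(sge_len) / sizeof(sge_len[0]); i++) {
		/* number of PAGE_SIZE elements this SGE unfolds into */
		unsigned long chunk_pages = sge_len[i] >> PAGE_SHIFT;

		for (unsigned long p = 0; p < chunk_pages; p++) {
			printf("map[%d].segs[%d]: off=0x%lx len=%lu\n",
			       m, n, off, PAGE_SIZE);
			off += PAGE_SIZE;
			if (++n == RVT_SEGSZ) {	/* current map full, advance */
				m++;
				n = 0;
			}
		}
	}
	return 0;
}

Running it shows the three SGEs expanding to 21 PAGE_SIZE segments spread over
map 0, 1 and 2, which is exactly the shape the patch preserves when the umem
API starts handing back combined (larger than PAGE_SIZE) SGEs.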