From patchwork Sun Sep 14 13:47:55 2014
X-Patchwork-Submitter: Eli Cohen
X-Patchwork-Id: 4901431
From: Eli Cohen
To: roland@kernel.org, dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, ogerlitz@mellanox.com, amirv@mellanox.com,
	Yishai Hadas, Eli Cohen
Subject: [PATCH for-next 6/6] IB/mlx5: Modify to work with arbitrary page size
Date: Sun, 14 Sep 2014 16:47:55 +0300
Message-Id: <1410702475-28826-7-git-send-email-eli@mellanox.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1410702475-28826-1-git-send-email-eli@mellanox.com>
References: <1410702475-28826-1-git-send-email-eli@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Yishai Hadas

When dealing with umem objects, the driver assumed host page sizes
defined by PAGE_SHIFT. Modify the code to use an arbitrary page shift
derived from umem->page_size to support different page sizes.

Signed-off-by: Yishai Hadas
Signed-off-by: Eli Cohen
---
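Note for reviewers: umem page sizes are powers of two, so
ilog2(umem->page_size) recovers the page shift exactly (ilog2() is
floor(log2(x)) in the kernel). A minimal userspace sketch of the
derivation; the 64KB page_size below is hypothetical, and
__builtin_ctzl stands in for ilog2() on power-of-two inputs:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical umem page size; any power of two works */
	unsigned long page_size = 65536;	/* 64KB */

	/* for a power of two, the index of the only set bit is the
	 * shift, which is what the kernel's ilog2() returns */
	assert(page_size && !(page_size & (page_size - 1)));
	unsigned long page_shift = (unsigned long)__builtin_ctzl(page_size);

	printf("page_size %lu -> page_shift %lu\n", page_size, page_shift);
	return 0;
}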
 drivers/infiniband/hw/mlx5/mem.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mem.c b/drivers/infiniband/hw/mlx5/mem.c
index a3e81444c825..dae07eae9507 100644
--- a/drivers/infiniband/hw/mlx5/mem.c
+++ b/drivers/infiniband/hw/mlx5/mem.c
@@ -55,16 +55,17 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr, int *count, int *shift,
 	u64 pfn;
 	struct scatterlist *sg;
 	int entry;
+	unsigned long page_shift = ilog2(umem->page_size);
 
-	addr = addr >> PAGE_SHIFT;
+	addr = addr >> page_shift;
 	tmp = (unsigned long)addr;
 	m = find_first_bit(&tmp, sizeof(tmp));
 	skip = 1 << m;
 	mask = skip - 1;
 	i = 0;
 	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
-		len = sg_dma_len(sg) >> PAGE_SHIFT;
-		pfn = sg_dma_address(sg) >> PAGE_SHIFT;
+		len = sg_dma_len(sg) >> page_shift;
+		pfn = sg_dma_address(sg) >> page_shift;
 		for (k = 0; k < len; k++) {
 			if (!(i & mask)) {
 				tmp = (unsigned long)pfn;
@@ -103,14 +104,15 @@ void mlx5_ib_cont_pages(struct ib_umem *umem, u64 addr, int *count, int *shift,
 		*ncont = 0;
 	}
 
-	*shift = PAGE_SHIFT + m;
+	*shift = page_shift + m;
 	*count = i;
 }
 
 void mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 			  int page_shift, __be64 *pas, int umr)
 {
-	int shift = page_shift - PAGE_SHIFT;
+	unsigned long umem_page_shift = ilog2(umem->page_size);
+	int shift = page_shift - umem_page_shift;
 	int mask = (1 << shift) - 1;
 	int i, k;
 	u64 cur = 0;
@@ -121,11 +123,11 @@ void mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 
 	i = 0;
 	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, entry) {
-		len = sg_dma_len(sg) >> PAGE_SHIFT;
+		len = sg_dma_len(sg) >> umem_page_shift;
 		base = sg_dma_address(sg);
 		for (k = 0; k < len; k++) {
 			if (!(i & mask)) {
-				cur = base + (k << PAGE_SHIFT);
+				cur = base + (k << umem_page_shift);
 				if (umr)
 					cur |= 3;
 
@@ -134,7 +136,7 @@ void mlx5_ib_populate_pas(struct mlx5_ib_dev *dev, struct ib_umem *umem,
 				 i >> shift, be64_to_cpu(pas[i >> shift]));
 			} else
 				mlx5_ib_dbg(dev, "=====> 0x%llx\n",
-					    base + (k << PAGE_SHIFT));
+					    base + (k << umem_page_shift));
 			i++;
 		}
 	}
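As background on the mlx5_ib_cont_pages() logic this patch touches:
the largest usable block size is bounded by the lowest set bit of the
page-shifted start address, of each block-start DMA address, and of
the page offset at any physical discontinuity. The following
standalone userspace sketch illustrates that idea; struct seg and the
sample layout in main() are hypothetical stand-ins for a scatterlist,
and the real function's count/ncont outputs and final clamping are
omitted:

#include <stdio.h>
#include <stdint.h>

/* hypothetical stand-in for one scatterlist entry */
struct seg {
	uint64_t dma_addr;	/* bus address, multiple of the page size */
	uint64_t len;		/* length in bytes, multiple of the page size */
};

/* alignment of x expressed as a shift: index of its lowest set bit */
static unsigned align_shift(uint64_t x)
{
	return x ? (unsigned)__builtin_ctzll(x) : 63;
}

static unsigned min_u(unsigned a, unsigned b)
{
	return a < b ? a : b;
}

/*
 * Largest shift such that the buffer can be covered by naturally
 * aligned, physically contiguous 2^shift-byte blocks.
 */
static unsigned best_page_shift(uint64_t va, const struct seg *segs,
				int nseg, unsigned page_shift)
{
	/* the virtual start address bounds the alignment from above */
	unsigned m = align_shift(va >> page_shift);
	uint64_t mask = (1ULL << m) - 1;
	uint64_t base = 0, p = 0, i = 0;

	for (int s = 0; s < nseg; s++) {
		uint64_t pfn = segs[s].dma_addr >> page_shift;
		uint64_t npages = segs[s].len >> page_shift;

		for (uint64_t k = 0; k < npages; k++, pfn++, p++, i++) {
			if (!(i & mask)) {
				/* candidate block start: its DMA address
				 * must be aligned to the block size */
				m = min_u(m, align_shift(pfn));
				mask = (1ULL << m) - 1;
				base = pfn;
				p = 0;
			} else if (base + p != pfn) {
				/* discontinuity inside a block: shrink
				 * the block so the break falls on a
				 * boundary and the new start is aligned */
				m = min_u(m, min_u(align_shift(p),
						   align_shift(pfn)));
				mask = (1ULL << m) - 1;
				base = pfn;
				p = 0;
			}
		}
	}
	return page_shift + m;
}

int main(void)
{
	/* hypothetical DMA layout: 64KB physically contiguous, then
	 * one page somewhere else entirely */
	struct seg segs[] = {
		{ 0x100000, 0x8000 },	/* pfns 0x100..0x107 */
		{ 0x108000, 0x8000 },	/* pfns 0x108..0x10f, contiguous */
		{ 0x200000, 0x1000 },	/* pfn 0x200: break at page 16 */
	};
	unsigned shift = best_page_shift(0x100000, segs, 3, 12);

	printf("usable page shift: %u (%u KB blocks)\n",
	       shift, (1u << shift) >> 10);
	return 0;
}

With this layout the sketch prints "usable page shift: 16 (64 KB
blocks)": the addresses are 64KB aligned, but the discontinuity at
page 16 rules out anything larger.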