From patchwork Thu Jun 16 14:35:18 2016
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 9181151
Date: Thu, 16 Jun 2016 17:35:18 +0300
From: Leon Romanovsky
To: Chuck Lever
Cc: linux-rdma@vger.kernel.org, Linux NFS Mailing List
Subject: Re:
 [PATCH v2 01/24] mlx4-ib: Use coherent memory for priv pages
Message-ID: <20160616143518.GX5408@leon.nu>
Reply-To: leon@kernel.org
References: <20160615030626.14794.43805.stgit@manet.1015granger.net>
 <20160615031525.14794.69066.stgit@manet.1015granger.net>
 <20160615042849.GR5408@leon.nu>
 <68F7CD80-0092-4B55-9FAD-4C54D284BCA3@oracle.com>
In-Reply-To: <68F7CD80-0092-4B55-9FAD-4C54D284BCA3@oracle.com>
List-ID: linux-rdma@vger.kernel.org

On Wed, Jun 15, 2016 at 12:40:07PM -0400, Chuck Lever wrote:
>
> > On Jun 15, 2016, at 12:28 AM, Leon Romanovsky wrote:
> >
> > On Tue, Jun 14, 2016 at 11:15:25PM -0400, Chuck Lever wrote:
> >> From: Sagi Grimberg
> >>
> >> kmalloc doesn't guarantee the returned memory is all on one page.
> >
> > IMHO, the patch posted by Christoph in that thread is the best way to
> > go, because this patch changes streaming DMA mappings to coherent DMA
> > mappings [1], while "the kernel developers recommend the use of
> > streaming mappings over coherent mappings whenever possible" [1].
> >
> > [1] http://www.makelinux.net/ldd3/chp-15-sect-4
>
> Hi Leon-
>
> I'll happily drop this patch from my 4.8 series as soon
> as an official mlx4/mlx5 fix is merged.
>
> Meanwhile, I notice some unexplained instability (driver
> resets, list corruption, and so on) when I test NFS/RDMA
> without this patch included. So it is attached to the
> series for anyone with mlx4 who wants to pull my topic
> branch and try it out.

Hi Chuck,

We plan to send the attached patch during our second round of fixes
for mlx4/mlx5, and we would be grateful if you could provide your
Tested-by tag beforehand.
Thanks

From 213f61b44b54edbcbf272e694e889c61412be579 Mon Sep 17 00:00:00 2001
From: Yishai Hadas
Date: Thu, 16 Jun 2016 11:13:41 +0300
Subject: [PATCH] IB/mlx4: Ensure cache line boundaries for dma_map_single

"In order for memory mapped by this API to operate correctly, the
mapped region must begin exactly on a cache line boundary and end
exactly on one (to prevent two separately mapped regions from sharing
a single cache line).  Therefore, it is recommended that driver
writers who don't take special care to determine the cache line size
at run time only map virtual regions that begin and end on page
boundaries (which are guaranteed also to be cache line boundaries)." [1]

This patch uses __get_free_pages() instead of kzalloc() to ensure that
the above holds on all architectures and under all SLUB debug
configurations.

[1] https://www.kernel.org/doc/Documentation/DMA-API.txt

issue: 802618
Change-Id: Iee8176b183290213b1b4e66f1835f5c90f067075
Fixes: 1b2cd0fc673c ('IB/mlx4: Support the new memory registration API')
Reported-by: Chuck Lever
Signed-off-by: Christoph Hellwig
Signed-off-by: Leon Romanovsky
Signed-off-by: Yishai Hadas
---
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index 6c5ac5d..4a8bbe4 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -139,7 +139,6 @@
 	u32			max_pages;
 	struct mlx4_mr		mmr;
 	struct ib_umem	       *umem;
-	void			*pages_alloc;
 };
 
 struct mlx4_ib_mw {
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 6312721..56b8d87 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -278,16 +278,12 @@
 		      int max_pages)
 {
 	int size = max_pages * sizeof(u64);
-	int add_size;
 	int ret;
 
-	add_size = max_t(int, MLX4_MR_PAGES_ALIGN - ARCH_KMALLOC_MINALIGN, 0);
-
-	mr->pages_alloc = kzalloc(size + add_size, GFP_KERNEL);
-	if (!mr->pages_alloc)
+	mr->pages = (__be64 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+					       get_order(size));
+	if (!mr->pages)
 		return -ENOMEM;
 
-	mr->pages = PTR_ALIGN(mr->pages_alloc, MLX4_MR_PAGES_ALIGN);
-
 	mr->page_map = dma_map_single(device->dma_device, mr->pages,
 				      size, DMA_TO_DEVICE);
@@ -299,7 +295,7 @@
 	return 0;
 
 err:
-	kfree(mr->pages_alloc);
+	free_pages((unsigned long)mr->pages, get_order(size));
 
 	return ret;
 }
@@ -313,7 +309,7 @@
 		dma_unmap_single(device->dma_device, mr->page_map,
 				 size, DMA_TO_DEVICE);
 
-		kfree(mr->pages_alloc);
+		free_pages((unsigned long)mr->pages, get_order(size));
 		mr->pages = NULL;
 	}
 }