Message ID: 20160529081527.GA5839@infradead.org
State: Superseded
>> But this is specific to mlx4, which supports up to 511 pages per MR;
>> mlx5 will need a coherent allocation anyway, right (it supports up to
>> 64K pages per MR)?
>
> Why does that make a difference?

Chuck's original bug report said that the problem with the current code
is that dma_map_single() expects the pages array to fit in a single page:

"
See what mlx4_alloc_priv_pages does with this memory allocation:

	mr->page_map = dma_map_single(device->dma_device, mr->pages,
				      size, DMA_TO_DEVICE);

dma_map_single() expects the mr->pages allocation to fit on a single
page, as far as I can tell.
"

But when I revisited dma_map_single() in some archs I didn't see this
expectation. So now I don't really see what the problem is in the first
place (or how your patch fixes it)...
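FWIW, in the common direct-mapping case dma_map_single() boils down to
translating the buffer's physical address and covering the full length,
with no single-page assumption. A simplified sketch of that pattern
(condensed for illustration, not any particular arch's actual code):

	/*
	 * Simplified sketch of the direct-mapping case: kmalloc memory
	 * is physically contiguous, so a single translation covers all
	 * of 'size', no matter how many pages it spans.
	 */
	static dma_addr_t sketch_dma_map_single(struct device *dev,
						void *ptr, size_t size,
						enum dma_data_direction dir)
	{
		return phys_to_dma(dev, virt_to_phys(ptr));
	}

If there really is a single-page constraint here, it presumably comes
from the HCA's page-list rules or a bounce-buffering path rather than
from dma_map_single() itself.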
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index ba32817..7adfa4b 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -129,8 +129,6 @@ struct mlx4_ib_cq {
 	struct list_head	recv_qp_list;
 };
 
-#define MLX4_MR_PAGES_ALIGN 0x40
-
 struct mlx4_ib_mr {
 	struct ib_mr		ibmr;
 	__be64			*pages;
@@ -139,7 +137,6 @@ struct mlx4_ib_mr {
 	u32			max_pages;
 	struct mlx4_mr		mmr;
 	struct ib_umem	       *umem;
-	void			*pages_alloc;
 };
 
 struct mlx4_ib_mw {
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index b04f623..3c6a21f 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -278,17 +278,13 @@ mlx4_alloc_priv_pages(struct ib_device *device,
 		      int max_pages)
 {
 	int size = max_pages * sizeof(u64);
-	int add_size;
 	int ret;
 
-	add_size = max_t(int, MLX4_MR_PAGES_ALIGN - ARCH_KMALLOC_MINALIGN, 0);
-
-	mr->pages_alloc = kzalloc(size + add_size, GFP_KERNEL);
-	if (!mr->pages_alloc)
+	mr->pages = (__be64 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+					       get_order(size));
+	if (!mr->pages)
 		return -ENOMEM;
-
-	mr->pages = PTR_ALIGN(mr->pages_alloc, MLX4_MR_PAGES_ALIGN);
-
+
 	mr->page_map = dma_map_single(device->dma_device, mr->pages,
 				      size, DMA_TO_DEVICE);
 
@@ -299,7 +295,7 @@ mlx4_alloc_priv_pages(struct ib_device *device,
 	return 0;
 
 err:
-	kfree(mr->pages_alloc);
+	free_pages((unsigned long)mr->pages, get_order(size));
 
 	return ret;
 }
 
@@ -313,7 +309,7 @@ mlx4_free_priv_pages(struct mlx4_ib_mr *mr)
 		dma_unmap_single(device->dma_device, mr->page_map,
 				 size, DMA_TO_DEVICE);
 
-		kfree(mr->pages_alloc);
+		free_pages((unsigned long)mr->pages, get_order(size));
 		mr->pages = NULL;
 	}
 }
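As a sanity check on the sizes involved, a minimal illustration
(assuming 4 KiB pages and the 511-pages-per-MR mlx4 limit quoted above):

	/*
	 * 511 entries * sizeof(u64) = 4088 bytes, so get_order(size) is
	 * 0 on 4 KiB pages: the whole array fits in one page. Because
	 * __get_free_pages() returns page-aligned memory, the array can
	 * never straddle a page boundary, unlike the old kzalloc() +
	 * PTR_ALIGN() buffer, which was only aligned to
	 * MLX4_MR_PAGES_ALIGN (0x40).
	 */
	int size = 511 * sizeof(u64);	/* 4088 bytes */
	int order = get_order(size);	/* 0 -> one 4 KiB page */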