
[27/33] dma-direct: use node local allocations for coherent memory

Message ID 20180110080027.13879-28-hch@lst.de (mailing list archive)
State New, archived

Commit Message

Christoph Hellwig Jan. 10, 2018, 8 a.m. UTC
To preserve the x86 behavior.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/dma-direct.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
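
For reference, a minimal sketch of the semantics the patch relies on, based
on the stock dev_to_node() and alloc_pages_node() helpers (this snippet is
illustrative, not code from the patch):

    /* dev_to_node() reports the NUMA node the device sits on, or
     * NUMA_NO_NODE (-1) if no node information was recorded. */
    int nid = dev_to_node(dev);

    /* alloc_pages_node() treats NUMA_NO_NODE as "allocate near the
     * calling CPU", so devices without node information (and non-NUMA
     * systems) behave exactly as plain alloc_pages() did. */
    struct page *page = alloc_pages_node(nid, gfp, page_order);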

Comments

Robin Murphy Jan. 10, 2018, 12:06 p.m. UTC | #1
On 10/01/18 08:00, Christoph Hellwig wrote:
> To preserve the x86 behavior.

And combined with patch 10/22 of the SWIOTLB refactoring, this means 
SWIOTLB allocations will also end up NUMA-aware, right? Great, that's 
what we want on arm64 too :)

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   lib/dma-direct.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/dma-direct.c b/lib/dma-direct.c
> index a9ae98be7af3..f04a424f91fa 100644
> --- a/lib/dma-direct.c
> +++ b/lib/dma-direct.c
> @@ -38,7 +38,7 @@ static void *dma_direct_alloc(struct device *dev, size_t size,
>   	if (gfpflags_allow_blocking(gfp))
>   		page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
>   	if (!page)
> -		page = alloc_pages(gfp, page_order);
> +		page = alloc_pages_node(dev_to_node(dev), gfp, page_order);
>   	if (!page)
>   		return NULL;
>   
>

Christoph Hellwig Jan. 10, 2018, 3:30 p.m. UTC | #2
On Wed, Jan 10, 2018 at 12:06:22PM +0000, Robin Murphy wrote:
> On 10/01/18 08:00, Christoph Hellwig wrote:
>> To preserve the x86 behavior.
>
> And combined with patch 10/22 of the SWIOTLB refactoring, this means 
> SWIOTLB allocations will also end up NUMA-aware, right? Great, that's what 
> we want on arm64 too :)

Well, only for swiotlb allocations that can be satisfied by
dma_direct_alloc.  If we actually have to fall back to the swiotlb
buffers, there is no node affinity yet.
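
A rough sketch of the split being described, assuming the shape of the
allocation path in the refactored SWIOTLB code; the pool-allocation helper
name below is illustrative, not taken from that series:

    void *swiotlb_style_alloc(struct device *dev, size_t size,
            dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
    {
        /* First try the page allocator; node-aware after this patch. */
        void *vaddr = dma_direct_alloc(dev, size, dma_handle, gfp, attrs);

        /* Otherwise fall back to the bounce buffer pool, which is
         * carved out once at boot and has no node affinity yet. */
        if (!vaddr)
            vaddr = alloc_from_swiotlb_pool(dev, size, dma_handle);
        return vaddr;
    }
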
Robin Murphy Jan. 10, 2018, 4:49 p.m. UTC | #3
On 10/01/18 15:30, Christoph Hellwig wrote:
> On Wed, Jan 10, 2018 at 12:06:22PM +0000, Robin Murphy wrote:
>> On 10/01/18 08:00, Christoph Hellwig wrote:
>>> To preserve the x86 behavior.
>>
>> And combined with patch 10/22 of the SWIOTLB refactoring, this means
>> SWIOTLB allocations will also end up NUMA-aware, right? Great, that's what
>> we want on arm64 too :)
> 
> Well, only for swiotlb allocations that can be satisfied by
> dma_direct_alloc.  If we actually have to fall back to the swiotlb
> buffers, there is no node affinity yet.

Yeah, when I looked into it I reached the conclusion that per-node 
bounce buffers probably weren't worth it: if you have to bounce, you've 
already pretty much lost the performance game, and if the CPU doing the 
bouncing happens to be on a different node from the device, you've 
certainly lost either way. Per-node CMA zones we definitely *would* 
like, but that's a future problem (it looks technically feasible 
without huge infrastructure changes, but fiddly).

Robin.
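
To put the cost argument in concrete terms (an illustration, not kernel
code): a bounce is an extra memcpy() that direct DMA avoids entirely, and a
bounce buffer on a remote node adds cross-node latency on top of it:

    /* Illustrative only: the copy a bounced DMA-to-device transfer pays. */
    static void bounce_out(void *src, void *bounce_buf, size_t len)
    {
        /* If bounce_buf lives on a remote node, every cacheline here
         * is a cross-interconnect access before the device even starts
         * its DMA from bounce_buf. */
        memcpy(bounce_buf, src, len);
    }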

Patch

diff --git a/lib/dma-direct.c b/lib/dma-direct.c
index a9ae98be7af3..f04a424f91fa 100644
--- a/lib/dma-direct.c
+++ b/lib/dma-direct.c
@@ -38,7 +38,7 @@ static void *dma_direct_alloc(struct device *dev, size_t size,
 	if (gfpflags_allow_blocking(gfp))
 		page = dma_alloc_from_contiguous(dev, count, page_order, gfp);
 	if (!page)
-		page = alloc_pages(gfp, page_order);
+		page = alloc_pages_node(dev_to_node(dev), gfp, page_order);
 	if (!page)
 		return NULL;
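
For completeness, the node consumed here is whatever the bus code recorded
with set_dev_node(); a sketch of the assumed typical flow (pdev and node
below are placeholders, and this is not part of the patch):

    /* Bus/firmware code stores the device's locality, e.g. from an ACPI
     * _PXM proximity domain or a devicetree numa-node-id property: */
    set_dev_node(&pdev->dev, node);

    /* ...which is the value dev_to_node() later hands to the allocator: */
    page = alloc_pages_node(dev_to_node(&pdev->dev), gfp, page_order);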