
[-next] mm: page_alloc: simplify has_managed_dma()

Message ID 20230529144022.42927-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series [-next] mm: page_alloc: simplify has_managed_dma()

Commit Message

Kefeng Wang May 29, 2023, 2:40 p.m. UTC
ZONE_DMA should only exist on node 0, so checking NODE_DATA(0) is
enough. Simplify has_managed_dma() and make it inline.

Cc: Baoquan He <bhe@redhat.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mmzone.h | 21 +++++++++++----------
 mm/page_alloc.c        | 15 ---------------
 2 files changed, 11 insertions(+), 25 deletions(-)

Comments

Matthew Wilcox May 29, 2023, 2:26 p.m. UTC | #1
On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
> ZONE_DMA should only exist on node 0, so checking NODE_DATA(0) is
> enough. Simplify has_managed_dma() and make it inline.

That's true on x86, but is it true on all architectures?
Kefeng Wang May 30, 2023, 2:10 a.m. UTC | #2
On 2023/5/29 22:26, Matthew Wilcox wrote:
> On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
>> ZONE_DMA should only exist on node 0, so checking NODE_DATA(0) is
>> enough. Simplify has_managed_dma() and make it inline.
> 
> That's true on x86, but is it true on all architectures?

There is no documentation about the NUMA node placement of ZONE_DMA, +Mike

I used 'git grep -w ZONE_DMA arch/'

1) The following archs have no NUMA support, so it's true for them:

arch/alpha/mm/init.c:	max_zone_pfn[ZONE_DMA] = dma_pfn;
arch/arm/mm/init.c:	max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
arch/m68k/mm/init.c:	max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
arch/m68k/mm/mcfmmu.c:	max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend);
arch/m68k/mm/motorola.c:	max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
arch/m68k/mm/sun3mmu.c:	max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
arch/microblaze/mm/init.c:	zones_size[ZONE_DMA] = max_low_pfn;
arch/microblaze/mm/init.c:	zones_size[ZONE_DMA] = max_pfn;


2) A quick check of the following archs suggests it is true for them too:

arch/mips/mm/init.c:	max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
arch/powerpc/mm/mem.c:	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
arch/s390/mm/init.c:	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
arch/sparc/mm/srmmu.c:		max_zone_pfn[ZONE_DMA] = max_low_pfn;
arch/x86/mm/init.c:	max_zone_pfns[ZONE_DMA]		= min(MAX_DMA_PFN, max_low_pfn);
arch/arm64/mm/init.c:	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
arch/loongarch/mm/init.c:	max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
Baoquan He May 30, 2023, 4:18 a.m. UTC | #3
On 05/30/23 at 10:10am, Kefeng Wang wrote:
> 
> 
> On 2023/5/29 22:26, Matthew Wilcox wrote:
> > On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
> > > ZONE_DMA should only exist on node 0, so checking NODE_DATA(0) is
> > > enough. Simplify has_managed_dma() and make it inline.
> > 
> > That's true on x86, but is it true on all architectures?
> 
> There is no documentation about the NUMA node placement of ZONE_DMA, +Mike
> 
> I used 'git grep -w ZONE_DMA arch/'

willy is right. max_zone_pfn can only limit the range of a zone, but
can't decide which node a zone is placed on. The memory layout is
decided by firmware. I searched the commit log and found the commit
below, which gives a good example.

commit c1d0da83358a2316d9be7f229f26126dbaa07468
Author: Laurent Dufour <ldufour@linux.ibm.com>
Date:   Fri Sep 25 21:19:28 2020 -0700

    mm: replace memmap_context by meminit_context
    
    Patch series "mm: fix memory to node bad links in sysfs", v3.
    
    Sometimes, firmware may expose interleaved memory layout like this:
    
     Early memory node ranges
       node   1: [mem 0x0000000000000000-0x000000011fffffff]
       node   2: [mem 0x0000000120000000-0x000000014fffffff]
       node   1: [mem 0x0000000150000000-0x00000001ffffffff]
       node   0: [mem 0x0000000200000000-0x000000048fffffff]
       node   2: [mem 0x0000000490000000-0x00000007ffffffff]
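
To make this concrete, below is a minimal standalone sketch (userspace
C, not kernel code) that models the ranges quoted above; the 16MB
ZONE_DMA boundary is the x86 value and is only used as an example. With
this layout the DMA-capable range sits on node 1, so checking
NODE_DATA(0) alone would report no managed DMA zone, while the existing
for_each_online_pgdat() walk in has_managed_dma() still finds it.

/* Standalone illustration only; ranges are taken from the commit above. */
#include <stdio.h>

struct mem_range { int nid; unsigned long long start, end; };

static const struct mem_range ranges[] = {
	{ 1, 0x0000000000000000ULL, 0x000000011fffffffULL },
	{ 2, 0x0000000120000000ULL, 0x000000014fffffffULL },
	{ 1, 0x0000000150000000ULL, 0x00000001ffffffffULL },
	{ 0, 0x0000000200000000ULL, 0x000000048fffffffULL },
	{ 2, 0x0000000490000000ULL, 0x00000007ffffffffULL },
};

int main(void)
{
	/* On x86, ZONE_DMA covers the first 16MB of physical memory. */
	const unsigned long long dma_end = 16ULL << 20;
	unsigned int i;

	for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
		if (ranges[i].start < dma_end)
			printf("DMA-capable memory starts on node %d\n",
			       ranges[i].nid);
	}
	return 0;
}

Compiled and run, it prints node 1, which is exactly the case the
per-node loop in the current has_managed_dma() handles.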

> 
> 1) The following archs have no NUMA support, so it's true for them:
> 
> arch/alpha/mm/init.c:	max_zone_pfn[ZONE_DMA] = dma_pfn;
> arch/arm/mm/init.c:	max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
> arch/m68k/mm/init.c:	max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
> arch/m68k/mm/mcfmmu.c:	max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend);
> arch/m68k/mm/motorola.c:	max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
> arch/m68k/mm/sun3mmu.c:	max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
> arch/microblaze/mm/init.c:	zones_size[ZONE_DMA] = max_low_pfn;
> arch/microblaze/mm/init.c:	zones_size[ZONE_DMA] = max_pfn;
> 
> 
> 2) A quick check of the following archs suggests it is true for them too:
> 
> arch/mips/mm/init.c:	max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
> arch/powerpc/mm/mem.c:	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
> arch/s390/mm/init.c:	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
> arch/sparc/mm/srmmu.c:		max_zone_pfn[ZONE_DMA] = max_low_pfn;
> arch/x86/mm/init.c:	max_zone_pfns[ZONE_DMA]		= min(MAX_DMA_PFN, max_low_pfn);
> arch/arm64/mm/init.c:	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
> arch/loongarch/mm/init.c:	max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
>
Kefeng Wang May 30, 2023, 6:40 a.m. UTC | #4
On 2023/5/30 12:18, Baoquan He wrote:
> On 05/30/23 at 10:10am, Kefeng Wang wrote:
>>
>>
>> On 2023/5/29 22:26, Matthew Wilcox wrote:
>>> On Mon, May 29, 2023 at 10:40:22PM +0800, Kefeng Wang wrote:
>>>> ZONE_DMA should only exist on node 0, so checking NODE_DATA(0) is
>>>> enough. Simplify has_managed_dma() and make it inline.
>>>
>>> That's true on x86, but is it true on all architectures?
>>
>> There is no documentation about the NUMA node placement of ZONE_DMA, +Mike
>>
>> I used 'git grep -w ZONE_DMA arch/'
> 
> willy is right. max_zone_pfn can only limit the range of a zone, but
> can't decide which node a zone is placed on. The memory layout is
> decided by firmware. I searched the commit log and found the commit
> below, which gives a good example.
> 
> commit c1d0da83358a2316d9be7f229f26126dbaa07468
> Author: Laurent Dufour <ldufour@linux.ibm.com>
> Date:   Fri Sep 25 21:19:28 2020 -0700
> 
>      mm: replace memmap_context by meminit_context
>      
>      Patch series "mm: fix memory to node bad links in sysfs", v3.
>      
>      Sometimes, firmware may expose interleaved memory layout like this:
>      
>       Early memory node ranges
>         node   1: [mem 0x0000000000000000-0x000000011fffffff]
>         node   2: [mem 0x0000000120000000-0x000000014fffffff]
>         node   1: [mem 0x0000000150000000-0x00000001ffffffff]
>         node   0: [mem 0x0000000200000000-0x000000048fffffff]
>         node   2: [mem 0x0000000490000000-0x00000007ffffffff]

Oh, it looks strange, but it can occur if the firmware reports memory this way.

Thanks Willy and Baoquan, please ignore the patch.

Patch

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5a7ada0413da..48e9fd8eccb4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1503,16 +1503,6 @@  static inline int is_highmem(struct zone *zone)
 	return is_highmem_idx(zone_idx(zone));
 }
 
-#ifdef CONFIG_ZONE_DMA
-bool has_managed_dma(void);
-#else
-static inline bool has_managed_dma(void)
-{
-	return false;
-}
-#endif
-
-
 #ifndef CONFIG_NUMA
 
 extern struct pglist_data contig_page_data;
@@ -1527,6 +1517,17 @@  static inline struct pglist_data *NODE_DATA(int nid)
 
 #endif /* !CONFIG_NUMA */
 
+static inline bool has_managed_dma(void)
+{
+#ifdef CONFIG_ZONE_DMA
+	struct zone *zone = NODE_DATA(0)->node_zones + ZONE_DMA;
+
+	if (managed_zone(zone))
+		return true;
+#endif
+	return false;
+}
+
 extern struct pglist_data *first_online_pgdat(void);
 extern struct pglist_data *next_online_pgdat(struct pglist_data *pgdat);
 extern struct zone *next_zone(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e671c747892f..e847b39939b8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6613,18 +6613,3 @@  bool put_page_back_buddy(struct page *page)
 	return ret;
 }
 #endif
-
-#ifdef CONFIG_ZONE_DMA
-bool has_managed_dma(void)
-{
-	struct pglist_data *pgdat;
-
-	for_each_online_pgdat(pgdat) {
-		struct zone *zone = &pgdat->node_zones[ZONE_DMA];
-
-		if (managed_zone(zone))
-			return true;
-	}
-	return false;
-}
-#endif /* CONFIG_ZONE_DMA */