Message ID | 1522636236-12625-6-git-send-email-hejianet@gmail.com (mailing list archive)
State      | New, archived
On 2 April 2018 at 04:30, Jia He <hejianet@gmail.com> wrote:
> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
> where possible") optimized the loop in memmap_init_zone(). But there is
> still some room for improvement. E.g. in early_pfn_valid(), if pfn and
> pfn+1 are in the same memblock region, we can record the last returned
> memblock region index and check whether pfn++ is still in the same region.
>
> Currently it only improves the performance on arm64 and has no
> impact on other arches.
>

How much does it improve the performance? And in which cases?

I guess it improves boot time on systems with physical address spaces
that are sparsely populated with DRAM, but you really have to quantify
this if you want other people to care.

> Signed-off-by: Jia He <jia.he@hxt-semitech.com>
> ---
>  include/linux/mmzone.h | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index f9c0c46..079f468 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1268,9 +1268,14 @@ static inline int pfn_present(unsigned long pfn)
>  })
>  #else
>  #define pfn_to_nid(pfn)         (0)
> -#endif
> +#endif /*CONFIG_NUMA*/
>
> +#ifdef CONFIG_HAVE_ARCH_PFN_VALID
> +#define early_pfn_valid(pfn)    pfn_valid_region(pfn)
> +#else
>  #define early_pfn_valid(pfn)    pfn_valid(pfn)
> +#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
> +
>  void sparse_init(void);
>  #else
>  #define sparse_init()   do {} while (0)
> --
> 2.7.4
>
On 4/2/2018 3:00 PM, Ard Biesheuvel wrote:
> How much does it improve the performance? And in which cases?
>
> I guess it improves boot time on systems with physical address spaces
> that are sparsely populated with DRAM, but you really have to quantify
> this if you want other people to care.

Yes, I wrote the performance numbers in patch 0/5. I will add them to
the patch description later.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f9c0c46..079f468 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1268,9 +1268,14 @@ static inline int pfn_present(unsigned long pfn)
 })
 #else
 #define pfn_to_nid(pfn)         (0)
-#endif
+#endif /*CONFIG_NUMA*/

+#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+#define early_pfn_valid(pfn)    pfn_valid_region(pfn)
+#else
 #define early_pfn_valid(pfn)    pfn_valid(pfn)
+#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
+
 void sparse_init(void);
 #else
 #define sparse_init()   do {} while (0)
Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(). But there is
still some room for improvement. E.g. in early_pfn_valid(), if pfn and
pfn+1 are in the same memblock region, we can record the last returned
memblock region index and check whether pfn++ is still in the same region.

Currently it only improves the performance on arm64 and has no
impact on other architectures.

Signed-off-by: Jia He <jia.he@hxt-semitech.com>
---
 include/linux/mmzone.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)