Message ID | 1611905986-20155-3-git-send-email-anshuman.khandual@arm.com (mailing list archive)
---|---
State | New, archived
Headers | show
Series | arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
On 29.01.21 08:39, Anshuman Khandual wrote:
> There are multiple instances of pfn_to_section_nr() and __pfn_to_section()
> when CONFIG_SPARSEMEM is enabled. This can be just optimized if the memory
> section is fetched earlier. Hence bifurcate pfn_valid() into two different
> definitions depending on whether CONFIG_SPARSEMEM is enabled. Also replace
> the open coded pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn]().
> This does not cause any functional change.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>  arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
>  1 file changed, 31 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1141075e4d53..09adca90c57a 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>  	free_area_init(max_zone_pfns);
>  }
>
> +#ifdef CONFIG_SPARSEMEM
>  int pfn_valid(unsigned long pfn)
>  {
> -	phys_addr_t addr = pfn << PAGE_SHIFT;
> +	struct mem_section *ms = __pfn_to_section(pfn);
> +	phys_addr_t addr = __pfn_to_phys(pfn);

I'd just use PFN_PHYS() here, which is more frequently used in the kernel.

>
> -	if ((addr >> PAGE_SHIFT) != pfn)
> +	/*
> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
> +	 * pfn. Else it might lead to false positives when
> +	 * some of the upper bits are set, but the lower bits
> +	 * match a valid pfn.
> +	 */
> +	if (__phys_to_pfn(addr) != pfn)

and here PHYS_PFN(). Comment is helpful. :)

>  		return 0;
>
> -#ifdef CONFIG_SPARSEMEM
>  	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>  		return 0;
>
> -	if (!valid_section(__pfn_to_section(pfn)))
> +	if (!valid_section(ms))
>  		return 0;
>
>  	/*
> @@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
>  	 * memory sections covering all of hotplug memory including
>  	 * both normal and ZONE_DEVICE based.
>  	 */
> -	if (!early_section(__pfn_to_section(pfn)))
> -		return pfn_section_valid(__pfn_to_section(pfn), pfn);
> -#endif
> +	if (!early_section(ms))
> +		return pfn_section_valid(ms, pfn);
> +
>  	return memblock_is_map_memory(addr);
>  }
> +#else
> +int pfn_valid(unsigned long pfn)
> +{
> +	phys_addr_t addr = __pfn_to_phys(pfn);
> +
> +	/*
> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
> +	 * pfn. Else it might lead to false positives when
> +	 * some of the upper bits are set, but the lower bits
> +	 * match a valid pfn.
> +	 */
> +	if (__phys_to_pfn(addr) != pfn)
> +		return 0;
> +
> +	return memblock_is_map_memory(addr);
> +}

I think you can avoid duplicating the code by doing something like:

phys_addr_t addr = PFN_PHYS(pfn);

if (PHYS_PFN(addr) != pfn)
	return 0;

#ifdef CONFIG_SPARSEMEM
{
	struct mem_section *ms = __pfn_to_section(pfn);

	if (!valid_section(ms))
		return 0;
	if (!early_section(ms))
		return pfn_section_valid(ms, pfn);
}
#endif

return memblock_is_map_memory(addr);
On 1/29/21 3:37 PM, David Hildenbrand wrote:
> On 29.01.21 08:39, Anshuman Khandual wrote:
>> There are multiple instances of pfn_to_section_nr() and __pfn_to_section()
>> when CONFIG_SPARSEMEM is enabled. This can be just optimized if the memory
>> section is fetched earlier. Hence bifurcate pfn_valid() into two different
>> definitions depending on whether CONFIG_SPARSEMEM is enabled. Also replace
>> the open coded pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn]().
>> This does not cause any functional change.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Ard Biesheuvel <ardb@kernel.org>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>>  arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
>>  1 file changed, 31 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 1141075e4d53..09adca90c57a 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>>  	free_area_init(max_zone_pfns);
>>  }
>>
>> +#ifdef CONFIG_SPARSEMEM
>>  int pfn_valid(unsigned long pfn)
>>  {
>> -	phys_addr_t addr = pfn << PAGE_SHIFT;
>> +	struct mem_section *ms = __pfn_to_section(pfn);
>> +	phys_addr_t addr = __pfn_to_phys(pfn);
>
> I'd just use PFN_PHYS() here, which is more frequently used in the kernel.

Sure, will replace.

>> -	if ((addr >> PAGE_SHIFT) != pfn)
>> +	/*
>> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
>> +	 * pfn. Else it might lead to false positives when
>> +	 * some of the upper bits are set, but the lower bits
>> +	 * match a valid pfn.
>> +	 */
>> +	if (__phys_to_pfn(addr) != pfn)
>
> and here PHYS_PFN(). Comment is helpful. :)

Sure, will replace.

>>  		return 0;
>>
>> -#ifdef CONFIG_SPARSEMEM
>>  	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>>  		return 0;
>>
>> -	if (!valid_section(__pfn_to_section(pfn)))
>> +	if (!valid_section(ms))
>>  		return 0;
>>
>>  	/*
>> @@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
>>  	 * memory sections covering all of hotplug memory including
>>  	 * both normal and ZONE_DEVICE based.
>>  	 */
>> -	if (!early_section(__pfn_to_section(pfn)))
>> -		return pfn_section_valid(__pfn_to_section(pfn), pfn);
>> -#endif
>> +	if (!early_section(ms))
>> +		return pfn_section_valid(ms, pfn);
>> +
>>  	return memblock_is_map_memory(addr);
>>  }
>> +#else
>> +int pfn_valid(unsigned long pfn)
>> +{
>> +	phys_addr_t addr = __pfn_to_phys(pfn);
>> +
>> +	/*
>> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
>> +	 * pfn. Else it might lead to false positives when
>> +	 * some of the upper bits are set, but the lower bits
>> +	 * match a valid pfn.
>> +	 */
>> +	if (__phys_to_pfn(addr) != pfn)
>> +		return 0;
>> +
>> +	return memblock_is_map_memory(addr);
>> +}
>
> I think you can avoid duplicating the code by doing something like:

Right, and it also looks more compact. I initially thought about it but was apprehensive about the style of an #ifdef { } #endif code block inside the function. After this change, the resulting patch also clears the checkpatch.pl test. Will do the change.

> phys_addr_t addr = PFN_PHYS(pfn);
>
> if (PHYS_PFN(addr) != pfn)
> 	return 0;
>
> #ifdef CONFIG_SPARSEMEM
> {
> 	struct mem_section *ms = __pfn_to_section(pfn);
>
> 	if (!valid_section(ms))
> 		return 0;
> 	if (!early_section(ms))
> 		return pfn_section_valid(ms, pfn);
> }
> #endif
> return memblock_is_map_memory(addr);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1141075e4d53..09adca90c57a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 	free_area_init(max_zone_pfns);
 }
 
+#ifdef CONFIG_SPARSEMEM
 int pfn_valid(unsigned long pfn)
 {
-	phys_addr_t addr = pfn << PAGE_SHIFT;
+	struct mem_section *ms = __pfn_to_section(pfn);
+	phys_addr_t addr = __pfn_to_phys(pfn);
 
-	if ((addr >> PAGE_SHIFT) != pfn)
+	/*
+	 * Ensure the upper PAGE_SHIFT bits are clear in the
+	 * pfn. Else it might lead to false positives when
+	 * some of the upper bits are set, but the lower bits
+	 * match a valid pfn.
+	 */
+	if (__phys_to_pfn(addr) != pfn)
 		return 0;
 
-#ifdef CONFIG_SPARSEMEM
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
 
-	if (!valid_section(__pfn_to_section(pfn)))
+	if (!valid_section(ms))
 		return 0;
 
 	/*
@@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
 	 * memory sections covering all of hotplug memory including
 	 * both normal and ZONE_DEVICE based.
 	 */
-	if (!early_section(__pfn_to_section(pfn)))
-		return pfn_section_valid(__pfn_to_section(pfn), pfn);
-#endif
+	if (!early_section(ms))
+		return pfn_section_valid(ms, pfn);
+
 	return memblock_is_map_memory(addr);
 }
+#else
+int pfn_valid(unsigned long pfn)
+{
+	phys_addr_t addr = __pfn_to_phys(pfn);
+
+	/*
+	 * Ensure the upper PAGE_SHIFT bits are clear in the
+	 * pfn. Else it might lead to false positives when
+	 * some of the upper bits are set, but the lower bits
+	 * match a valid pfn.
+	 */
+	if (__phys_to_pfn(addr) != pfn)
+		return 0;
+
+	return memblock_is_map_memory(addr);
+}
+#endif
 EXPORT_SYMBOL(pfn_valid);
 
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
There are multiple instances of pfn_to_section_nr() and __pfn_to_section()
when CONFIG_SPARSEMEM is enabled. This can be just optimized if the memory
section is fetched earlier. Hence bifurcate pfn_valid() into two different
definitions depending on whether CONFIG_SPARSEMEM is enabled. Also replace
the open coded pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn]().
This does not cause any functional change.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)