[v2] mm/sparse: only sub-section aligned range would be populated

Message ID: 20200703031828.14645-1-richard.weiyang@linux.alibaba.com (mailing list archive)
State: New, archived
Series: [v2] mm/sparse: only sub-section aligned range would be populated

Commit Message

Wei Yang July 3, 2020, 3:18 a.m. UTC
There are two code paths which invoke __populate_section_memmap():

  * sparse_init_nid()
  * sparse_add_section()

In both cases, the memory range is guaranteed to be sub-section aligned:

  * we pass PAGES_PER_SECTION to sparse_init_nid()
  * the range is checked by check_pfn_span() before calling
    sparse_add_section()

Also, in the counterpart of __populate_section_memmap() we don't do such a
calculation and check, since the range has already been checked by
check_pfn_span() in __remove_pages().

Drop the calculation and just check the alignment, to keep it simple and
consistent with its counterpart.
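
For reference, check_pfn_span() is what enforces that alignment on the
hot-add path. A simplified sketch of that helper (the real one lives in
mm/memory_hotplug.c and may differ in wording and details):

	static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
			const char *reason)
	{
		/*
		 * With SPARSEMEM_VMEMMAP the minimum hot-(un)plug granularity
		 * is a sub-section; otherwise it is a full section.
		 */
		unsigned long min_align = IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ?
					  PAGES_PER_SUBSECTION : PAGES_PER_SECTION;

		if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
			WARN(1, "Misaligned %s pfn range: %#lx + %#lx\n",
			     reason, pfn, nr_pages);
			return -EINVAL;
		}
		return 0;
	}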

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>

---
v2:
  * add a WARN_ON_ONCE() for unaligned ranges, as suggested by David
---
 mm/sparse-vmemmap.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

Comments

David Hildenbrand July 3, 2020, 7:14 a.m. UTC | #1
On 03.07.20 05:18, Wei Yang wrote:
> There are two code paths which invoke __populate_section_memmap():
>
>   * sparse_init_nid()
>   * sparse_add_section()
>
> In both cases, the memory range is guaranteed to be sub-section aligned:
>
>   * we pass PAGES_PER_SECTION to sparse_init_nid()
>   * the range is checked by check_pfn_span() before calling
>     sparse_add_section()
>
> Also, in the counterpart of __populate_section_memmap() we don't do such a
> calculation and check, since the range has already been checked by
> check_pfn_span() in __remove_pages().
>
> Drop the calculation and just check the alignment, to keep it simple and
> consistent with its counterpart.
> 
> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> 
> ---
> v2:
>   * add a WARN_ON_ONCE() for unaligned ranges, as suggested by David
> ---
>  mm/sparse-vmemmap.c | 20 ++++++--------------
>  1 file changed, 6 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 0db7738d76e9..8d3a1b6287c5 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
>  struct page * __meminit __populate_section_memmap(unsigned long pfn,
>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
>  {
> -	unsigned long start;
> -	unsigned long end;
> -
> -	/*
> -	 * The minimum granularity of memmap extensions is
> -	 * PAGES_PER_SUBSECTION as allocations are tracked in the
> -	 * 'subsection_map' bitmap of the section.
> -	 */
> -	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
> -	pfn &= PAGE_SUBSECTION_MASK;
> -	nr_pages = end - pfn;
> -
> -	start = (unsigned long) pfn_to_page(pfn);
> -	end = start + nr_pages * sizeof(struct page);
> +	unsigned long start = (unsigned long) pfn_to_page(pfn);
> +	unsigned long end = start + nr_pages * sizeof(struct page);
> +
> +	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
> +		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
> +		return NULL;

Nit: indentation of both IS_ALIGNED should match.
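I.e. with the second check lined up under the first, along the lines of:

	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
			 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
		return NULL;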

Acked-by: David Hildenbrand <david@redhat.com>

>  
>  	if (vmemmap_populate(start, end, nid, altmap))
>  		return NULL;
>
Wei Yang Aug. 5, 2020, 9:49 p.m. UTC | #2
On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>There are two code paths which invoke __populate_section_memmap():
>
>  * sparse_init_nid()
>  * sparse_add_section()
>
>In both cases, the memory range is guaranteed to be sub-section aligned:
>
>  * we pass PAGES_PER_SECTION to sparse_init_nid()
>  * the range is checked by check_pfn_span() before calling
>    sparse_add_section()
>
>Also, in the counterpart of __populate_section_memmap() we don't do such a
>calculation and check, since the range has already been checked by
>check_pfn_span() in __remove_pages().
>
>Drop the calculation and just check the alignment, to keep it simple and
>consistent with its counterpart.
>
>Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>

Hi, Andrew,

Has this one been picked up yet?

>---
>v2:
>  * add a WARN_ON_ONCE() for unaligned ranges, as suggested by David
>---
> mm/sparse-vmemmap.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
>diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>index 0db7738d76e9..8d3a1b6287c5 100644
>--- a/mm/sparse-vmemmap.c
>+++ b/mm/sparse-vmemmap.c
>@@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
> struct page * __meminit __populate_section_memmap(unsigned long pfn,
> 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> {
>-	unsigned long start;
>-	unsigned long end;
>-
>-	/*
>-	 * The minimum granularity of memmap extensions is
>-	 * PAGES_PER_SUBSECTION as allocations are tracked in the
>-	 * 'subsection_map' bitmap of the section.
>-	 */
>-	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
>-	pfn &= PAGE_SUBSECTION_MASK;
>-	nr_pages = end - pfn;
>-
>-	start = (unsigned long) pfn_to_page(pfn);
>-	end = start + nr_pages * sizeof(struct page);
>+	unsigned long start = (unsigned long) pfn_to_page(pfn);
>+	unsigned long end = start + nr_pages * sizeof(struct page);
>+
>+	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>+		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>+		return NULL;
> 
> 	if (vmemmap_populate(start, end, nid, altmap))
> 		return NULL;
>-- 
>2.20.1 (Apple Git-117)
David Hildenbrand Aug. 6, 2020, 7:29 a.m. UTC | #3
On 05.08.20 23:49, Wei Yang wrote:
> On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>> There are two code paths which invoke __populate_section_memmap():
>>
>>  * sparse_init_nid()
>>  * sparse_add_section()
>>
>> In both cases, the memory range is guaranteed to be sub-section aligned:
>>
>>  * we pass PAGES_PER_SECTION to sparse_init_nid()
>>  * the range is checked by check_pfn_span() before calling
>>    sparse_add_section()
>>
>> Also, in the counterpart of __populate_section_memmap() we don't do such a
>> calculation and check, since the range has already been checked by
>> check_pfn_span() in __remove_pages().
>>
>> Drop the calculation and just check the alignment, to keep it simple and
>> consistent with its counterpart.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>>
> 
> Hi, Andrew,
> 
> Has this one been picked up yet?

I can spot it in -next via the -mm tree:

https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=68ad9becb23be14622e39ed36e5b0621a90a41d9
Wei Yang Aug. 6, 2020, 9:59 a.m. UTC | #4
On Thu, Aug 06, 2020 at 09:29:36AM +0200, David Hildenbrand wrote:
>On 05.08.20 23:49, Wei Yang wrote:
>> On Fri, Jul 03, 2020 at 11:18:28AM +0800, Wei Yang wrote:
>>> There are two code paths which invoke __populate_section_memmap():
>>>
>>>  * sparse_init_nid()
>>>  * sparse_add_section()
>>>
>>> In both cases, the memory range is guaranteed to be sub-section aligned:
>>>
>>>  * we pass PAGES_PER_SECTION to sparse_init_nid()
>>>  * the range is checked by check_pfn_span() before calling
>>>    sparse_add_section()
>>>
>>> Also, in the counterpart of __populate_section_memmap() we don't do such a
>>> calculation and check, since the range has already been checked by
>>> check_pfn_span() in __remove_pages().
>>>
>>> Drop the calculation and just check the alignment, to keep it simple and
>>> consistent with its counterpart.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>>>
>> 
>> Hi, Andrew,
>> 
>> Has this one been picked up yet?
>
>I can spot it in -next via the -mm tree:
>
>https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=68ad9becb23be14622e39ed36e5b0621a90a41d9
>

Thanks ;-)

Next time I will check this repo first.

>
>-- 
>Thanks,
>
>David / dhildenb

Patch

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0db7738d76e9..8d3a1b6287c5 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -247,20 +247,12 @@  int __meminit vmemmap_populate_basepages(unsigned long start,
 struct page * __meminit __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-	unsigned long start;
-	unsigned long end;
-
-	/*
-	 * The minimum granularity of memmap extensions is
-	 * PAGES_PER_SUBSECTION as allocations are tracked in the
-	 * 'subsection_map' bitmap of the section.
-	 */
-	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
-	pfn &= PAGE_SUBSECTION_MASK;
-	nr_pages = end - pfn;
-
-	start = (unsigned long) pfn_to_page(pfn);
-	end = start + nr_pages * sizeof(struct page);
+	unsigned long start = (unsigned long) pfn_to_page(pfn);
+	unsigned long end = start + nr_pages * sizeof(struct page);
+
+	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
+		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
+		return NULL;
 
 	if (vmemmap_populate(start, end, nid, altmap))
 		return NULL;
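
With this change, a range that is not sub-section aligned (which no current
caller should produce) now triggers a one-time warning and a NULL return
instead of being silently widened to sub-section boundaries. A rough sketch
of how the hot-add caller in mm/sparse.c consumes that NULL (simplified,
subsection_map bookkeeping elided; not the exact upstream code):

	static struct page * __meminit section_activate(int nid, unsigned long pfn,
			unsigned long nr_pages, struct vmem_altmap *altmap)
	{
		struct page *memmap;

		/* ... subsection_map bookkeeping elided ... */

		memmap = populate_section_memmap(pfn, nr_pages, nid, altmap);
		if (!memmap) {
			/* sparse_add_section() sees the error and fails the hot-add */
			section_deactivate(pfn, nr_pages, altmap);
			return ERR_PTR(-ENOMEM);
		}

		return memmap;
	}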