[v6] mm/pgmap: Use correct alignment when looking at first pfn from a region

Message ID 20190917153129.12905-1-aneesh.kumar@linux.ibm.com
State New

Commit Message

Aneesh Kumar K.V Sept. 17, 2019, 3:31 p.m. UTC
vmem_altmap_offset() adjusts the section aligned base_pfn offset.
So we need to make sure we account for the same when computing base_pfn.

i.e., for the altmap_valid case, our pfn_first should be:

pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
* changes from v5
* update commit subject and use linux-mm for merge

 mm/memremap.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
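The arithmetic the commit message describes can be sketched in user space as follows. The structs here are simplified stand-ins, not the real kernel definitions, and only model the fields the computation touches; vmem_altmap_offset() mirrors the kernel's reserve-plus-free accounting.

```c
#include <assert.h>

/* Simplified stand-in for the kernel's struct vmem_altmap: only the
 * fields used by the pfn_first arithmetic are modeled here. */
struct vmem_altmap {
	unsigned long base_pfn;	/* section-aligned base of the mapping */
	unsigned long reserve;	/* pfns reserved at the base */
	unsigned long free;	/* pfns consumed for the struct page array */
};

/* Mirrors the kernel's vmem_altmap_offset(): number of pfns, counted
 * from base_pfn, that the altmap consumes and that must be skipped
 * before the first usable device pfn. */
static unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
{
	return altmap ? altmap->reserve + altmap->free : 0;
}

/* pfn_first as the patch computes it for the altmap case. */
static unsigned long pfn_first_altmap(const struct vmem_altmap *altmap)
{
	return altmap->base_pfn + vmem_altmap_offset(altmap);
}
```

Because the offset is defined relative to base_pfn, adding it to anything other than base_pfn only gives the right answer when the two starting points coincide.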

Comments

Ralph Campbell Sept. 17, 2019, 9:57 p.m. UTC | #1
On 9/17/19 8:31 AM, Aneesh Kumar K.V wrote:
> vmem_altmap_offset() adjust the section aligned base_pfn offset.
> So we need to make sure we account for the same when computing base_pfn.
> 
> ie, for altmap_valid case, our pfn_first should be:
> 
> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
> * changes from v5
> * update commit subject and use linux-mm for merge
> 
>   mm/memremap.c | 12 ++++++++++--
>   1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memremap.c b/mm/memremap.c
> index ed70c4e8e52a..233908d7df75 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -54,8 +54,16 @@ static void pgmap_array_delete(struct resource *res)
>   
>   static unsigned long pfn_first(struct dev_pagemap *pgmap)
>   {
> -	return PHYS_PFN(pgmap->res.start) +
> -		vmem_altmap_offset(pgmap_altmap(pgmap));
> +	const struct resource *res = &pgmap->res;
> +	struct vmem_altmap *altmap = pgmap_altmap(pgmap);
> +	unsigned long pfn;
> +
> +	if (altmap) {
> +		pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
> +	} else

A nit: you don't need the '{}'s

> +		pfn = PHYS_PFN(res->start);
> +
> +	return pfn;
>   }
>   
>   static unsigned long pfn_end(struct dev_pagemap *pgmap)
>
Andrew Morton Sept. 19, 2019, 7:25 p.m. UTC | #2
On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:

> vmem_altmap_offset() adjust the section aligned base_pfn offset.
> So we need to make sure we account for the same when computing base_pfn.
> 
> ie, for altmap_valid case, our pfn_first should be:
> 
> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);

What are the user-visible runtime effects of this change?
Aneesh Kumar K.V Sept. 25, 2019, 3:51 a.m. UTC | #3
Andrew Morton <akpm@linux-foundation.org> writes:

> On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
>
>> vmem_altmap_offset() adjust the section aligned base_pfn offset.
>> So we need to make sure we account for the same when computing base_pfn.
>> 
>> ie, for altmap_valid case, our pfn_first should be:
>> 
>> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
>
> What are the user-visible runtime effects of this change?

This was found by code inspection. If the pmem region is not correctly
section aligned, we can skip pfns while iterating device pfns using
	for_each_device_pfn(pfn, pgmap)


I still would want Dan to ack the change though.

-aneesh
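The skip described above can be sketched with made-up numbers. Assuming base_pfn is the region start rounded down to a section boundary (256 pfns per section here is a hypothetical geometry), the pre-patch and patched formulas disagree by exactly the misalignment, so the old iteration start lands past the first usable pfns:

```c
#include <assert.h>

/* Hypothetical geometry: 256 pfns per section. */
#define PFNS_PER_SECTION 256UL

/* base_pfn is the region start rounded down to a section boundary,
 * as the altmap setup arranges. */
static unsigned long section_align_down(unsigned long pfn)
{
	return pfn & ~(PFNS_PER_SECTION - 1);
}

/* Pre-patch: offset (counted from the section-aligned base) is added
 * to the unaligned resource start, overshooting the real first pfn. */
static unsigned long pfn_first_old(unsigned long res_start_pfn,
				   unsigned long offset)
{
	return res_start_pfn + offset;
}

/* Patched: offset is added to the base it was counted from. */
static unsigned long pfn_first_new(unsigned long base_pfn,
				   unsigned long offset)
{
	return base_pfn + offset;
}
```

When res_start_pfn is already section aligned, the two results coincide and the old code was correct; the divergence only appears for unaligned regions, which is why the bug survived inspection for so long.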
Andrew Morton Nov. 30, 2019, 11:13 p.m. UTC | #4
On Wed, 25 Sep 2019 09:21:02 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:

> Andrew Morton <akpm@linux-foundation.org> writes:
> 
> > On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
> >
> >> vmem_altmap_offset() adjust the section aligned base_pfn offset.
> >> So we need to make sure we account for the same when computing base_pfn.
> >> 
> >> ie, for altmap_valid case, our pfn_first should be:
> >> 
> >> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
> >
> > What are the user-visible runtime effects of this change?
> 
> This was found by code inspection. If the pmem region is not correctly
> section aligned we can skip pfns while iterating device pfn using 
> 	for_each_device_pfn(pfn, pgmap)
> 
> 
> I still would want Dan to ack the change though.
> 

Dan?


From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: mm/pgmap: use correct alignment when looking at first pfn from a region

vmem_altmap_offset() adjusts the section aligned base_pfn offset.  So we
need to make sure we account for the same when computing base_pfn.

ie, for altmap_valid case, our pfn_first should be:

pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);

This was found by code inspection. If the pmem region is not correctly
section aligned, we can skip pfns while iterating device pfns using

	for_each_device_pfn(pfn, pgmap)

[akpm@linux-foundation.org: coding style fixes]
Link: http://lkml.kernel.org/r/20190917153129.12905-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memremap.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

--- a/mm/memremap.c~mm-pgmap-use-correct-alignment-when-looking-at-first-pfn-from-a-region
+++ a/mm/memremap.c
@@ -55,8 +55,16 @@ static void pgmap_array_delete(struct re
 
 static unsigned long pfn_first(struct dev_pagemap *pgmap)
 {
-	return PHYS_PFN(pgmap->res.start) +
-		vmem_altmap_offset(pgmap_altmap(pgmap));
+	const struct resource *res = &pgmap->res;
+	struct vmem_altmap *altmap = pgmap_altmap(pgmap);
+	unsigned long pfn;
+
+	if (altmap)
+		pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+	else
+		pfn = PHYS_PFN(res->start);
+
+	return pfn;
 }
 
 static unsigned long pfn_end(struct dev_pagemap *pgmap)
Dan Williams Dec. 3, 2019, 12:50 a.m. UTC | #5
On Sat, Nov 30, 2019 at 3:13 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 25 Sep 2019 09:21:02 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
>
> > Andrew Morton <akpm@linux-foundation.org> writes:
> >
> > > On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
> > >
> > >> vmem_altmap_offset() adjust the section aligned base_pfn offset.
> > >> So we need to make sure we account for the same when computing base_pfn.
> > >>
> > >> ie, for altmap_valid case, our pfn_first should be:
> > >>
> > >> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
> > >
> > > What are the user-visible runtime effects of this change?
> >
> > This was found by code inspection. If the pmem region is not correctly
> > section aligned we can skip pfns while iterating device pfn using
> >       for_each_device_pfn(pfn, pgmap)
> >
> >
> > I still would want Dan to ack the change though.
> >
>
> Dan?
>
>
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
> Subject: mm/pgmap: use correct alignment when looking at first pfn from a region
>
> vmem_altmap_offset() adjusts the section aligned base_pfn offset.  So we
> need to make sure we account for the same when computing base_pfn.
>
> ie, for altmap_valid case, our pfn_first should be:
>
> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
>
> This was found by code inspection. If the pmem region is not correctly
> section aligned we can skip pfns while iterating device pfn using
>
>         for_each_device_pfn(pfn, pgmap)
>
> [akpm@linux-foundation.org: coding style fixes]
> Link: http://lkml.kernel.org/r/20190917153129.12905-1-aneesh.kumar@linux.ibm.com
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
>  mm/memremap.c |   12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> --- a/mm/memremap.c~mm-pgmap-use-correct-alignment-when-looking-at-first-pfn-from-a-region
> +++ a/mm/memremap.c
> @@ -55,8 +55,16 @@ static void pgmap_array_delete(struct re
>
>  static unsigned long pfn_first(struct dev_pagemap *pgmap)
>  {
> -       return PHYS_PFN(pgmap->res.start) +
> -               vmem_altmap_offset(pgmap_altmap(pgmap));
> +       const struct resource *res = &pgmap->res;
> +       struct vmem_altmap *altmap = pgmap_altmap(pgmap);
> +       unsigned long pfn;
> +
> +       if (altmap)
> +               pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
> +       else
> +               pfn = PHYS_PFN(res->start);

This would only be a problem if res->start is not subsection aligned.
Is that bug triggering in your case, or is this just inspection? Now
that subsections can be assumed as the minimum mapping granularity,
I'd rather clean up the implementation to eliminate altmap->base_pfn,
or at least assert that PHYS_PFN(res->start) and altmap->base_pfn are
always identical.

Otherwise ->base_pfn is supposed to be just a convenient way to recall
the bounds of the memory hotplug operation deeper in the vmemmap
setup.
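The assertion Dan suggests could look roughly like the sketch below. The PHYS_PFN definition and field names follow the kernel, but the 4K PAGE_SHIFT, the simplified struct, and the helper itself are illustrative assumptions; in-kernel this would be a WARN_ON_ONCE() rather than assert().

```c
#include <assert.h>

/* Assumed 4K pages (x86 default); ppc64 configurations differ. */
#define PAGE_SHIFT 12
#define PHYS_PFN(x) ((unsigned long)((x) >> PAGE_SHIFT))

/* Simplified stand-in for the kernel struct. */
struct vmem_altmap {
	unsigned long base_pfn;
};

/* Hypothetical check for the invariant: with subsection granularity,
 * the resource start and the altmap base should never diverge. */
static int altmap_base_matches(unsigned long long res_start,
			       const struct vmem_altmap *altmap)
{
	return PHYS_PFN(res_start) == altmap->base_pfn;
}
```

If the invariant always holds, the two pfn_first formulas in the patch are equivalent and ->base_pfn becomes redundant bookkeeping, which is the cleanup direction Dan is pointing at.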
Aneesh Kumar K.V Dec. 3, 2019, 10:12 a.m. UTC | #6
On 12/3/19 6:20 AM, Dan Williams wrote:
> On Sat, Nov 30, 2019 at 3:13 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>>
>> On Wed, 25 Sep 2019 09:21:02 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
>>
>>> Andrew Morton <akpm@linux-foundation.org> writes:
>>>
>>>> On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
>>>>
>>>>> vmem_altmap_offset() adjust the section aligned base_pfn offset.
>>>>> So we need to make sure we account for the same when computing base_pfn.
>>>>>
>>>>> ie, for altmap_valid case, our pfn_first should be:
>>>>>
>>>>> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
>>>>
>>>> What are the user-visible runtime effects of this change?
>>>
>>> This was found by code inspection. If the pmem region is not correctly
>>> section aligned we can skip pfns while iterating device pfn using
>>>        for_each_device_pfn(pfn, pgmap)
>>>
>>>
>>> I still would want Dan to ack the change though.
>>>
>>
>> Dan?
>>
>>
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
>> Subject: mm/pgmap: use correct alignment when looking at first pfn from a region
>>
>> vmem_altmap_offset() adjusts the section aligned base_pfn offset.  So we
>> need to make sure we account for the same when computing base_pfn.
>>
>> ie, for altmap_valid case, our pfn_first should be:
>>
>> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
>>
>> This was found by code inspection. If the pmem region is not correctly
>> section aligned we can skip pfns while iterating device pfn using
>>
>>          for_each_device_pfn(pfn, pgmap)
>>
>> [akpm@linux-foundation.org: coding style fixes]
>> Link: http://lkml.kernel.org/r/20190917153129.12905-1-aneesh.kumar@linux.ibm.com
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> Cc: Ralph Campbell <rcampbell@nvidia.com>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> ---
>>
>>   mm/memremap.c |   12 ++++++++++--
>>   1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> --- a/mm/memremap.c~mm-pgmap-use-correct-alignment-when-looking-at-first-pfn-from-a-region
>> +++ a/mm/memremap.c
>> @@ -55,8 +55,16 @@ static void pgmap_array_delete(struct re
>>
>>   static unsigned long pfn_first(struct dev_pagemap *pgmap)
>>   {
>> -       return PHYS_PFN(pgmap->res.start) +
>> -               vmem_altmap_offset(pgmap_altmap(pgmap));
>> +       const struct resource *res = &pgmap->res;
>> +       struct vmem_altmap *altmap = pgmap_altmap(pgmap);
>> +       unsigned long pfn;
>> +
>> +       if (altmap)
>> +               pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
>> +       else
>> +               pfn = PHYS_PFN(res->start);
> 
> This would only be a problem if res->start is not subsection aligned.

The kernel is not enforcing this, right? i.e., if I create multiple
namespaces as below

ndctl create-namespace -s 16908288 --align 64K

I can get a base_pfn different from PHYS_PFN(res->start).

Yes, that results in another error with the current upstream:

[   17.491097] memory add fail, invalid altmap



> Is that bug triggering in your case, or is this just inspection. Now
> that the subsections can be assumed as the minimum mapping granularity
> I'd rather see a cleanup  I'd rather cleanup the implementation to
> eliminate altmap->base_pfn or at least assert that
> PHYS_PFN(res->start) and altmap->base_pfn are always identical.
> 
> Otherwise ->base_pfn is supposed to be just a convenient way to recall
> the bounds of the memory hotplug operation deeper in the vmemmap
> setup.
> 

Is the right fix to ensure that res->start is always subsection
aligned? If so, we may need the patch series
https://patchwork.kernel.org/project/linux-nvdimm/list/?series=209373

and enforce that it is a multiple of the subsection size?

-aneesh
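The enforcement Aneesh proposes could be sketched as a simple alignment check. The 2 MiB subsection size (SUBSECTION_SHIFT of 21) assumed here matches x86_64; the helper is hypothetical, not an existing kernel API. Note that the 16908288-byte namespace size from the ndctl example above fails this check:

```c
#include <assert.h>

/* Assumed 2 MiB subsections (x86_64); other architectures differ. */
#define SUBSECTION_SHIFT 21
#define SUBSECTION_SIZE  (1ULL << SUBSECTION_SHIFT)

/* Hypothetical validation helper: accept a region only if both its
 * start and its size are multiples of the subsection size. */
static int subsection_aligned(unsigned long long start,
			      unsigned long long size)
{
	return !(start & (SUBSECTION_SIZE - 1)) &&
	       !(size & (SUBSECTION_SIZE - 1));
}
```

Rejecting unaligned regions up front would make PHYS_PFN(res->start) and altmap->base_pfn coincide by construction, addressing Dan's concern without needing the two-branch pfn_first().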

Patch

diff --git a/mm/memremap.c b/mm/memremap.c
index ed70c4e8e52a..233908d7df75 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -54,8 +54,16 @@  static void pgmap_array_delete(struct resource *res)
 
 static unsigned long pfn_first(struct dev_pagemap *pgmap)
 {
-	return PHYS_PFN(pgmap->res.start) +
-		vmem_altmap_offset(pgmap_altmap(pgmap));
+	const struct resource *res = &pgmap->res;
+	struct vmem_altmap *altmap = pgmap_altmap(pgmap);
+	unsigned long pfn;
+
+	if (altmap) {
+		pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+	} else
+		pfn = PHYS_PFN(res->start);
+
+	return pfn;
 }
 
 static unsigned long pfn_end(struct dev_pagemap *pgmap)