
[v5,2/4] mm/sparsemem: Defer the ms->section_mem_map clearing

Message ID 20180627013116.12411-3-bhe@redhat.com (mailing list archive)
State New, archived

Commit Message

Baoquan He June 27, 2018, 1:31 a.m. UTC
In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, the system
allocates one contiguous memory chunk for the mem maps of each node and
populates the relevant page tables to map the memory sections one by one.
If populating a certain mem section fails, a warning is printed and its
->section_mem_map is cleared to cancel its marking as present. As a
result, the number of mem sections marked as present can shrink while
sparse_init() is running.

Here, if populating a section's page tables fails, defer the clearing of
its ->section_mem_map until the last for_each_present_section_nr() loop.
This is in preparation for later optimizing the mem map allocation.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse-vmemmap.c |  1 -
 mm/sparse.c         | 12 ++++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)
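
Clearing a section's ->section_mem_map is what cancels its "present" marking:
present_section() simply tests the SECTION_MARKED_PRESENT bit stored in that
field (as defined in include/linux/mmzone.h at the time of this patch):

	static inline int present_section(struct mem_section *section)
	{
		return (section && (section->section_mem_map & SECTION_MARKED_PRESENT));
	}

For orientation, the last for_each_present_section_nr() loop of sparse_init()
ends up looking roughly like this with the hunks below applied. This is a
sketch reconstructed from the mm/sparse.c hunk, not verbatim kernel code; the
declarations of pnum, usemap, map, usemap_map and map_map, and the rest of
sparse_init(), are elided:

	for_each_present_section_nr(0, pnum) {
		struct mem_section *ms;
		ms = __nr_to_section(pnum);

		usemap = usemap_map[pnum];
		if (!usemap) {
			/* usemap allocation failed earlier: un-mark the section */
			ms->section_mem_map = 0;
			continue;
		}

	#ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
		map = map_map[pnum];
	#else
		map = sparse_early_mem_map_alloc(pnum);
	#endif
		if (!map) {
			/* mem map population failed: un-mark the section here, too */
			ms->section_mem_map = 0;
			continue;
		}

		sparse_init_one_section(__nr_to_section(pnum), pnum, map, usemap);
	}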

Comments

Oscar Salvador June 27, 2018, 9:54 a.m. UTC | #1
On Wed, Jun 27, 2018 at 09:31:14AM +0800, Baoquan He wrote:
> In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, the system
> allocates one contiguous memory chunk for the mem maps of each node and
> populates the relevant page tables to map the memory sections one by one.
> If populating a certain mem section fails, a warning is printed and its
> ->section_mem_map is cleared to cancel its marking as present. As a
> result, the number of mem sections marked as present can shrink while
> sparse_init() is running.
> 
> Here, if populating a section's page tables fails, defer the clearing of
> its ->section_mem_map until the last for_each_present_section_nr() loop.
> This is in preparation for later optimizing the mem map allocation.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/sparse-vmemmap.c |  1 -
>  mm/sparse.c         | 12 ++++++++----
>  2 files changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index bd0276d5f66b..640e68f8324b 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -303,7 +303,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
>  		ms = __nr_to_section(pnum);
>  		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
>  		       __func__);
> -		ms->section_mem_map = 0;

Since we are deferring the clearing of section_mem_map, I guess we do not need

struct mem_section *ms;
ms = __nr_to_section(pnum);

anymore, right?

>  	}
>  
>  	if (vmemmap_buf_start) {
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 6314303130b0..71ad53da2cd1 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -451,7 +451,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
>  		ms = __nr_to_section(pnum);
>  		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
>  		       __func__);
> -		ms->section_mem_map = 0;

The same goes here.
Baoquan He June 27, 2018, 10:59 p.m. UTC | #2
On 06/27/18 at 11:54am, Oscar Salvador wrote:
> On Wed, Jun 27, 2018 at 09:31:14AM +0800, Baoquan He wrote:
> > In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, the system
> > allocates one contiguous memory chunk for the mem maps of each node and
> > populates the relevant page tables to map the memory sections one by one.
> > If populating a certain mem section fails, a warning is printed and its
> > ->section_mem_map is cleared to cancel its marking as present. As a
> > result, the number of mem sections marked as present can shrink while
> > sparse_init() is running.
> > 
> > Here, if populating a section's page tables fails, defer the clearing of
> > its ->section_mem_map until the last for_each_present_section_nr() loop.
> > This is in preparation for later optimizing the mem map allocation.
> > 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  mm/sparse-vmemmap.c |  1 -
> >  mm/sparse.c         | 12 ++++++++----
> >  2 files changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index bd0276d5f66b..640e68f8324b 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -303,7 +303,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
> >  		ms = __nr_to_section(pnum);
> >  		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
> >  		       __func__);
> > -		ms->section_mem_map = 0;
> 
> Since we are deferring the clearing of section_mem_map, I guess we do not need
> 
> struct mem_section *ms;
> ms = __nr_to_section(pnum);
> 
> anymore, right?

Right, good catch, thanks.

I will post a new round to fix this.

Pavel Tatashin June 28, 2018, 3:11 a.m. UTC | #3
Once you remove the ms mentioned by Oscar:
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>

Patch

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index bd0276d5f66b..640e68f8324b 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -303,7 +303,6 @@  void __init sparse_mem_maps_populate_node(struct page **map_map,
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
 		       __func__);
-		ms->section_mem_map = 0;
 	}
 
 	if (vmemmap_buf_start) {
diff --git a/mm/sparse.c b/mm/sparse.c
index 6314303130b0..71ad53da2cd1 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -451,7 +451,6 @@  void __init sparse_mem_maps_populate_node(struct page **map_map,
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
 		       __func__);
-		ms->section_mem_map = 0;
 	}
 }
 #endif /* !CONFIG_SPARSEMEM_VMEMMAP */
@@ -479,7 +478,6 @@  static struct page __init *sparse_early_mem_map_alloc(unsigned long pnum)
 
 	pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
 	       __func__);
-	ms->section_mem_map = 0;
 	return NULL;
 }
 #endif
@@ -583,17 +581,23 @@  void __init sparse_init(void)
 #endif
 
 	for_each_present_section_nr(0, pnum) {
+		struct mem_section *ms;
+		ms = __nr_to_section(pnum);
 		usemap = usemap_map[pnum];
-		if (!usemap)
+		if (!usemap) {
+			ms->section_mem_map = 0;
 			continue;
+		}
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
 		map = map_map[pnum];
 #else
 		map = sparse_early_mem_map_alloc(pnum);
 #endif
-		if (!map)
+		if (!map) {
+			ms->section_mem_map = 0;
 			continue;
+		}
 
 		sparse_init_one_section(__nr_to_section(pnum), pnum, map,
 								usemap);
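
With the clearing deferred, the ms local in the populate-failure path of
sparse_mem_maps_populate_node() that Oscar flagged has no user left, so the
promised follow-up would presumably reduce that path to the warning alone.
Below is a sketch of the mm/sparse-vmemmap.c fallback loop with that cleanup;
the loop header and the sparse_mem_map_populate() call are paraphrased from
the surrounding code, which the hunks above do not show:

	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
		if (!present_section_nr(pnum))
			continue;

		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
		if (map_map[pnum])
			continue;

		/*
		 * Only warn here; the section is un-marked as present later,
		 * in the last for_each_present_section_nr() loop of
		 * sparse_init(), so no mem_section lookup is needed anymore.
		 */
		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
		       __func__);
	}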