[v2,2/4] mm/sparse: Optimize sparse_add_one_section()

Message ID 20190326090227.3059-3-bhe@redhat.com (mailing list archive)
State New, archived
Series Clean up comments and codes in sparse_add_one_section()

Commit Message

Baoquan He March 26, 2019, 9:02 a.m. UTC
Reorder the allocation of usemap and memmap since usemap allocation
is much simpler and cheaper. Otherwise the hard work of making
memmap ready is done first, only to be rolled back because the
usemap allocation failed.

Also check earlier whether the section is already present, and don't
bother allocating usemap and memmap if it is.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
v1->v2:
  Do section existence checking earlier to further optimize code.

 mm/sparse.c | 29 +++++++++++------------------
 1 file changed, 11 insertions(+), 18 deletions(-)
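
As a side note on the pattern the patch moves to: allocating the cheap
resource first means a failure never has to unwind the expensive one.
A minimal generic sketch in userspace C, not the kernel code itself;
the sizes are placeholders:

#include <stdlib.h>

struct section_stub {
        void *usemap;
        void *memmap;
};

/* Cheap allocation first: if it fails, there is nothing to roll back.
 * Only the expensive allocation's failure path has cleanup to do.
 */
static int add_section_stub(struct section_stub *s)
{
        s->usemap = malloc(32);                 /* cheap: tens of bytes */
        if (!s->usemap)
                return -1;

        s->memmap = malloc(2UL << 20);          /* expensive: ~2MB */
        if (!s->memmap) {
                free(s->usemap);                /* only the cheap piece to undo */
                return -1;
        }
        return 0;
}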

Comments

Mike Rapoport March 26, 2019, 9:23 a.m. UTC | #1
On Tue, Mar 26, 2019 at 05:02:25PM +0800, Baoquan He wrote:
> Reorder the allocation of usemap and memmap since usemap allocation
> is much simpler and cheaper. Otherwise the hard work of making
> memmap ready is done first, only to be rolled back because the
> usemap allocation failed.
> 
> Also check earlier whether the section is already present, and don't
> bother allocating usemap and memmap if it is.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>

Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>

> ---
> v1->v2:
>   Do section existence checking earlier to further optimize code.
> 
>  mm/sparse.c | 29 +++++++++++------------------
>  1 file changed, 11 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index b2111f996aa6..f4f34d69131e 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -714,20 +714,18 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  	ret = sparse_index_init(section_nr, nid);
>  	if (ret < 0 && ret != -EEXIST)
>  		return ret;
> -	ret = 0;
> -	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> -	if (!memmap)
> -		return -ENOMEM;
> -	usemap = __kmalloc_section_usemap();
> -	if (!usemap) {
> -		__kfree_section_memmap(memmap, altmap);
> -		return -ENOMEM;
> -	}
>  
>  	ms = __pfn_to_section(start_pfn);
> -	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> -		ret = -EEXIST;
> -		goto out;
> +	if (ms->section_mem_map & SECTION_MARKED_PRESENT)
> +		return -EEXIST;
> +
> +	usemap = __kmalloc_section_usemap();
> +	if (!usemap)
> +		return -ENOMEM;
> +	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> +	if (!memmap) {
> +		kfree(usemap);
> +		return  -ENOMEM;
>  	}
>  
>  	/*
> @@ -739,12 +737,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  	section_mark_present(ms);
>  	sparse_init_one_section(ms, section_nr, memmap, usemap);
>  
> -out:
> -	if (ret < 0) {
> -		kfree(usemap);
> -		__kfree_section_memmap(memmap, altmap);
> -	}
> -	return ret;
> +	return 0;
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTREMOVE
> -- 
> 2.17.2
>
Michal Hocko March 26, 2019, 9:29 a.m. UTC | #2
On Tue 26-03-19 17:02:25, Baoquan He wrote:
> Reorder the allocation of usemap and memmap since usemap allocation
> is much simpler and cheaper. Otherwise the hard work of making
> memmap ready is done first, only to be rolled back because the
> usemap allocation failed.

Is this really worth it? I can see that !VMEMMAP is doing memmap size
allocation which would be 2MB aka costly allocation but we do not do
__GFP_RETRY_MAYFAIL so the allocator backs off early.
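
For scale, a back-of-the-envelope calculation of the memmap size per
section, assuming x86-64 defaults (128MB sections, 4kB pages, 64-byte
struct page); the constants are illustrative assumptions, not read from
any particular config:

#include <stdio.h>

int main(void)
{
        unsigned long pages_per_section = (128UL << 20) / 4096; /* 32768 */
        unsigned long memmap_bytes = pages_per_section * 64;    /* 2MB */

        /* 2MB is an order-9 allocation; anything above
         * PAGE_ALLOC_COSTLY_ORDER (3) counts as "costly", and without
         * __GFP_RETRY_MAYFAIL the page allocator backs off early
         * instead of retrying hard.
         */
        printf("memmap per section: %lu MB\n", memmap_bytes >> 20);
        return 0;
}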

> Also check earlier whether the section is already present, and don't
> bother allocating usemap and memmap if it is.

Moving the check up makes some sense.

> Signed-off-by: Baoquan He <bhe@redhat.com>

The patch is not incorrect but I am wondering whether it is really worth
it for the current code base. Is it fixing anything real, or is it mere
code shuffling to please the eye?

> ---
> v1->v2:
>   Do section existence checking earlier to further optimize code.
> 
>  mm/sparse.c | 29 +++++++++++------------------
>  1 file changed, 11 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index b2111f996aa6..f4f34d69131e 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -714,20 +714,18 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  	ret = sparse_index_init(section_nr, nid);
>  	if (ret < 0 && ret != -EEXIST)
>  		return ret;
> -	ret = 0;
> -	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> -	if (!memmap)
> -		return -ENOMEM;
> -	usemap = __kmalloc_section_usemap();
> -	if (!usemap) {
> -		__kfree_section_memmap(memmap, altmap);
> -		return -ENOMEM;
> -	}
>  
>  	ms = __pfn_to_section(start_pfn);
> -	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> -		ret = -EEXIST;
> -		goto out;
> +	if (ms->section_mem_map & SECTION_MARKED_PRESENT)
> +		return -EEXIST;
> +
> +	usemap = __kmalloc_section_usemap();
> +	if (!usemap)
> +		return -ENOMEM;
> +	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> +	if (!memmap) {
> +		kfree(usemap);
> +		return  -ENOMEM;
>  	}
>  
>  	/*
> @@ -739,12 +737,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  	section_mark_present(ms);
>  	sparse_init_one_section(ms, section_nr, memmap, usemap);
>  
> -out:
> -	if (ret < 0) {
> -		kfree(usemap);
> -		__kfree_section_memmap(memmap, altmap);
> -	}
> -	return ret;
> +	return 0;
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTREMOVE
> -- 
> 2.17.2
>
Baoquan He March 26, 2019, 10:08 a.m. UTC | #3
On 03/26/19 at 10:29am, Michal Hocko wrote:
> On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > Reorder the allocation of usemap and memmap since usemap allocation
> > is much simpler and cheaper. Otherwise the hard work of making
> > memmap ready is done first, only to be rolled back because the
> > usemap allocation failed.
> 
> Is this really worth it? I can see that !VMEMMAP is doing memmap size
> allocation which would be 2MB aka costly allocation but we do not do
> __GFP_RETRY_MAYFAIL so the allocator backs off early.

In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
the usemap, at 32 bytes, is smaller, so it doesn't matter much which
one comes first. However, this benefits a little in the VMEMMAP case.

And this makes the code a little cleaner, e.g. the error handling at
the end goes away.
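
For reference, where the "32" above comes from, assuming x86-64
defaults (128MB section, pageblock_order 9, 4 pageblock flag bits);
the constants are illustrative assumptions:

#include <stdio.h>

int main(void)
{
        unsigned long pages_per_section = (128UL << 20) / 4096; /* 32768 */
        unsigned long pageblocks = pages_per_section / 512;     /* 2^9 pages each */
        unsigned long usemap_bits = pageblocks * 4;             /* flag bits */

        printf("usemap per section: %lu bytes\n", usemap_bits / 8); /* 32 */
        return 0;
}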

> 
> > Also check earlier whether the section is already present, and don't
> > bother allocating usemap and memmap if it is.
> 
> Moving the check up makes some sense.
> 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> 
> The patch is not incorrect but I am wondering whether it is really worth
> it for the current code base. Is it fixing anything real or it is a mere
> code shuffling to please an eye?

It's not a fix, just a tiny code refactoring inside
sparse_add_one_section(); it seems it doesn't make anything worse, if I
got the !VMEMMAP case right, though I'm not quite sure. I am fine with
dropping it if it's not worth it. I could be missing something in
different cases.

Thanks
Baoquan

> 
> > ---
> > v1->v2:
> >   Do section existence checking earlier to further optimize code.
> > 
> >  mm/sparse.c | 29 +++++++++++------------------
> >  1 file changed, 11 insertions(+), 18 deletions(-)
> > 
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index b2111f996aa6..f4f34d69131e 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -714,20 +714,18 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> >  	ret = sparse_index_init(section_nr, nid);
> >  	if (ret < 0 && ret != -EEXIST)
> >  		return ret;
> > -	ret = 0;
> > -	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> > -	if (!memmap)
> > -		return -ENOMEM;
> > -	usemap = __kmalloc_section_usemap();
> > -	if (!usemap) {
> > -		__kfree_section_memmap(memmap, altmap);
> > -		return -ENOMEM;
> > -	}
> >  
> >  	ms = __pfn_to_section(start_pfn);
> > -	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> > -		ret = -EEXIST;
> > -		goto out;
> > +	if (ms->section_mem_map & SECTION_MARKED_PRESENT)
> > +		return -EEXIST;
> > +
> > +	usemap = __kmalloc_section_usemap();
> > +	if (!usemap)
> > +		return -ENOMEM;
> > +	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> > +	if (!memmap) {
> > +		kfree(usemap);
> > +		return  -ENOMEM;
> >  	}
> >  
> >  	/*
> > @@ -739,12 +737,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> >  	section_mark_present(ms);
> >  	sparse_init_one_section(ms, section_nr, memmap, usemap);
> >  
> > -out:
> > -	if (ret < 0) {
> > -		kfree(usemap);
> > -		__kfree_section_memmap(memmap, altmap);
> > -	}
> > -	return ret;
> > +	return 0;
> >  }
> >  
> >  #ifdef CONFIG_MEMORY_HOTREMOVE
> > -- 
> > 2.17.2
> > 
> 
> -- 
> Michal Hocko
> SUSE Labs
Michal Hocko March 26, 2019, 10:17 a.m. UTC | #4
On Tue 26-03-19 18:08:17, Baoquan He wrote:
> On 03/26/19 at 10:29am, Michal Hocko wrote:
> > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > Reorder the allocation of usemap and memmap since usemap allocation
> > > is much simpler and cheaper. Otherwise the hard work of making
> > > memmap ready is done first, only to be rolled back because the
> > > usemap allocation failed.
> > 
> > Is this really worth it? I can see that !VMEMMAP is doing memmap size
> > allocation which would be 2MB aka costly allocation but we do not do
> > __GFP_RETRY_MAYFAIL so the allocator backs off early.
> 
> In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
> the usemap, at 32 bytes, is smaller, so it doesn't matter much which
> one comes first. However, this benefits a little in the VMEMMAP case.

How does it help there? The failure should be even less probable
there because we simply fall back to small 4kB pages, and those
essentially never fail.
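
Roughly, the fallback looks like this: with CONFIG_SPARSEMEM_VMEMMAP the
memmap is mapped through the vmemmap area, trying a 2MB PMD mapping per
chunk and dropping to 4kB base pages when the large allocation fails.
A simplified sketch, not the real x86 code; the helpers are hypothetical
stand-ins:

/* hypothetical stand-ins for the real primitives */
extern void *try_alloc_2mb_block(int node);
extern void map_pmd(unsigned long addr, void *block);
extern int populate_basepages(unsigned long start, unsigned long end, int node);

#define SKETCH_PMD_SIZE (2UL << 20)

static int vmemmap_populate_sketch(unsigned long start, unsigned long end,
                                   int node)
{
        unsigned long addr;

        for (addr = start; addr < end; addr += SKETCH_PMD_SIZE) {
                void *block = try_alloc_2mb_block(node); /* 2MB chunk */

                if (block) {
                        map_pmd(addr, block);   /* one PMD maps the chunk */
                        continue;
                }
                /* huge allocation failed: fall back to 4kB base pages,
                 * which essentially never fail */
                if (populate_basepages(addr, addr + SKETCH_PMD_SIZE, node))
                        return -1;
        }
        return 0;
}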

> And this makes the code a little cleaner, e.g. the error handling at
> the end goes away.
> 
> > 
> > > Also check earlier whether the section is already present, and don't
> > > bother allocating usemap and memmap if it is.
> > 
> > Moving the check up makes some sense.
> > 
> > > Signed-off-by: Baoquan He <bhe@redhat.com>
> > 
> > The patch is not incorrect but I am wondering whether it is really worth
> > it for the current code base. Is it fixing anything real, or is it mere
> > code shuffling to please the eye?
> 
> It's not a fix, just a tiny code refactoring inside
> sparse_add_one_section(); it seems it doesn't make anything worse, if I
> got the !VMEMMAP case right, though I'm not quite sure. I am fine with
> dropping it if it's not worth it. I could be missing something in
> different cases.

Well, I usually prefer not to do micro-optimizations in code that
really begs for much larger surgery. There are other people working on
the code, and patches like these might get in the way and cause
conflicts without a very good justification.
Baoquan He March 26, 2019, 1:45 p.m. UTC | #5
On 03/26/19 at 11:17am, Michal Hocko wrote:
> On Tue 26-03-19 18:08:17, Baoquan He wrote:
> > On 03/26/19 at 10:29am, Michal Hocko wrote:
> > > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > > Reorder the allocation of usemap and memmap since usemap allocation
> > > > is much simpler and cheaper. Otherwise the hard work of making
> > > > memmap ready is done first, only to be rolled back because the
> > > > usemap allocation failed.
> > > 
> > > Is this really worth it? I can see that !VMEMMAP is doing memmap size
> > > allocation which would be 2MB aka costly allocation but we do not do
> > > __GFP_RETRY_MAYFAIL so the allocator backs off early.
> > 
> > In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
> > the usemap, at 32 bytes, is smaller, so it doesn't matter much which
> > one comes first. However, this benefits a little in the VMEMMAP case.
> 
> How does it help there? The failure should be even less probable
> there because we simply fall back to small 4kB pages, and those
> essentially never fail.

OK, I am fine with dropping it. Or should we only move the section
existence check earlier, to avoid unnecessary usemap/memmap allocation?


From 7594b86ebf5d6fcc8146eca8fc5625f1961a15b1 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Tue, 26 Mar 2019 18:48:39 +0800
Subject: [PATCH] mm/sparse: Check section's existence earlier in
 sparse_add_one_section()

No need to allocate usemap and memmap if the section is already
present. This also cleans up the error handling on failure.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 363f9d31b511..f564b531e0f7 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -714,7 +714,13 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 	ret = sparse_index_init(section_nr, nid);
 	if (ret < 0 && ret != -EEXIST)
 		return ret;
-	ret = 0;
+
+	ms = __pfn_to_section(start_pfn);
+	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
+		ret = -EEXIST;
+		goto out;
+	}
+
 	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
 	if (!memmap)
 		return -ENOMEM;
@@ -724,12 +730,6 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 		return -ENOMEM;
 	}
 
-	ms = __pfn_to_section(start_pfn);
-	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
-		ret = -EEXIST;
-		goto out;
-	}
-
 	/*
 	 * Poison uninitialized struct pages in order to catch invalid flags
 	 * combinations.
@@ -739,12 +739,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 	section_mark_present(ms);
 	sparse_init_one_section(ms, section_nr, memmap, usemap);
 
-out:
-	if (ret < 0) {
-		kfree(usemap);
-		__kfree_section_memmap(memmap, altmap);
-	}
-	return ret;
+	return 0;
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
Mike Rapoport March 26, 2019, 1:57 p.m. UTC | #6
On Tue, Mar 26, 2019 at 09:45:22PM +0800, Baoquan He wrote:
> On 03/26/19 at 11:17am, Michal Hocko wrote:
> > On Tue 26-03-19 18:08:17, Baoquan He wrote:
> > > On 03/26/19 at 10:29am, Michal Hocko wrote:
> > > > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > > > Reorder the allocation of usemap and memmap since usemap allocation
> > > > > is much simpler and cheaper. Otherwise the hard work of making
> > > > > memmap ready is done first, only to be rolled back because the
> > > > > usemap allocation failed.
> > > > 
> > > > Is this really worth it? I can see that !VMEMMAP is doing memmap size
> > > > allocation which would be 2MB aka costly allocation but we do not do
> > > > __GFP_RETRY_MAYFAIL so the allocator backs off early.
> > > 
> > > In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
> > > the usemap, at 32 bytes, is smaller, so it doesn't matter much which
> > > one comes first. However, this benefits a little in the VMEMMAP case.
> > 
> > How does it help there? The failure should be even less probable
> > there because we simply fall back to small 4kB pages, and those
> > essentially never fail.
> 
> OK, I am fine with dropping it. Or should we only move the section
> existence check earlier, to avoid unnecessary usemap/memmap allocation?
> 
> 
> From 7594b86ebf5d6fcc8146eca8fc5625f1961a15b1 Mon Sep 17 00:00:00 2001
> From: Baoquan He <bhe@redhat.com>
> Date: Tue, 26 Mar 2019 18:48:39 +0800
> Subject: [PATCH] mm/sparse: Check section's existence earlier in
>  sparse_add_one_section()
> 
> No need to allocate usemap and memmap if the section is already
> present. This also cleans up the error handling on failure.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/sparse.c | 21 ++++++++-------------
>  1 file changed, 8 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 363f9d31b511..f564b531e0f7 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -714,7 +714,13 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  	ret = sparse_index_init(section_nr, nid);
>  	if (ret < 0 && ret != -EEXIST)
>  		return ret;
> -	ret = 0;
> +
> +	ms = __pfn_to_section(start_pfn);
> +	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> +		ret = -EEXIST;
> +		goto out;

		return -EEXIST; ?

> +	}
> +
>  	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
>  	if (!memmap)
>  		return -ENOMEM;
> @@ -724,12 +730,6 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  		return -ENOMEM;
>  	}
>  
> -	ms = __pfn_to_section(start_pfn);
> -	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> -		ret = -EEXIST;
> -		goto out;
> -	}
> -
>  	/*
>  	 * Poison uninitialized struct pages in order to catch invalid flags
>  	 * combinations.
> @@ -739,12 +739,7 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
>  	section_mark_present(ms);
>  	sparse_init_one_section(ms, section_nr, memmap, usemap);
>  
> -out:
> -	if (ret < 0) {
> -		kfree(usemap);
> -		__kfree_section_memmap(memmap, altmap);
> -	}
> -	return ret;
> +	return 0;
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTREMOVE
> -- 
> 2.17.2
>
Michal Hocko March 26, 2019, 2:03 p.m. UTC | #7
On Tue 26-03-19 21:45:22, Baoquan He wrote:
> On 03/26/19 at 11:17am, Michal Hocko wrote:
> > On Tue 26-03-19 18:08:17, Baoquan He wrote:
> > > On 03/26/19 at 10:29am, Michal Hocko wrote:
> > > > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > > > Reorder the allocation of usemap and memmap since usemap allocation
> > > > > is much simpler and cheaper. Otherwise the hard work of making
> > > > > memmap ready is done first, only to be rolled back because the
> > > > > usemap allocation failed.
> > > > 
> > > > Is this really worth it? I can see that !VMEMMAP is doing memmap size
> > > > allocation which would be 2MB aka costly allocation but we do not do
> > > > __GFP_RETRY_MAYFAIL so the allocator backs off early.
> > > 
> > > In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
> > > the usemap, at 32 bytes, is smaller, so it doesn't matter much which
> > > one comes first. However, this benefits a little in the VMEMMAP case.
> > 
> > How does it help there? The failure should be even less probable
> > there because we simply fall back to small 4kB pages, and those
> > essentially never fail.
> 
> OK, I am fine with dropping it. Or should we only move the section
> existence check earlier, to avoid unnecessary usemap/memmap allocation?

Do you have any data on how often that happens? It should basically
never happen, right?
Baoquan He March 26, 2019, 2:18 p.m. UTC | #8
On 03/26/19 at 03:03pm, Michal Hocko wrote:
> On Tue 26-03-19 21:45:22, Baoquan He wrote:
> > On 03/26/19 at 11:17am, Michal Hocko wrote:
> > > On Tue 26-03-19 18:08:17, Baoquan He wrote:
> > > > On 03/26/19 at 10:29am, Michal Hocko wrote:
> > > > > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > > > > Reorder the allocation of usemap and memmap since usemap allocation
> > > > > > is much simpler and cheaper. Otherwise the hard work of making
> > > > > > memmap ready is done first, only to be rolled back because the
> > > > > > usemap allocation failed.
> > > > > 
> > > > > Is this really worth it? I can see that !VMEMMAP is doing memmap size
> > > > > allocation which would be 2MB aka costly allocation but we do not do
> > > > > __GFP_RETRY_MAYFAIL so the allocator backs off early.
> > > > 
> > > > In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
> > > > the usemap, at 32 bytes, is smaller, so it doesn't matter much which
> > > > one comes first. However, this benefits a little in the VMEMMAP case.
> > > 
> > > How does it help there? The failure should be even less probable
> > > there because we simply fall back to small 4kB pages, and those
> > > essentially never fail.
> > 
> > OK, I am fine with dropping it. Or should we only move the section
> > existence check earlier, to avoid unnecessary usemap/memmap allocation?
> 
> Do you have any data on how often that happens? It should basically
> never happen, right?

Oh, you are thinking about it from that angle. Yes, it rarely happens,
so always allocating first can increase efficiency. Then I will just
drop it.
Michal Hocko March 26, 2019, 2:31 p.m. UTC | #9
On Tue 26-03-19 22:18:03, Baoquan He wrote:
> On 03/26/19 at 03:03pm, Michal Hocko wrote:
> > On Tue 26-03-19 21:45:22, Baoquan He wrote:
> > > On 03/26/19 at 11:17am, Michal Hocko wrote:
> > > > On Tue 26-03-19 18:08:17, Baoquan He wrote:
> > > > > On 03/26/19 at 10:29am, Michal Hocko wrote:
> > > > > > On Tue 26-03-19 17:02:25, Baoquan He wrote:
> > > > > > > Reorder the allocation of usemap and memmap since usemap allocation
> > > > > > > is much simpler and cheaper. Otherwise the hard work of making
> > > > > > > memmap ready is done first, only to be rolled back because the
> > > > > > > usemap allocation failed.
> > > > > > 
> > > > > > Is this really worth it? I can see that !VMEMMAP is doing memmap size
> > > > > > allocation which would be 2MB aka costly allocation but we do not do
> > > > > > __GFP_RETRY_MAYFAIL so the allocator backs off early.
> > > > > 
> > > > > In the !VMEMMAP case, it truly is a simple, direct allocation. Surely
> > > > > the usemap, at 32 bytes, is smaller, so it doesn't matter much which
> > > > > one comes first. However, this benefits a little in the VMEMMAP case.
> > > > 
> > > > How does it help there? The failure should be even less probable
> > > > there because we simply fall back to small 4kB pages, and those
> > > > essentially never fail.
> > > 
> > > OK, I am fine with dropping it. Or should we only move the section
> > > existence check earlier, to avoid unnecessary usemap/memmap allocation?
> > 
> > Do you have any data on how often that happens? It should basically
> > never happen, right?
> 
> Oh, you are thinking about it from that angle. Yes, it rarely happens,
> so always allocating first can increase efficiency. Then I will just
> drop it.

OK, let me try once more. Doing a check early is something that makes
sense in general. Another question is whether the check is needed at
all. So rather than fiddling with its placement, I would look at whether
it ever actually fails. I suspect it doesn't, because memory hotplug is
currently enforced to be section aligned. There are people who would
like to allow sub-section or section-unaligned hotplug, and then this
check would be much more relevant, but without any solid justification
such a patch is not really helpful, because it might cause code conflicts
with other work or obscure git blame tracking by an additional hop.

In short, if you want to optimize something then make sure you describe
what you are optimizing and how it helps.
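
For reference, the enforcement mentioned above is the block/section
alignment check done at the add_memory() entry point; a simplified
sketch modeled on check_hotplug_memory_range() as I read that era's
code, so treat the details as assumptions rather than the exact
implementation:

static int check_hotplug_range_sketch(u64 start, u64 size)
{
        /* memory_block_size_bytes() is a multiple of the section size */
        unsigned long block_sz = memory_block_size_bytes();

        /* hotplugged ranges must be block (hence section) aligned, so
         * sparse_add_one_section() should never see an already-present
         * section in practice and the -EEXIST path is essentially dead */
        if (!size || !IS_ALIGNED(start, block_sz) || !IS_ALIGNED(size, block_sz))
                return -EINVAL;
        return 0;
}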
Baoquan He March 26, 2019, 10:57 p.m. UTC | #10
Hi Michal,

On 03/26/19 at 03:31pm, Michal Hocko wrote:
> > > > OK, I am fine with dropping it. Or should we only move the section
> > > > existence check earlier, to avoid unnecessary usemap/memmap allocation?
> > > 
> > > Do you have any data on how often that happens? It should basically
> > > never happen, right?
> > 
> > Oh, you are thinking about it from that angle. Yes, it rarely happens,
> > so always allocating first can increase efficiency. Then I will just
> > drop it.
> 
> OK, let me try once more. Doing a check early is something that makes
> sense in general. Another question is whether the check is needed at
> all. So rather than fiddling with its placement, I would look at whether
> it ever actually fails. I suspect it doesn't, because memory hotplug is
> currently enforced to be section aligned. There are people who would
> like to allow sub-section or section-unaligned hotplug, and then this
> check would be much more relevant, but without any solid justification
> such a patch is not really helpful, because it might cause code conflicts
> with other work or obscure git blame tracking by an additional hop.
> 
> In short, if you want to optimize something then make sure you describe
> what you are optimizing and how it helps.

I must have been dizzy last night when thinking about and replying to
these mails; I thought about it for a while and guessed at a point you
may have meant. Now that I reread the mail and rethink it, that reply
may cause misunderstanding. It doesn't actually make sense as an
optimization, it's just moving a code block around. I now agree with you
that it doesn't optimize anything and may get in the way of other
people's code changes. Sorry about that.

Thanks
Baoquan

Patch

diff --git a/mm/sparse.c b/mm/sparse.c
index b2111f996aa6..f4f34d69131e 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -714,20 +714,18 @@  int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 	ret = sparse_index_init(section_nr, nid);
 	if (ret < 0 && ret != -EEXIST)
 		return ret;
-	ret = 0;
-	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
-	if (!memmap)
-		return -ENOMEM;
-	usemap = __kmalloc_section_usemap();
-	if (!usemap) {
-		__kfree_section_memmap(memmap, altmap);
-		return -ENOMEM;
-	}
 
 	ms = __pfn_to_section(start_pfn);
-	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
-		ret = -EEXIST;
-		goto out;
+	if (ms->section_mem_map & SECTION_MARKED_PRESENT)
+		return -EEXIST;
+
+	usemap = __kmalloc_section_usemap();
+	if (!usemap)
+		return -ENOMEM;
+	memmap = kmalloc_section_memmap(section_nr, nid, altmap);
+	if (!memmap) {
+		kfree(usemap);
+		return  -ENOMEM;
 	}
 
 	/*
@@ -739,12 +737,7 @@  int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
 	section_mark_present(ms);
 	sparse_init_one_section(ms, section_nr, memmap, usemap);
 
-out:
-	if (ret < 0) {
-		kfree(usemap);
-		__kfree_section_memmap(memmap, altmap);
-	}
-	return ret;
+	return 0;
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE