
[v6,2/2] mm/page_alloc: remove software prefetching in __free_pages_core

Message ID 1541484194-1493-2-git-send-email-arunks@codeaurora.org (mailing list archive)
State New, archived
Series [v6,1/2] memory_hotplug: Free pages as higher order

Commit Message

Arun KS Nov. 6, 2018, 6:03 a.m. UTC
The prefetchw() hints not only increase the code footprint, they
actually make things slower rather than faster. Remove them, as
contemporary hardware doesn't need any hint.

Suggested-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Arun KS <arunks@codeaurora.org>
---
 mm/page_alloc.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
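
For readers who want the context: prefetchw() asks the CPU to pull a cache
line in ahead of an expected write. The generic fallback in the kernel is
roughly the one-liner below (architectures such as x86 can override it with
a dedicated instruction); treat this as an illustrative sketch rather than a
quote of the exact header:

/*
 * Roughly the generic fallback from include/linux/prefetch.h: hint that
 * the cache line holding *x is about to be written, so the CPU may fetch
 * it in exclusive state ahead of time.  Modern hardware prefetchers
 * usually make this redundant for a linear walk like the one in
 * __free_pages_core().
 */
#ifndef ARCH_HAS_PREFETCHW
#define prefetchw(x) __builtin_prefetch(x, 1)
#endif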

Comments

Michal Hocko Nov. 6, 2018, 2:08 p.m. UTC | #1
On Tue 06-11-18 11:33:14, Arun KS wrote:
> The prefetchw() hints not only increase the code footprint, they
> actually make things slower rather than faster. Remove them, as
> contemporary hardware doesn't need any hint.

I guess I have already asked for that. When you argue about performance,
always add some numbers.

I do agree we want to get rid of the prefetching, because it is just too
much of a micro-optimization without any reasonable story behind it.
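
One way to produce such numbers would be a userspace micro-benchmark along
the lines of the purely hypothetical sketch below (the struct layout, array
size and the local prefetchw() macro are made up for illustration; real
figures would have to come from timing memory hotplug on the target
machine):

/*
 * Hypothetical userspace micro-benchmark: walk a large array of
 * cache-line-sized records, once with the prefetchw() pattern being
 * removed and once with the plain loop, and print wall-clock times.
 * Build with: gcc -O2 bench.c -o bench
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define prefetchw(x) __builtin_prefetch((x), 1)

struct fake_page {
	unsigned long flags;
	int count;
	char pad[52];		/* pad to 64 bytes, one cache line */
};

#define NPAGES	(256UL * 1024 * 1024 / sizeof(struct fake_page))

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* The loop shape this patch removes. */
static void init_prefetch(struct fake_page *p, unsigned long n)
{
	unsigned long i;

	prefetchw(p);
	for (i = 0; i < n - 1; i++, p++) {
		prefetchw(p + 1);
		p->flags = 0;
		p->count = 0;
	}
	p->flags = 0;
	p->count = 0;
}

/* The loop shape this patch keeps. */
static void init_plain(struct fake_page *p, unsigned long n)
{
	unsigned long i;

	for (i = 0; i < n; i++, p++) {
		p->flags = 0;
		p->count = 0;
	}
}

int main(void)
{
	struct fake_page *pages = malloc(NPAGES * sizeof(*pages));
	double t;

	if (!pages)
		return 1;

	/* Fault the memory in first so the timings compare the loops,
	 * not page faults. */
	memset(pages, 0, NPAGES * sizeof(*pages));

	t = now();
	init_prefetch(pages, NPAGES);
	printf("with prefetchw: %.3f s\n", now() - t);

	t = now();
	init_plain(pages, NPAGES);
	printf("plain loop:     %.3f s\n", now() - t);

	/* Read something back so the stores cannot be optimized away. */
	printf("check: %lu\n", pages[NPAGES - 1].flags);

	free(pages);
	return 0;
}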

> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Arun KS <arunks@codeaurora.org>
> ---
>  mm/page_alloc.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7cf503f..a1b9a6a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1270,14 +1270,10 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	struct page *p = page;
>  	unsigned int loop;
>  
> -	prefetchw(p);
> -	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
> -		prefetchw(p + 1);
> +	for (loop = 0; loop < nr_pages ; loop++, p++) {
>  		__ClearPageReserved(p);
>  		set_page_count(p, 0);
>  	}
> -	__ClearPageReserved(p);
> -	set_page_count(p, 0);
>  
>  	page_zone(page)->managed_pages += nr_pages;
>  	set_page_refcounted(page);
> -- 
> 1.9.1

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7cf503f..a1b9a6a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1270,14 +1270,10 @@ void __free_pages_core(struct page *page, unsigned int order)
 	struct page *p = page;
 	unsigned int loop;
 
-	prefetchw(p);
-	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
-		prefetchw(p + 1);
+	for (loop = 0; loop < nr_pages ; loop++, p++) {
 		__ClearPageReserved(p);
 		set_page_count(p, 0);
 	}
-	__ClearPageReserved(p);
-	set_page_count(p, 0);
 
 	page_zone(page)->managed_pages += nr_pages;
 	set_page_refcounted(page);
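
For reference, this is roughly how the function reads once the hunk above is
applied. The lines outside the hunk (the nr_pages computation at the top and
the trailing __free_pages() call) are reconstructed from the surrounding
kernel source and are not part of this diff:

void __free_pages_core(struct page *page, unsigned int order)
{
	/* nr_pages and the final __free_pages() call sit outside the
	 * hunk above and are reconstructed here for readability. */
	unsigned int nr_pages = 1 << order;
	struct page *p = page;
	unsigned int loop;

	/* One pass over the block; no software prefetch hints. */
	for (loop = 0; loop < nr_pages; loop++, p++) {
		__ClearPageReserved(p);
		set_page_count(p, 0);
	}

	page_zone(page)->managed_pages += nr_pages;
	set_page_refcounted(page);
	__free_pages(page, order);
}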