
[PATCH for v5.8 3/3] mm/memory: fix IO cost for anonymous page

Message ID: 1592288204-27734-4-git-send-email-iamjoonsoo.kim@lge.com
State: New, archived
Series: fix for "mm: balance LRU lists based on relative thrashing" patchset

Commit Message

Joonsoo Kim June 16, 2020, 6:16 a.m. UTC
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

With a synchronous IO swap device, swap-in is handled directly in the
fault code. Since the IO cost is not noted there, LRU balancing could
be wrongly biased for such devices. Fix it by counting the IO cost in
the fault code.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/memory.c | 8 ++++++++
 1 file changed, 8 insertions(+)
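
For context, the hunk lands in the synchronous swap-in fast path of
do_swap_page(), which bypasses the swap cache; in v5.8 the ordinary
swap-cache path already notes the IO cost in __read_swap_cache_async()
(mm/swap_state.c), so only this branch was missing the accounting.
A condensed sketch of the v5.8 code, with memcg charging and error
handling elided:

	/* Condensed from v5.8 mm/memory.c, do_swap_page(); error paths elided. */
	page = lookup_swap_cache(entry, vma, vmf->address);
	swapcache = page;

	if (!page) {
		struct swap_info_struct *si = swp_swap_info(entry);

		if (si->flags & SWP_SYNCHRONOUS_IO &&
		    __swap_count(entry) == 1) {
			/* Skip the swap cache: read synchronously into a new page */
			page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
					      vmf->address);
			if (page) {
				__SetPageLocked(page);
				__SetPageSwapBacked(page);
				set_page_private(page, entry.val);
				/* ... memcg charge, error handling ... */

				/* The patch notes the IO cost here, before
				 * the page goes onto the LRU. */
				lru_cache_add(page);
				swap_readpage(page, true); /* synchronous read */
			}
		} else {
			/*
			 * Swap-cache path: __read_swap_cache_async() already
			 * calls lru_note_cost_page() for pages it reads in.
			 */
			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
						vmf);
			swapcache = page;
		}
	}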

Comments

Johannes Weiner June 16, 2020, 2:50 p.m. UTC | #1
On Tue, Jun 16, 2020 at 03:16:44PM +0900, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> With a synchronous IO swap device, swap-in is handled directly in the
> fault code. Since the IO cost is not noted there, LRU balancing could
> be wrongly biased for such devices. Fix it by counting the IO cost in
> the fault code.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Vlastimil Babka June 29, 2020, 10:27 a.m. UTC | #2
On 6/16/20 8:16 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> With a synchronous IO swap device, swap-in is handled directly in the
> fault code. Since the IO cost is not noted there, LRU balancing could
> be wrongly biased for such devices. Fix it by counting the IO cost in
> the fault code.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/memory.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index bc6a471..3359057 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3143,6 +3143,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  				if (err)
>  					goto out_page;
>  
> +				/*
> +				 * XXX: Move to lru_cache_add() when it
> +				 * supports new vs putback
> +				 */
> +				spin_lock_irq(&page_pgdat(page)->lru_lock);
> +				lru_note_cost_page(page);
> +				spin_unlock_irq(&page_pgdat(page)->lru_lock);
> +
>  				lru_cache_add(page);
>  				swap_readpage(page, true);
>  			}
>

Patch

diff --git a/mm/memory.c b/mm/memory.c
index bc6a471..3359057 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3143,6 +3143,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 				if (err)
 					goto out_page;
 
+				/*
+				 * XXX: Move to lru_cache_add() when it
+				 * supports new vs putback
+				 */
+				spin_lock_irq(&page_pgdat(page)->lru_lock);
+				lru_note_cost_page(page);
+				spin_unlock_irq(&page_pgdat(page)->lru_lock);
+
 				lru_cache_add(page);
 				swap_readpage(page, true);
 			}
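
For reference, lru_note_cost_page() charges the page's IO cost to its
lruvec so that reclaim can balance the anon and file LRU lists; in v5.8
the anon_cost/file_cost counters are updated under the pgdat's lru_lock,
which is why the hunk brackets the call with spin_lock_irq() and
spin_unlock_irq(). A condensed sketch based on v5.8's mm/swap.c, with
comments abridged:

	/* Condensed from v5.8 mm/swap.c; comments abridged. */
	void lru_note_cost(struct lruvec *lruvec, bool file,
			   unsigned int nr_pages)
	{
		do {
			unsigned long lrusize;

			/* Record the cost event on the right side */
			if (file)
				lruvec->file_cost += nr_pages;
			else
				lruvec->anon_cost += nr_pages;

			/*
			 * Decay old events so that recent cost dominates
			 * the anon/file balance.
			 */
			lrusize = lruvec_page_state(lruvec, NR_INACTIVE_ANON) +
				  lruvec_page_state(lruvec, NR_ACTIVE_ANON) +
				  lruvec_page_state(lruvec, NR_INACTIVE_FILE) +
				  lruvec_page_state(lruvec, NR_ACTIVE_FILE);

			if (lruvec->file_cost + lruvec->anon_cost > lrusize / 4) {
				lruvec->file_cost /= 2;
				lruvec->anon_cost /= 2;
			}
		} while ((lruvec = parent_lruvec(lruvec)));
	}

	void lru_note_cost_page(struct page *page)
	{
		lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)),
			      page_is_file_lru(page), hpage_nr_pages(page));
	}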