[PATCHv2,3/8] khugepaged: Drain all LRU caches before scanning pages

Message ID 20200403112928.19742-4-kirill.shutemov@linux.intel.com
State New
Series
  • thp/khugepaged improvements and CoW semantics

Commit Message

Kirill A. Shutemov April 3, 2020, 11:29 a.m. UTC
Having a page in an LRU add cache offsets its page refcount and gives
a false negative on PageLRU(), which reduces the collapse success rate.

Drain all LRU add caches before scanning. This happens relatively
rarely and should not disturb the system too much.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/khugepaged.c | 2 ++
 1 file changed, 2 insertions(+)
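The mechanism above can be illustrated with a toy model. This is a minimal, self-contained sketch, not kernel code: `lru_cache_add()`, `lru_add_drain_all()`, and the `struct page` here are simplified stand-ins for the real per-CPU pagevec machinery, showing only why a batched-but-undrained page looks `!PageLRU` to a scanner until the cache is flushed.

```c
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 8

/* Toy page: only tracks whether it has been published to the LRU. */
struct page { bool on_lru; };

static struct page pages[NPAGES];

/* Toy stand-in for a per-CPU LRU add cache (pagevec): pages batched
 * here are destined for the LRU but not yet visible on it. */
static struct page *add_cache[NPAGES];
static size_t cache_len;

/* Queue a page for LRU addition without publishing it yet. */
static void lru_cache_add(struct page *p)
{
	add_cache[cache_len++] = p;
}

/* Stand-in for lru_add_drain_all(): flush pending pages to the LRU. */
static void lru_add_drain_all(void)
{
	for (size_t i = 0; i < cache_len; i++)
		add_cache[i]->on_lru = true;
	cache_len = 0;
}

/* khugepaged-style scan: count pages that pass the PageLRU() check
 * and so would be considered for collapse. */
static int scan_lru_pages(void)
{
	int ok = 0;

	for (size_t i = 0; i < NPAGES; i++)
		if (pages[i].on_lru)
			ok++;
	return ok;
}
```

Without the drain, every recently faulted page still sitting in the cache fails the scan's LRU check; draining first makes the whole batch visible, which is why the patch calls `lru_add_drain_all()` once per `khugepaged_do_scan()` pass rather than per page.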

Comments

Yang Shi April 6, 2020, 6:15 p.m. UTC | #1
On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
> Having a page in LRU add cache offsets page refcount and gives
> false-negative on PageLRU(). It reduces collapse success rate.
>
> Drain all LRU add caches before scanning. It happens relatively
> rare and should not disturb the system too much.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Acked-by: Yang Shi <yang.shi@linux.alibaba.com>

> ---
>   mm/khugepaged.c | 2 ++
>   1 file changed, 2 insertions(+)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 14d7afc90786..fdc10ffde1ca 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2065,6 +2065,8 @@ static void khugepaged_do_scan(void)
>   
>   	barrier(); /* write khugepaged_pages_to_scan to local stack */
>   
> +	lru_add_drain_all();
> +
>   	while (progress < pages) {
>   		if (!khugepaged_prealloc_page(&hpage, &wait))
>   			break;

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 14d7afc90786..fdc10ffde1ca 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2065,6 +2065,8 @@ static void khugepaged_do_scan(void)
 
 	barrier(); /* write khugepaged_pages_to_scan to local stack */
 
+	lru_add_drain_all();
+
 	while (progress < pages) {
 		if (!khugepaged_prealloc_page(&hpage, &wait))
 			break;