
[PATCHv4,3/8] khugepaged: Drain all LRU caches before scanning pages

Message ID 20200416160026.16538-4-kirill.shutemov@linux.intel.com (mailing list archive)
State New, archived
Series: thp/khugepaged improvements and CoW semantics

Commit Message

kirill.shutemov@linux.intel.com April 16, 2020, 4 p.m. UTC
Having a page in an LRU add cache offsets its refcount and gives a
false negative on PageLRU(), which reduces the collapse success rate.

Drain all LRU add caches before scanning. This happens relatively
rarely and should not disturb the system much.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-and-Tested-by: Zi Yan <ziy@nvidia.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/khugepaged.c | 2 ++
 1 file changed, 2 insertions(+)

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6e69dc9a9fb1..0b75b9821f44 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2078,6 +2078,8 @@  static void khugepaged_do_scan(void)
 
 	barrier(); /* write khugepaged_pages_to_scan to local stack */
 
+	lru_add_drain_all();
+
 	while (progress < pages) {
 		if (!khugepaged_prealloc_page(&hpage, &wait))
 			break;
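
For context, a rough sketch of why the drain helps: a page still queued
in a per-CPU LRU add cache holds an extra reference and does not yet
have PageLRU() set, so khugepaged's isolation checks reject it. The
snippet below is illustrative only, loosely modeled on the checks in
__collapse_huge_page_isolate() around this kernel version; the helper
name is hypothetical and it is not part of the patch.

/* Illustrative, hypothetical helper -- not part of this patch. */
static int check_base_page(struct page *page)
{
	/*
	 * A page sitting in a per-CPU LRU add cache holds an extra
	 * reference, so an expected-refcount test like this one fails
	 * spuriously...
	 */
	if (page_count(page) != 1 + PageSwapCache(page))
		return SCAN_PAGE_COUNT;

	/*
	 * ...and PageLRU() is not set until the cache is flushed, so
	 * this test fails too. Calling lru_add_drain_all() before the
	 * scan clears both false negatives.
	 */
	if (!PageLRU(page))
		return SCAN_PAGE_LRU;

	return SCAN_SUCCEED;
}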