
mm/vmscan: remove unnecessary lru lock unlock/lock pair

Message ID TYCP286MB110836FA8100F06A50DA8362C5DF9@TYCP286MB1108.JPNP286.PROD.OUTLOOK.COM (mailing list archive)
State New
Series mm/vmscan: remove unnecessary lru lock unlock/lock pair

Commit Message

Xie Yongmei Sept. 19, 2021, 3:24 p.m. UTC
move_pages_to_lru() does redundant locking: when the list contains multiple mlocked pages
or compound pages, the current implementation drops and retakes the lru lock for each such
page to handle these exceptional cases.

Signed-off-by: Yongmei Xie <yongmeixie@hotmail.com>
---
 mm/vmscan.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)
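
As background for the discussion below, the idiom the patch applies is "collect the rare
cases on a local list under the lock, then handle them in one batch after dropping the lock
once". The following is a minimal, self-contained userspace sketch of that idiom only; the
struct item type, handle_rare() and the pthread mutex are purely illustrative and have no
counterpart in mm/vmscan.c.

#include <pthread.h>
#include <stdio.h>

/* Illustrative item with an intrusive next pointer, loosely playing the
 * role of a page on an LRU list. */
struct item {
	int rare;		/* stands in for "unevictable or compound" */
	struct item *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for work that must happen without the lock held, such as
 * putback_lru_page() or destroy_compound_page() in the real code. */
static void handle_rare(struct item *it)
{
	printf("handled rare item %p outside the lock\n", (void *)it);
}

/* Batched variant: defer rare items under the lock, process them after. */
static void process(struct item *list)
{
	struct item *rare_batch = NULL;

	pthread_mutex_lock(&lock);
	while (list) {
		struct item *it = list;

		list = it->next;
		if (it->rare) {
			/* Defer instead of cycling the lock for each item. */
			it->next = rare_batch;
			rare_batch = it;
			continue;
		}
		/* ... common-case work done under the lock ... */
	}
	pthread_mutex_unlock(&lock);

	/* One lock cycle in total, no matter how many rare items there were. */
	while (rare_batch) {
		struct item *it = rare_batch;

		rare_batch = it->next;
		handle_rare(it);
	}
}

int main(void)
{
	struct item items[3] = { { .rare = 0 }, { .rare = 1 }, { .rare = 0 } };

	items[0].next = &items[1];
	items[1].next = &items[2];
	items[2].next = NULL;
	process(&items[0]);
	return 0;
}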

Comments

Johannes Weiner Sept. 21, 2021, 2:30 p.m. UTC | #1
Hello Yongmei,

On Sun, Sep 19, 2021 at 11:24:30PM +0800, Yongmei Xie wrote:
> move_pages_to_lru() does redundant locking: when the list contains multiple mlocked pages
> or compound pages, the current implementation drops and retakes the lru lock for each such
> page to handle these exceptional cases.
> 
> Signed-off-by: Yongmei Xie <yongmeixie@hotmail.com>
> ---
>  mm/vmscan.c | 34 ++++++++++++++++++++++++++--------
>  1 file changed, 26 insertions(+), 8 deletions(-)

Is the lock cycling creating actual problems for you?

The locks aren't batched because we expect those situations to be
rare: mlock or truncate/munmap racing with reclaim isolation. And in
fact, you're adding an unconditional lock cycle and more branches to
the hot path to deal with it. It's more code overall.
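
To make that tradeoff concrete, here is a purely illustrative userspace comparison (none of
it is kernel code): it counts lock round-trips for the two strategies, treating the lru lock
the caller of move_pages_to_lru() already holds as the first acquisition. When no rare page
shows up, which is the expected common case, per-page cycling never touches the lock again,
while the batched form always pays a second unlock/relock plus the extra per-page branch.

#include <stdbool.h>
#include <stdio.h>

/* Count lock acquisitions rather than taking a real lock; only the shape
 * of the control flow matters for the comparison. */

/* Original scheme: the lock is cycled once per rare page actually seen. */
static int cycling_lock_acquisitions(const bool *rare, int n)
{
	int acquisitions = 1;		/* the lock held by the caller */

	for (int i = 0; i < n; i++) {
		if (rare[i])
			acquisitions++;	/* unlock, handle page, re-lock */
	}
	return acquisitions;
}

/* Batched scheme: rare pages are deferred, so the count no longer depends
 * on them, but every call pays the final unlock/re-lock. */
static int batched_lock_acquisitions(const bool *rare, int n)
{
	(void)rare;
	(void)n;
	return 2;
}

int main(void)
{
	const bool common_case[4] = { false, false, false, false };
	const bool rare_case[4]   = { false, true,  false, true  };

	printf("no rare pages:  cycling=%d batched=%d\n",
	       cycling_lock_acquisitions(common_case, 4),
	       batched_lock_acquisitions(common_case, 4));
	printf("two rare pages: cycling=%d batched=%d\n",
	       cycling_lock_acquisitions(rare_case, 4),
	       batched_lock_acquisitions(rare_case, 4));
	return 0;
}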

Without data, the patch isn't very compelling. If you do have data, it
would be good to include it in the changelog.

Thanks!
Xie Yongmei Sept. 22, 2021, 2:34 p.m. UTC | #2
I see your point, you're right.
Thanks, that's very helpful for understanding the tradeoff.

Best Regards,
Yongmei

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 74296c2d1fed..c19c6c572ba3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2156,6 +2156,8 @@  static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 {
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
+	LIST_HEAD(pages_to_putback);
+	LIST_HEAD(compound_pages_to_free);
 	struct page *page;
 
 	while (!list_empty(list)) {
@@ -2163,9 +2165,7 @@  static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			spin_unlock_irq(&lruvec->lru_lock);
-			putback_lru_page(page);
-			spin_lock_irq(&lruvec->lru_lock);
+			list_move(&page->lru, &pages_to_putback);
 			continue;
 		}
 
@@ -2185,11 +2185,9 @@  static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 		if (unlikely(put_page_testzero(page))) {
 			__clear_page_lru_flags(page);
 
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				destroy_compound_page(page);
-				spin_lock_irq(&lruvec->lru_lock);
-			} else
+			if (unlikely(PageCompound(page)))
+				list_move(&page->lru, &compound_pages_to_free);
+			else
 				list_add(&page->lru, &pages_to_free);
 
 			continue;
@@ -2207,6 +2205,26 @@  static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
+	/*
+	 * Put back unevictable pages as a batch to avoid an unlock/lock pair per page
+	 */
+	spin_unlock_irq(&lruvec->lru_lock);
+	while (!list_empty(&pages_to_putback)) {
+		page = lru_to_page(&pages_to_putback);
+		list_del(&page->lru);
+		putback_lru_page(page);
+	}
+
+	/*
+	 * Free compound pages as a batch to avoid an unnecessary unlock/lock pair per page
+	 */
+	while (!list_empty(&compound_pages_to_free)) {
+		page = lru_to_page(&compound_pages_to_free);
+		list_del(&page->lru);
+		destroy_compound_page(page);
+	}
+	spin_lock_irq(&lruvec->lru_lock);
+
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */