
[00/11] mm: lru related cleanups

Message ID 20201207220949.830352-1-yuzhao@google.com

Message

Yu Zhao Dec. 7, 2020, 10:09 p.m. UTC
The cleanups are intended to reduce the verbosity in lru list
operations and make them less error-prone. A typical example
would be how the patches change __activate_page():

 static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
 	if (!PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru);
+		del_page_from_lru_list(page, lruvec);
 		SetPageActive(page);
-		lru += LRU_ACTIVE;
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
 
There are a few more places like __activate_page(), and they are
unnecessarily repetitive in figuring out which list a page should be
added to or deleted from. With that duplicated code removed, they are
easier to read, IMO.
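As a rough sketch of where the addition side ends up (patches 3 and 5
do the corresponding changes; the details may differ slightly from the
actual patches), the helper derives the list from the page flags
internally, so callers no longer compute it:

 static __always_inline void add_page_to_lru_list(struct page *page,
						   struct lruvec *lruvec)
 {
 	/* derived from the page flags, instead of passed in by the caller */
 	enum lru_list lru = page_lru(page);

 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }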

Patches 1 to 5 basically cover the above. Patches 6 and 7 make the code
more robust by improving bug reporting. Patches 8, 9 and 10 take care of
some dangling helpers left in header files. Patch 11 isn't strictly a
cleanup, but it still seems relevant enough to include here.
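To illustrate the bug-reporting side (patches 6 and 7), the replacement
for page_off_lru() could look roughly like the sketch below, trapping
inconsistent lru flags with VM_BUG_ON_PAGE():

 static __always_inline void __clear_page_lru_flags(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageLRU(page), page);

 	__ClearPageLRU(page);

 	/* this shouldn't happen, so leave the flags to bad_page() */
 	if (PageActive(page) && PageUnevictable(page))
 		return;

 	__ClearPageActive(page);
 	__ClearPageUnevictable(page);
 }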

Yu Zhao (11):
  mm: use add_page_to_lru_list()
  mm: shuffle lru list addition and deletion functions
  mm: don't pass "enum lru_list" to lru list addition functions
  mm: don't pass "enum lru_list" to trace_mm_lru_insertion()
  mm: don't pass "enum lru_list" to del_page_from_lru_list()
  mm: add __clear_page_lru_flags() to replace page_off_lru()
  mm: VM_BUG_ON lru page flags
  mm: fold page_lru_base_type() into its sole caller
  mm: fold __update_lru_size() into its sole caller
  mm: make lruvec_lru_size() static
  mm: enlarge the "int nr_pages" parameter of update_lru_size()

 include/linux/memcontrol.h     |  10 +--
 include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
 include/linux/mmzone.h         |   2 -
 include/linux/vmstat.h         |   6 +-
 include/trace/events/pagemap.h |  11 ++--
 mm/compaction.c                |   2 +-
 mm/memcontrol.c                |  10 +--
 mm/mlock.c                     |   3 +-
 mm/swap.c                      |  50 ++++++--------
 mm/vmscan.c                    |  21 ++----
 10 files changed, 91 insertions(+), 139 deletions(-)

Comments

Alex Shi Dec. 10, 2020, 9:28 a.m. UTC | #1
Hi Yu,

btw, after this patchset it becomes possible to cacheline-align each of
the lru lists. Did you try that to see how performance changes?

Thanks
Alex
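
For reference, per-list cacheline alignment along the lines Alex
suggests might look roughly like the following (a hypothetical sketch,
not part of this series):

 /* wrap each list head so adjacent lru lists don't share a cacheline */
 struct lru_list_head {
 	struct list_head head;
 } ____cacheline_aligned_in_smp;

 struct lruvec {
 	struct lru_list_head lists[NR_LRU_LISTS];
 	/* ... remaining fields unchanged ... */
 };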

Yu Zhao Dec. 16, 2020, 12:48 a.m. UTC | #2
On Thu, Dec 10, 2020 at 05:28:08PM +0800, Alex Shi wrote:
> Hi Yu,
> 
> btw, after this patchset it becomes possible to cacheline-align each of
> the lru lists. Did you try that to see how performance changes?

I ran a Chrome-based performance benchmark without memcg and with one
memcg many times. The good news is I didn't see any regressions for
these basic cases. But I can't say how much improvement there would
be with hundreds of memcgs.