| Message ID | 1584423717-3440-5-git-send-email-iamjoonsoo.kim@lge.com |
|---|---|
| State | New, archived |
| Series | workingset protection/detection on the anonymous LRU list |
On Tue, Mar 17, 2020 at 02:41:52PM +0900, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Swapcache doesn't handle the value since there is no case using the value.
> In the following patch, workingset detection for anonymous page will be
> implemented and it stores the value into the swapcache. So, we need to
> handle it and this patch implement handling.

"value" is too generic, it's not quite clear what this refers to
here. "Exceptional entries" or "shadow entries" would be better.

> @@ -155,24 +163,33 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
>  * This must be called only on pages that have
>  * been verified to be in the swap cache.
>  */
> -void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
> +void __delete_from_swap_cache(struct page *page,
> +			swp_entry_t entry, void *shadow)
> {
> 	struct address_space *address_space = swap_address_space(entry);
> 	int i, nr = hpage_nr_pages(page);
> 	pgoff_t idx = swp_offset(entry);
> 	XA_STATE(xas, &address_space->i_pages, idx);
>
> +	/* Do not apply workingset detection for the hugh page */
> +	if (nr > 1)
> +		shadow = NULL;

Hm, why is that? Should that be an XXX/TODO item? The comment should
explain the reason, not necessarily what the code is doing.

Also, s/hugh/huge/

The rest of the patch looks straight-forward to me.
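For readers following along: the "shadow entries" Johannes refers to are XArray value entries, small tagged integers stored in a slot in place of a page pointer and left behind at eviction to encode workingset age. A minimal sketch of how they are told apart from real pages, using the existing xa_is_value() helper; the function itself is hypothetical, not part of the patch:

```c
#include <linux/xarray.h>

/*
 * Hypothetical illustration, not from the patch: shadow (a.k.a.
 * exceptional/value) entries are created with xa_mk_value() and have
 * the low bit set, so they can never be mistaken for a struct page
 * pointer. xa_is_value() tests exactly that bit.
 */
static bool swap_slot_is_shadow(void *entry)
{
	/* NULL means the slot is simply empty */
	if (!entry)
		return false;
	/* value entry: workingset information left behind at eviction */
	if (xa_is_value(entry))
		return true;
	/* otherwise the slot holds a real struct page pointer */
	return false;
}
```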
On Thu, Mar 19, 2020 at 3:33 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Tue, Mar 17, 2020 at 02:41:52PM +0900, js1304@gmail.com wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >
> > Swapcache doesn't handle the value since there is no case using the value.
> > In the following patch, workingset detection for anonymous page will be
> > implemented and it stores the value into the swapcache. So, we need to
> > handle it and this patch implement handling.
>
> "value" is too generic, it's not quite clear what this refers to
> here. "Exceptional entries" or "shadow entries" would be better.

Okay. Will change it.

> > @@ -155,24 +163,33 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
> >  * This must be called only on pages that have
> >  * been verified to be in the swap cache.
> >  */
> > -void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
> > +void __delete_from_swap_cache(struct page *page,
> > +			swp_entry_t entry, void *shadow)
> > {
> > 	struct address_space *address_space = swap_address_space(entry);
> > 	int i, nr = hpage_nr_pages(page);
> > 	pgoff_t idx = swp_offset(entry);
> > 	XA_STATE(xas, &address_space->i_pages, idx);
> >
> > +	/* Do not apply workingset detection for the hugh page */
> > +	if (nr > 1)
> > +		shadow = NULL;
>
> Hm, why is that? Should that be an XXX/TODO item? The comment should
> explain the reason, not necessarily what the code is doing.

It was my TODO. I have now checked the code and found that there is no
blocker for huge page support, so I will remove this code and enable
workingset detection even for huge pages.

> Also, s/hugh/huge/

Okay.

> The rest of the patch looks straight-forward to me.

Thanks.
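For context on where a non-NULL shadow will eventually come from: a follow-up patch in the series has to compute one in the reclaim path before calling __delete_from_swap_cache(). A rough sketch of what the swap-cache branch of __remove_mapping() might look like at that point; the workingset_eviction() call and the reclaimed/mapping_exiting()/target_memcg details are assumptions borrowed from the existing file-cache branch, not code shown in this patch:

```c
/*
 * Sketch only, not from this series: the swap-cache branch of
 * __remove_mapping() once anonymous workingset detection stores
 * shadows. workingset_eviction() and the reclaimed/mapping_exiting()
 * guard mirror what the file-cache branch already does; variables
 * like reclaimed, flags and target_memcg come from the surrounding
 * function's scope.
 */
if (PageSwapCache(page)) {
	swp_entry_t swap = { .val = page_private(page) };
	void *shadow = NULL;

	mem_cgroup_swapout(page, swap);
	/* record workingset age unless the mapping is going away */
	if (reclaimed && !mapping_exiting(mapping))
		shadow = workingset_eviction(page, target_memcg);
	__delete_from_swap_cache(page, swap, shadow);
	xa_unlock_irqrestore(&mapping->i_pages, flags);
	put_swap_page(page, swap);
}
```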
```diff
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 954e13e..0df8b3f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -410,7 +410,8 @@ extern void show_swap_cache_info(void);
 extern int add_to_swap(struct page *page);
 extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
 extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
-extern void __delete_from_swap_cache(struct page *, swp_entry_t entry);
+extern void __delete_from_swap_cache(struct page *page,
+			swp_entry_t entry, void *shadow);
 extern void delete_from_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
@@ -571,7 +572,7 @@ static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
 }
 
 static inline void __delete_from_swap_cache(struct page *page,
-					swp_entry_t entry)
+					swp_entry_t entry, void *shadow)
 {
 }
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8e7ce9a..3fbbe45 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -117,6 +117,10 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
 	unsigned long i, nr = compound_nr(page);
+	unsigned long nrexceptional = 0;
+	void *old;
+
+	xas_set_update(&xas, workingset_update_node);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
@@ -132,10 +136,14 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 			goto unlock;
 		for (i = 0; i < nr; i++) {
 			VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+			old = xas_load(&xas);
+			if (xa_is_value(old))
+				nrexceptional++;
 			set_page_private(page + i, entry.val + i);
 			xas_store(&xas, page);
 			xas_next(&xas);
 		}
+		address_space->nrexceptional -= nrexceptional;
 		address_space->nrpages += nr;
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		ADD_CACHE_INFO(add_total, nr);
@@ -155,24 +163,33 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
  * This must be called only on pages that have
  * been verified to be in the swap cache.
  */
-void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
+void __delete_from_swap_cache(struct page *page,
+			swp_entry_t entry, void *shadow)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	int i, nr = hpage_nr_pages(page);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE(xas, &address_space->i_pages, idx);
 
+	/* Do not apply workingset detection for the hugh page */
+	if (nr > 1)
+		shadow = NULL;
+
+	xas_set_update(&xas, workingset_update_node);
+
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageSwapCache(page), page);
 	VM_BUG_ON_PAGE(PageWriteback(page), page);
 
 	for (i = 0; i < nr; i++) {
-		void *entry = xas_store(&xas, NULL);
+		void *entry = xas_store(&xas, shadow);
 		VM_BUG_ON_PAGE(entry != page, entry);
 		set_page_private(page + i, 0);
 		xas_next(&xas);
 	}
 	ClearPageSwapCache(page);
+	if (shadow)
+		address_space->nrexceptional += nr;
 	address_space->nrpages -= nr;
 	__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	ADD_CACHE_INFO(del_total, nr);
@@ -247,7 +264,7 @@ void delete_from_swap_cache(struct page *page)
 	struct address_space *address_space = swap_address_space(entry);
 
 	xa_lock_irq(&address_space->i_pages);
-	__delete_from_swap_cache(page, entry);
+	__delete_from_swap_cache(page, entry, NULL);
 	xa_unlock_irq(&address_space->i_pages);
 
 	put_swap_page(page, entry);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0493c25..9871861 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -909,7 +909,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	if (PageSwapCache(page)) {
 		swp_entry_t swap = { .val = page_private(page) };
 		mem_cgroup_swapout(page, swap);
-		__delete_from_swap_cache(page, swap);
+		__delete_from_swap_cache(page, swap, NULL);
 		xa_unlock_irqrestore(&mapping->i_pages, flags);
 		put_swap_page(page, swap);
 	} else {
```
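One consequence of the change above is worth a note: once shadows can live in the swap cache's XArray, any lookup has to be prepared to find a value entry rather than a page. Existing swap-cache lookups go through find_get_page(), which already filters value entries for the page cache; a hedged sketch of the check involved, with a hypothetical function name:

```c
#include <linux/xarray.h>

/*
 * Hypothetical sketch, not from the patch: a swap-cache lookup that
 * tolerates shadow entries. The essential point is that a shadow
 * must never escape to a caller as if it were a struct page.
 */
static struct page *swap_cache_lookup(struct address_space *mapping,
				      pgoff_t idx)
{
	void *entry = xa_load(&mapping->i_pages, idx);

	if (!entry || xa_is_value(entry))
		return NULL;	/* empty slot or shadow: no page here */
	return entry;
}
```

Relatedly, the xas_set_update(&xas, workingset_update_node) calls added in both hunks let the workingset code track XArray nodes that come to hold only shadow entries, so those nodes can be placed on the shadow-nodes list and reclaimed under memory pressure.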