| Message ID | 20200914130042.11442-3-willy@infradead.org |
|---|---|
| State | New, archived |
| Series | Overhaul multi-page lookups for THP |
On Mon 14-09-20 14:00:32, Matthew Wilcox (Oracle) wrote:
> The comment shows that the reason for using find_get_entries() is now
> stale; find_get_pages() will not return 0 if it hits a consecutive run
> of swap entries, and I don't believe it has since 2011. pagevec_lookup()
> is a simpler function to use than find_get_pages(), so use it instead.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good to me. BTW, I think I've already reviewed this... You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/shmem.c | 11 +----------
>  1 file changed, 1 insertion(+), 10 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 58bc9e326d0d..108931a6cc43 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -840,7 +840,6 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
>  void shmem_unlock_mapping(struct address_space *mapping)
>  {
>  	struct pagevec pvec;
> -	pgoff_t indices[PAGEVEC_SIZE];
>  	pgoff_t index = 0;
>
>  	pagevec_init(&pvec);
> @@ -848,16 +847,8 @@ void shmem_unlock_mapping(struct address_space *mapping)
>  	 * Minor point, but we might as well stop if someone else SHM_LOCKs it.
>  	 */
>  	while (!mapping_unevictable(mapping)) {
> -		/*
> -		 * Avoid pagevec_lookup(): find_get_pages() returns 0 as if it
> -		 * has finished, if it hits a row of PAGEVEC_SIZE swap entries.
> -		 */
> -		pvec.nr = find_get_entries(mapping, index,
> -					   PAGEVEC_SIZE, pvec.pages, indices);
> -		if (!pvec.nr)
> +		if (!pagevec_lookup(&pvec, mapping, &index))
>  			break;
> -		index = indices[pvec.nr - 1] + 1;
> -		pagevec_remove_exceptionals(&pvec);
>  		check_move_unevictable_pages(&pvec);
>  		pagevec_release(&pvec);
>  		cond_resched();
> --
> 2.28.0
On Tue, Sep 29, 2020 at 10:28:28AM +0200, Jan Kara wrote:
> On Mon 14-09-20 14:00:32, Matthew Wilcox (Oracle) wrote:
> > The comment shows that the reason for using find_get_entries() is now
> > stale; find_get_pages() will not return 0 if it hits a consecutive run
> > of swap entries, and I don't believe it has since 2011. pagevec_lookup()
> > is a simpler function to use than find_get_pages(), so use it instead.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>
> Looks good to me. BTW, I think I've already reviewed this... You can add:
>
> Reviewed-by: Jan Kara <jack@suse.cz>

So you did!  My apologies for missing that.
diff --git a/mm/shmem.c b/mm/shmem.c
index 58bc9e326d0d..108931a6cc43 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -840,7 +840,6 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
 void shmem_unlock_mapping(struct address_space *mapping)
 {
 	struct pagevec pvec;
-	pgoff_t indices[PAGEVEC_SIZE];
 	pgoff_t index = 0;

 	pagevec_init(&pvec);
@@ -848,16 +847,8 @@ void shmem_unlock_mapping(struct address_space *mapping)
 	 * Minor point, but we might as well stop if someone else SHM_LOCKs it.
 	 */
 	while (!mapping_unevictable(mapping)) {
-		/*
-		 * Avoid pagevec_lookup(): find_get_pages() returns 0 as if it
-		 * has finished, if it hits a row of PAGEVEC_SIZE swap entries.
-		 */
-		pvec.nr = find_get_entries(mapping, index,
-					   PAGEVEC_SIZE, pvec.pages, indices);
-		if (!pvec.nr)
+		if (!pagevec_lookup(&pvec, mapping, &index))
 			break;
-		index = indices[pvec.nr - 1] + 1;
-		pagevec_remove_exceptionals(&pvec);
 		check_move_unevictable_pages(&pvec);
 		pagevec_release(&pvec);
 		cond_resched();
The comment shows that the reason for using find_get_entries() is now
stale; find_get_pages() will not return 0 if it hits a consecutive run
of swap entries, and I don't believe it has since 2011. pagevec_lookup()
is a simpler function to use than find_get_pages(), so use it instead.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)