Message ID: 20211208042256.1923824-43-willy@infradead.org
State: New
Series: Folios for 5.17
Hi "Matthew,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on hnaz-mm/master]
[also build test WARNING on rostedt-trace/for-next linus/master v5.16-rc4]
[cannot apply to next-20211208]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
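For reference, a minimal invocation that embeds the base tree information in
the generated patches (branch names here are hypothetical, and '--base=auto'
requires an upstream tracking branch to be configured):

    # record the base commit so CI can apply the series to the right tree
    git format-patch --base=auto -o outgoing/ origin/master..HEAD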
url: https://github.com/0day-ci/linux/commits/Matthew-Wilcox-Oracle/Folios-for-5-17/20211208-122734
base: https://github.com/hnaz/linux-mm master
config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20211208/202112081952.NHF8MX2L-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/b883ee2b43293c901ea31f233d1596f255e0dcb9
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Matthew-Wilcox-Oracle/Folios-for-5-17/20211208-122734
git checkout b883ee2b43293c901ea31f233d1596f255e0dcb9
# save the config file to the linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=alpha SHELL=/bin/bash fs/
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
In file included from fs/f2fs/dir.c:13:
>> fs/f2fs/f2fs.h:4055:67: warning: 'struct pagevec' declared inside parameter list will not be visible outside of this definition or declaration
4055 | bool f2fs_all_cluster_page_loaded(struct compress_ctx *cc, struct pagevec *pvec,
| ^~~~~~~
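The warning is easy to reproduce outside the kernel. Here is a minimal
standalone sketch (not f2fs code) of why GCC complains: the first mention of
'struct pagevec' is inside a prototype's parameter list, so the declaration's
scope is limited to that one prototype and can never match a definition of
the struct elsewhere:

    #include <stdbool.h>

    /* struct pagevec; */   /* uncommenting this file-scope forward
                               declaration silences the warning */

    /* gcc -Wall -c demo.c:
     * warning: 'struct pagevec' declared inside parameter list will not
     * be visible outside of this definition or declaration
     */
    bool all_cluster_pages_loaded(struct pagevec *pvec, int nr);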
vim +4055 fs/f2fs/f2fs.h
4c8ff7095bef64 Chao Yu 2019-11-01 4034
4c8ff7095bef64 Chao Yu 2019-11-01 4035 /*
4c8ff7095bef64 Chao Yu 2019-11-01 4036 * compress.c
4c8ff7095bef64 Chao Yu 2019-11-01 4037 */
4c8ff7095bef64 Chao Yu 2019-11-01 4038 #ifdef CONFIG_F2FS_FS_COMPRESSION
4c8ff7095bef64 Chao Yu 2019-11-01 4039 bool f2fs_is_compressed_page(struct page *page);
4c8ff7095bef64 Chao Yu 2019-11-01 4040 struct page *f2fs_compress_control_page(struct page *page);
4c8ff7095bef64 Chao Yu 2019-11-01 4041 int f2fs_prepare_compress_overwrite(struct inode *inode,
4c8ff7095bef64 Chao Yu 2019-11-01 4042 struct page **pagep, pgoff_t index, void **fsdata);
4c8ff7095bef64 Chao Yu 2019-11-01 4043 bool f2fs_compress_write_end(struct inode *inode, void *fsdata,
4c8ff7095bef64 Chao Yu 2019-11-01 4044 pgoff_t index, unsigned copied);
3265d3db1f1639 Chao Yu 2020-03-18 4045 int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock);
4c8ff7095bef64 Chao Yu 2019-11-01 4046 void f2fs_compress_write_end_io(struct bio *bio, struct page *page);
4c8ff7095bef64 Chao Yu 2019-11-01 4047 bool f2fs_is_compress_backend_ready(struct inode *inode);
5e6bbde9598230 Chao Yu 2020-04-08 4048 int f2fs_init_compress_mempool(void);
5e6bbde9598230 Chao Yu 2020-04-08 4049 void f2fs_destroy_compress_mempool(void);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4050 void f2fs_decompress_cluster(struct decompress_io_ctx *dic);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4051 void f2fs_end_read_compressed_page(struct page *page, bool failed,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4052 block_t blkaddr);
4c8ff7095bef64 Chao Yu 2019-11-01 4053 bool f2fs_cluster_is_empty(struct compress_ctx *cc);
4c8ff7095bef64 Chao Yu 2019-11-01 4054 bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index);
2ce5eeadf5d8d9 Andrew Morton 2021-10-28 @4055 bool f2fs_all_cluster_page_loaded(struct compress_ctx *cc, struct pagevec *pvec,
2ce5eeadf5d8d9 Andrew Morton 2021-10-28 4056 int index, int nr_pages);
bbe1da7e34ac5a Chao Yu 2021-08-06 4057 bool f2fs_sanity_check_cluster(struct dnode_of_data *dn);
4c8ff7095bef64 Chao Yu 2019-11-01 4058 void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page);
4c8ff7095bef64 Chao Yu 2019-11-01 4059 int f2fs_write_multi_pages(struct compress_ctx *cc,
4c8ff7095bef64 Chao Yu 2019-11-01 4060 int *submitted,
4c8ff7095bef64 Chao Yu 2019-11-01 4061 struct writeback_control *wbc,
4c8ff7095bef64 Chao Yu 2019-11-01 4062 enum iostat_type io_type);
4c8ff7095bef64 Chao Yu 2019-11-01 4063 int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index);
94afd6d6e52531 Chao Yu 2021-08-04 4064 void f2fs_update_extent_tree_range_compressed(struct inode *inode,
94afd6d6e52531 Chao Yu 2021-08-04 4065 pgoff_t fofs, block_t blkaddr, unsigned int llen,
94afd6d6e52531 Chao Yu 2021-08-04 4066 unsigned int c_len);
4c8ff7095bef64 Chao Yu 2019-11-01 4067 int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
4c8ff7095bef64 Chao Yu 2019-11-01 4068 unsigned nr_pages, sector_t *last_block_in_bio,
0683728adab251 Chao Yu 2020-02-18 4069 bool is_readahead, bool for_write);
4c8ff7095bef64 Chao Yu 2019-11-01 4070 struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc);
7f59b277f79e8a Eric Biggers 2021-01-04 4071 void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed);
7f59b277f79e8a Eric Biggers 2021-01-04 4072 void f2fs_put_page_dic(struct page *page);
94afd6d6e52531 Chao Yu 2021-08-04 4073 unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn);
4c8ff7095bef64 Chao Yu 2019-11-01 4074 int f2fs_init_compress_ctx(struct compress_ctx *cc);
8bfbfb0ddd706b Chao Yu 2021-05-10 4075 void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
4c8ff7095bef64 Chao Yu 2019-11-01 4076 void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4077 int f2fs_init_compress_inode(struct f2fs_sb_info *sbi);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4078 void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi);
31083031709eea Chao Yu 2020-09-14 4079 int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi);
31083031709eea Chao Yu 2020-09-14 4080 void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
c68d6c88302250 Chao Yu 2020-09-14 4081 int __init f2fs_init_compress_cache(void);
c68d6c88302250 Chao Yu 2020-09-14 4082 void f2fs_destroy_compress_cache(void);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4083 struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4084 void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4085 void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4086 nid_t ino, block_t blkaddr);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4087 bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4088 block_t blkaddr);
6ce19aff0b8cd3 Chao Yu 2021-05-20 4089 void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino);
5ac443e26a0964 Daeho Jeong 2021-03-15 4090 #define inc_compr_inode_stat(inode) \
5ac443e26a0964 Daeho Jeong 2021-03-15 4091 do { \
5ac443e26a0964 Daeho Jeong 2021-03-15 4092 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); \
5ac443e26a0964 Daeho Jeong 2021-03-15 4093 sbi->compr_new_inode++; \
5ac443e26a0964 Daeho Jeong 2021-03-15 4094 } while (0)
5ac443e26a0964 Daeho Jeong 2021-03-15 4095 #define add_compr_block_stat(inode, blocks) \
5ac443e26a0964 Daeho Jeong 2021-03-15 4096 do { \
5ac443e26a0964 Daeho Jeong 2021-03-15 4097 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); \
5ac443e26a0964 Daeho Jeong 2021-03-15 4098 int diff = F2FS_I(inode)->i_cluster_size - blocks; \
5ac443e26a0964 Daeho Jeong 2021-03-15 4099 sbi->compr_written_block += blocks; \
5ac443e26a0964 Daeho Jeong 2021-03-15 4100 sbi->compr_saved_block += diff; \
5ac443e26a0964 Daeho Jeong 2021-03-15 4101 } while (0)
4c8ff7095bef64 Chao Yu 2019-11-01 4102 #else
4c8ff7095bef64 Chao Yu 2019-11-01 4103 static inline bool f2fs_is_compressed_page(struct page *page) { return false; }
4c8ff7095bef64 Chao Yu 2019-11-01 4104 static inline bool f2fs_is_compress_backend_ready(struct inode *inode)
4c8ff7095bef64 Chao Yu 2019-11-01 4105 {
4c8ff7095bef64 Chao Yu 2019-11-01 4106 if (!f2fs_compressed_file(inode))
4c8ff7095bef64 Chao Yu 2019-11-01 4107 return true;
4c8ff7095bef64 Chao Yu 2019-11-01 4108 /* not support compression */
4c8ff7095bef64 Chao Yu 2019-11-01 4109 return false;
4c8ff7095bef64 Chao Yu 2019-11-01 4110 }
4c8ff7095bef64 Chao Yu 2019-11-01 4111 static inline struct page *f2fs_compress_control_page(struct page *page)
4c8ff7095bef64 Chao Yu 2019-11-01 4112 {
4c8ff7095bef64 Chao Yu 2019-11-01 4113 WARN_ON_ONCE(1);
4c8ff7095bef64 Chao Yu 2019-11-01 4114 return ERR_PTR(-EINVAL);
4c8ff7095bef64 Chao Yu 2019-11-01 4115 }
5e6bbde9598230 Chao Yu 2020-04-08 4116 static inline int f2fs_init_compress_mempool(void) { return 0; }
5e6bbde9598230 Chao Yu 2020-04-08 4117 static inline void f2fs_destroy_compress_mempool(void) { }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4118 static inline void f2fs_decompress_cluster(struct decompress_io_ctx *dic) { }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4119 static inline void f2fs_end_read_compressed_page(struct page *page,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4120 bool failed, block_t blkaddr)
7f59b277f79e8a Eric Biggers 2021-01-04 4121 {
7f59b277f79e8a Eric Biggers 2021-01-04 4122 WARN_ON_ONCE(1);
7f59b277f79e8a Eric Biggers 2021-01-04 4123 }
7f59b277f79e8a Eric Biggers 2021-01-04 4124 static inline void f2fs_put_page_dic(struct page *page)
7f59b277f79e8a Eric Biggers 2021-01-04 4125 {
7f59b277f79e8a Eric Biggers 2021-01-04 4126 WARN_ON_ONCE(1);
7f59b277f79e8a Eric Biggers 2021-01-04 4127 }
94afd6d6e52531 Chao Yu 2021-08-04 4128 static inline unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn) { return 0; }
bbe1da7e34ac5a Chao Yu 2021-08-06 4129 static inline bool f2fs_sanity_check_cluster(struct dnode_of_data *dn) { return false; }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4130 static inline int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) { return 0; }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4131 static inline void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) { }
31083031709eea Chao Yu 2020-09-14 4132 static inline int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { return 0; }
31083031709eea Chao Yu 2020-09-14 4133 static inline void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi) { }
c68d6c88302250 Chao Yu 2020-09-14 4134 static inline int __init f2fs_init_compress_cache(void) { return 0; }
c68d6c88302250 Chao Yu 2020-09-14 4135 static inline void f2fs_destroy_compress_cache(void) { }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4136 static inline void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4137 block_t blkaddr) { }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4138 static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4139 struct page *page, nid_t ino, block_t blkaddr) { }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4140 static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4141 struct page *page, block_t blkaddr) { return false; }
6ce19aff0b8cd3 Chao Yu 2021-05-20 4142 static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
6ce19aff0b8cd3 Chao Yu 2021-05-20 4143 nid_t ino) { }
5ac443e26a0964 Daeho Jeong 2021-03-15 4144 #define inc_compr_inode_stat(inode) do { } while (0)
94afd6d6e52531 Chao Yu 2021-08-04 4145 static inline void f2fs_update_extent_tree_range_compressed(struct inode *inode,
94afd6d6e52531 Chao Yu 2021-08-04 4146 pgoff_t fofs, block_t blkaddr, unsigned int llen,
94afd6d6e52531 Chao Yu 2021-08-04 4147 unsigned int c_len) { }
4c8ff7095bef64 Chao Yu 2019-11-01 4148 #endif
4c8ff7095bef64 Chao Yu 2019-11-01 4149
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
On Wed, Dec 08, 2021 at 07:29:33PM +0800, kernel test robot wrote:
> config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20211208/202112081952.NHF8MX2L-lkp@intel.com/config)
> compiler: alpha-linux-gcc (GCC) 11.2.0

Thanks. Strangely, it doesn't reproduce on x86 allmodconfig.
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d2259a1da51c..6e038811f4c8 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -16,7 +16,7 @@
 #include <linux/hardirq.h> /* for in_interrupt() */
 #include <linux/hugetlb_inline.h>
 
-struct pagevec;
+struct folio_batch;
 
 static inline bool mapping_empty(struct address_space *mapping)
 {
@@ -936,7 +936,7 @@ static inline void __delete_from_page_cache(struct page *page, void *shadow)
 }
 void replace_page_cache_page(struct page *old, struct page *new);
 void delete_from_page_cache_batch(struct address_space *mapping,
-				  struct pagevec *pvec);
+				  struct folio_batch *fbatch);
 int try_to_release_page(struct page *page, gfp_t gfp);
 bool filemap_release_folio(struct folio *folio, gfp_t gfp);
 loff_t mapping_seek_hole_data(struct address_space *, loff_t start, loff_t end,
diff --git a/mm/filemap.c b/mm/filemap.c
index 4f00412d72d3..89a10624e361 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -270,30 +270,29 @@ void filemap_remove_folio(struct folio *folio)
 }
 
 /*
- * page_cache_delete_batch - delete several pages from page cache
- * @mapping: the mapping to which pages belong
- * @pvec: pagevec with pages to delete
+ * page_cache_delete_batch - delete several folios from page cache
+ * @mapping: the mapping to which folios belong
+ * @fbatch: batch of folios to delete
  *
- * The function walks over mapping->i_pages and removes pages passed in @pvec
- * from the mapping. The function expects @pvec to be sorted by page index
- * and is optimised for it to be dense.
- * It tolerates holes in @pvec (mapping entries at those indices are not
- * modified). The function expects only THP head pages to be present in the
- * @pvec.
+ * The function walks over mapping->i_pages and removes folios passed in
+ * @fbatch from the mapping. The function expects @fbatch to be sorted
+ * by page index and is optimised for it to be dense.
+ * It tolerates holes in @fbatch (mapping entries at those indices are not
+ * modified).
  *
  * The function expects the i_pages lock to be held.
  */
 static void page_cache_delete_batch(struct address_space *mapping,
-			     struct pagevec *pvec)
+			     struct folio_batch *fbatch)
 {
-	XA_STATE(xas, &mapping->i_pages, pvec->pages[0]->index);
+	XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
 	int total_pages = 0;
 	int i = 0;
 	struct folio *folio;
 
 	mapping_set_update(&xas, mapping);
 	xas_for_each(&xas, folio, ULONG_MAX) {
-		if (i >= pagevec_count(pvec))
+		if (i >= folio_batch_count(fbatch))
 			break;
 
 		/* A swap/dax/shadow entry got inserted? Skip it. */
@@ -306,9 +305,9 @@ static void page_cache_delete_batch(struct address_space *mapping,
 		 * means our page has been removed, which shouldn't be
 		 * possible because we're holding the PageLock.
 		 */
-		if (&folio->page != pvec->pages[i]) {
+		if (folio != fbatch->folios[i]) {
 			VM_BUG_ON_FOLIO(folio->index >
-					pvec->pages[i]->index, folio);
+					fbatch->folios[i]->index, folio);
 			continue;
 		}
 
@@ -316,12 +315,11 @@ static void page_cache_delete_batch(struct address_space *mapping,
 
 		if (folio->index == xas.xa_index)
 			folio->mapping = NULL;
-		/* Leave page->index set: truncation lookup relies on it */
+		/* Leave folio->index set: truncation lookup relies on it */
 
 		/*
-		 * Move to the next page in the vector if this is a regular
-		 * page or the index is of the last sub-page of this compound
-		 * page.
+		 * Move to the next folio in the batch if this is a regular
+		 * folio or the index is of the last sub-page of this folio.
 		 */
 		if (folio->index + folio_nr_pages(folio) - 1 == xas.xa_index)
 			i++;
@@ -332,29 +330,29 @@ static void page_cache_delete_batch(struct address_space *mapping,
 }
 
 void delete_from_page_cache_batch(struct address_space *mapping,
-				  struct pagevec *pvec)
+				  struct folio_batch *fbatch)
 {
 	int i;
 
-	if (!pagevec_count(pvec))
+	if (!folio_batch_count(fbatch))
 		return;
 
 	spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct folio *folio = page_folio(pvec->pages[i]);
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		struct folio *folio = fbatch->folios[i];
 
 		trace_mm_filemap_delete_from_page_cache(folio);
 		filemap_unaccount_folio(mapping, folio);
 	}
-	page_cache_delete_batch(mapping, pvec);
+	page_cache_delete_batch(mapping, fbatch);
 	xa_unlock_irq(&mapping->i_pages);
 	if (mapping_shrinkable(mapping))
 		inode_add_lru(mapping->host);
 	spin_unlock(&mapping->host->i_lock);
 
-	for (i = 0; i < pagevec_count(pvec); i++)
-		filemap_free_folio(mapping, page_folio(pvec->pages[i]));
+	for (i = 0; i < folio_batch_count(fbatch); i++)
+		filemap_free_folio(mapping, fbatch->folios[i]);
 }
 
 int filemap_check_errors(struct address_space *mapping)
@@ -2052,8 +2050,8 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 * @mapping:	The address_space to search.
 * @start:	The starting page cache index.
 * @end:	The final page index (inclusive).
- * @pvec:	Where the resulting entries are placed.
- * @indices:	The cache indices of the entries in @pvec.
+ * @fbatch:	Where the resulting entries are placed.
+ * @indices:	The cache indices of the entries in @fbatch.
 *
 * find_lock_entries() will return a batch of entries from @mapping.
 * Swap, shadow and DAX entries are included. Folios are returned
@@ -2068,7 +2066,7 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 * Return: The number of entries which were found.
 */
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
-		pgoff_t end, struct pagevec *pvec, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
 {
 	XA_STATE(xas, &mapping->i_pages, start);
 	struct folio *folio;
@@ -2088,8 +2086,8 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
 					folio);
 		}
-		indices[pvec->nr] = xas.xa_index;
-		if (!pagevec_add(pvec, &folio->page))
+		indices[fbatch->nr] = xas.xa_index;
+		if (!folio_batch_add(fbatch, folio))
 			break;
 		goto next;
 unlock:
@@ -2106,7 +2104,7 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 	}
 	rcu_read_unlock();
 
-	return pagevec_count(pvec);
+	return folio_batch_count(fbatch);
 }
 
 /**
diff --git a/mm/internal.h b/mm/internal.h
index 36ad6ffe53bf..7759d4ff3323 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -114,7 +114,7 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 }
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
-		pgoff_t end, struct pagevec *pvec, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
diff --git a/mm/shmem.c b/mm/shmem.c
index e909c163fb38..bbfa2d05e787 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -919,7 +919,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
 	unsigned int partial_start = lstart & (PAGE_SIZE - 1);
 	unsigned int partial_end = (lend + 1) & (PAGE_SIZE - 1);
-	struct pagevec pvec;
 	struct folio_batch fbatch;
 	pgoff_t indices[PAGEVEC_SIZE];
 	long nr_swaps_freed = 0;
@@ -932,12 +931,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	if (info->fallocend > start && info->fallocend <= end && !unfalloc)
 		info->fallocend = start;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, index, end - 1,
-			&pvec, indices)) {
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct folio *folio = (struct folio *)pvec.pages[i];
+			&fbatch, indices)) {
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
 
 			index = indices[i];
 
@@ -954,8 +953,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			truncate_inode_folio(mapping, folio);
 			folio_unlock(folio);
 		}
-		pagevec_remove_exceptionals(&pvec);
-		pagevec_release(&pvec);
+		folio_batch_remove_exceptionals(&fbatch);
+		folio_batch_release(&fbatch);
 		cond_resched();
 		index++;
 	}
@@ -988,7 +987,6 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	if (start >= end)
 		return;
 
-	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end) {
 		cond_resched();
diff --git a/mm/truncate.c b/mm/truncate.c
index 357af144df63..e7f5762c43d3 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -56,11 +56,11 @@ static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
 
 /*
 * Unconditionally remove exceptional entries. Usually called from truncate
- * path. Note that the pagevec may be altered by this function by removing
+ * path. Note that the folio_batch may be altered by this function by removing
 * exceptional entries similar to what pagevec_remove_exceptionals does.
 */
-static void truncate_exceptional_pvec_entries(struct address_space *mapping,
-				struct pagevec *pvec, pgoff_t *indices)
+static void truncate_folio_batch_exceptionals(struct address_space *mapping,
+				struct folio_batch *fbatch, pgoff_t *indices)
 {
 	int i, j;
 	bool dax;
@@ -69,11 +69,11 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 	if (shmem_mapping(mapping))
 		return;
 
-	for (j = 0; j < pagevec_count(pvec); j++)
-		if (xa_is_value(pvec->pages[j]))
+	for (j = 0; j < folio_batch_count(fbatch); j++)
+		if (xa_is_value(fbatch->folios[j]))
 			break;
 
-	if (j == pagevec_count(pvec))
+	if (j == folio_batch_count(fbatch))
 		return;
 
 	dax = dax_mapping(mapping);
@@ -82,12 +82,12 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 		xa_lock_irq(&mapping->i_pages);
 	}
 
-	for (i = j; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
+	for (i = j; i < folio_batch_count(fbatch); i++) {
+		struct folio *folio = fbatch->folios[i];
 		pgoff_t index = indices[i];
 
-		if (!xa_is_value(page)) {
-			pvec->pages[j++] = page;
+		if (!xa_is_value(folio)) {
+			fbatch->folios[j++] = folio;
 			continue;
 		}
 
@@ -96,7 +96,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 			continue;
 		}
 
-		__clear_shadow_entry(mapping, index, page);
+		__clear_shadow_entry(mapping, index, folio);
 	}
 
 	if (!dax) {
@@ -105,14 +105,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 			inode_add_lru(mapping->host);
 		spin_unlock(&mapping->host->i_lock);
 	}
-	pvec->nr = j;
-}
-
-static void truncate_folio_batch_exceptionals(struct address_space *mapping,
-		struct folio_batch *fbatch, pgoff_t *indices)
-{
-	truncate_exceptional_pvec_entries(mapping, (struct pagevec *)fbatch,
-			indices);
+	fbatch->nr = j;
 }
 
 /*
@@ -303,7 +296,6 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	pgoff_t		end;		/* exclusive */
 	unsigned int	partial_start;	/* inclusive */
 	unsigned int	partial_end;	/* exclusive */
-	struct pagevec	pvec;
 	struct folio_batch fbatch;
 	pgoff_t		indices[PAGEVEC_SIZE];
 	pgoff_t		index;
@@ -333,18 +325,18 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	else
 		end = (lend + 1) >> PAGE_SHIFT;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, index, end - 1,
-			&pvec, indices)) {
-		index = indices[pagevec_count(&pvec) - 1] + 1;
-		truncate_exceptional_pvec_entries(mapping, &pvec, indices);
-		for (i = 0; i < pagevec_count(&pvec); i++)
-			truncate_cleanup_folio(page_folio(pvec.pages[i]));
-		delete_from_page_cache_batch(mapping, &pvec);
-		for (i = 0; i < pagevec_count(&pvec); i++)
-			unlock_page(pvec.pages[i]);
-		pagevec_release(&pvec);
+			&fbatch, indices)) {
+		index = indices[folio_batch_count(&fbatch) - 1] + 1;
+		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
+		for (i = 0; i < folio_batch_count(&fbatch); i++)
+			truncate_cleanup_folio(fbatch.folios[i]);
+		delete_from_page_cache_batch(mapping, &fbatch);
+		for (i = 0; i < folio_batch_count(&fbatch); i++)
+			folio_unlock(fbatch.folios[i]);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 
@@ -387,7 +379,6 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	if (start >= end)
 		goto out;
 
-	folio_batch_init(&fbatch);
 	index = start;
 	for ( ; ; ) {
 		cond_resched();
@@ -489,16 +480,16 @@ static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
 		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
 {
 	pgoff_t indices[PAGEVEC_SIZE];
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t index = start;
 	unsigned long ret;
 	unsigned long count = 0;
 	int i;
 
-	pagevec_init(&pvec);
-	while (find_lock_entries(mapping, index, end, &pvec, indices)) {
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+	folio_batch_init(&fbatch);
+	while (find_lock_entries(mapping, index, end, &fbatch, indices)) {
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct page *page = &fbatch.folios[i]->page;
 
 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
@@ -525,8 +516,8 @@ static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
 			}
 			count += ret;
 		}
-		pagevec_remove_exceptionals(&pvec);
-		pagevec_release(&pvec);
+		folio_batch_remove_exceptionals(&fbatch);
+		folio_batch_release(&fbatch);
 		cond_resched();
 		index++;
 	}
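For readers unfamiliar with the new API, here is a minimal sketch of the
folio_batch calling convention the diff converts to, modelled on the loops
above. The function name drop_range() is hypothetical; this is an
illustration of the iteration pattern, not a complete kernel function:

    /*
     * Hypothetical helper showing the folio_batch iteration pattern:
     * fill a batch, walk its folios, then release the references.
     */
    static void drop_range(struct address_space *mapping,
    			pgoff_t start, pgoff_t end)
    {
    	struct folio_batch fbatch;
    	pgoff_t indices[PAGEVEC_SIZE];
    	pgoff_t index = start;
    	int i;

    	folio_batch_init(&fbatch);
    	while (find_lock_entries(mapping, index, end, &fbatch, indices)) {
    		for (i = 0; i < folio_batch_count(&fbatch); i++) {
    			struct folio *folio = fbatch.folios[i];

    			if (xa_is_value(folio))	/* shadow/swap entry */
    				continue;
    			folio_unlock(folio);	/* find_lock_entries() locked it */
    		}
    		/* resume after the last entry in this batch */
    		index = indices[folio_batch_count(&fbatch) - 1] + 1;
    		folio_batch_remove_exceptionals(&fbatch);
    		folio_batch_release(&fbatch);	/* drop the references */
    		cond_resched();
    	}
    }

Note that one batch entry now corresponds to one folio (possibly spanning
many pages), which is why the callers above no longer need page_folio().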
find_lock_entries() already only returned the head page of folios, so
convert it to return a folio_batch instead of a pagevec. That cascades
through converting truncate_inode_pages_range() to
delete_from_page_cache_batch() and page_cache_delete_batch().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  4 +--
 mm/filemap.c            | 60 ++++++++++++++++++------------------
 mm/internal.h           |  2 +-
 mm/shmem.c              | 14 ++++-----
 mm/truncate.c           | 67 ++++++++++++++++++-----------------------
 5 files changed, 67 insertions(+), 80 deletions(-)