Message ID | 20210121041616.3955703-3-willy@infradead.org (mailing list archive)
---|---
State | New, archived
Series | Refactor generic_file_buffered_read
On Thu, Jan 21, 2021 at 04:16:00AM +0000, Matthew Wilcox (Oracle) wrote:
> Increasing the batch size runs into diminishing returns. It's probably
> better to make, eg, three calls to filemap_get_pages() than it is to
> call into kmalloc().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
Hi:

On 2021/1/21 12:16, Matthew Wilcox (Oracle) wrote:
> Increasing the batch size runs into diminishing returns. It's probably
> better to make, eg, three calls to filemap_get_pages() than it is to
> call into kmalloc().
>

LGTM. Thanks.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/filemap.c | 15 ++-------------
>  1 file changed, 2 insertions(+), 13 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 934e04f1760ef..5dec04c8e16b0 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2421,8 +2421,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  	struct file_ra_state *ra = &filp->f_ra;
>  	struct address_space *mapping = filp->f_mapping;
>  	struct inode *inode = mapping->host;
> -	struct page *pages_onstack[PAGEVEC_SIZE], **pages = NULL;
> -	unsigned int nr_pages = min_t(unsigned int, 512,
> +	struct page *pages[PAGEVEC_SIZE];
> +	unsigned int nr_pages = min_t(unsigned int, PAGEVEC_SIZE,
>  			((iocb->ki_pos + iter->count + PAGE_SIZE - 1) >> PAGE_SHIFT) -
>  			(iocb->ki_pos >> PAGE_SHIFT));
>  	int i, pg_nr, error = 0;
> @@ -2433,14 +2433,6 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  		return 0;
>  	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);
>
> -	if (nr_pages > ARRAY_SIZE(pages_onstack))
> -		pages = kmalloc_array(nr_pages, sizeof(void *), GFP_KERNEL);
> -
> -	if (!pages) {
> -		pages = pages_onstack;
> -		nr_pages = min_t(unsigned int, nr_pages, ARRAY_SIZE(pages_onstack));
> -	}
> -
>  	do {
>  		cond_resched();
>
> @@ -2525,9 +2517,6 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>
>  	file_accessed(filp);
>
> -	if (pages != pages_onstack)
> -		kfree(pages);
> -
>  	return written ? written : error;
>  }
>  EXPORT_SYMBOL_GPL(generic_file_buffered_read);
>
diff --git a/mm/filemap.c b/mm/filemap.c
index 934e04f1760ef..5dec04c8e16b0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2421,8 +2421,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 	struct file_ra_state *ra = &filp->f_ra;
 	struct address_space *mapping = filp->f_mapping;
 	struct inode *inode = mapping->host;
-	struct page *pages_onstack[PAGEVEC_SIZE], **pages = NULL;
-	unsigned int nr_pages = min_t(unsigned int, 512,
+	struct page *pages[PAGEVEC_SIZE];
+	unsigned int nr_pages = min_t(unsigned int, PAGEVEC_SIZE,
 			((iocb->ki_pos + iter->count + PAGE_SIZE - 1) >> PAGE_SHIFT) -
 			(iocb->ki_pos >> PAGE_SHIFT));
 	int i, pg_nr, error = 0;
@@ -2433,14 +2433,6 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 		return 0;
 	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);

-	if (nr_pages > ARRAY_SIZE(pages_onstack))
-		pages = kmalloc_array(nr_pages, sizeof(void *), GFP_KERNEL);
-
-	if (!pages) {
-		pages = pages_onstack;
-		nr_pages = min_t(unsigned int, nr_pages, ARRAY_SIZE(pages_onstack));
-	}
-
 	do {
 		cond_resched();

@@ -2525,9 +2517,6 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,

 	file_accessed(filp);

-	if (pages != pages_onstack)
-		kfree(pages);
-
 	return written ? written : error;
 }
 EXPORT_SYMBOL_GPL(generic_file_buffered_read);
Increasing the batch size runs into diminishing returns. It's probably
better to make, eg, three calls to filemap_get_pages() than it is to
call into kmalloc().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)