Message ID | 20220910065058.3303831-5-hch@lst.de (mailing list archive)
---|---
State | New
Series | [1/5] mm: add PSI accounting around ->read_folio and ->readahead calls
On Sat, Sep 10, 2022 at 08:50:57AM +0200, Christoph Hellwig wrote:
> erofs uses an additional address space for compressed data read from disk
> in addition to the one directly associated with the inode. Reading into
> the lower address space is open coded using add_to_page_cache_lru instead
> of using the filemap.c helper for page allocation micro-optimizations,
> which means it is not covered by the MM PSI annotations for ->read_folio
> and ->readahead, so add manual ones instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
On Sat, Sep 10, 2022 at 08:50:57AM +0200, Christoph Hellwig wrote:
> erofs uses an additional address space for compressed data read from disk
> in addition to the one directly associated with the inode. Reading into
> the lower address space is open coded using add_to_page_cache_lru instead
> of using the filemap.c helper for page allocation micro-optimizations,
> which means it is not covered by the MM PSI annotations for ->read_folio
> and ->readahead, so add manual ones instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Thanks, looks good to me (although I haven't had a chance to dig further
into the PSI internals...)

Acked-by: Gao Xiang <hsiangkao@linux.alibaba.com>

Thanks,
Gao Xiang
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 5792ca9e0d5ef..143a101a36887 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -7,6 +7,7 @@
 #include "zdata.h"
 #include "compress.h"
 #include <linux/prefetch.h>
+#include <linux/psi.h>

 #include <trace/events/erofs.h>

@@ -1365,6 +1366,8 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
 	struct block_device *last_bdev;
 	unsigned int nr_bios = 0;
 	struct bio *bio = NULL;
+	/* initialize to 1 to make skip psi_memstall_leave unless needed */
+	unsigned long pflags = 1;

 	bi_private = jobqueueset_init(sb, q, fgq, force_fg);
 	qtail[JQ_BYPASS] = &q[JQ_BYPASS]->head;
@@ -1414,10 +1417,15 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
 			if (bio && (cur != last_index + 1 ||
 				    last_bdev != mdev.m_bdev)) {
 submit_bio_retry:
+				if (!pflags)
+					psi_memstall_leave(&pflags);
 				submit_bio(bio);
 				bio = NULL;
 			}

+			if (unlikely(PageWorkingset(page)))
+				psi_memstall_enter(&pflags);
+
 			if (!bio) {
 				bio = bio_alloc(mdev.m_bdev, BIO_MAX_VECS,
 						REQ_OP_READ, GFP_NOIO);
@@ -1445,8 +1453,11 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
 			move_to_bypass_jobqueue(pcl, qtail, owned_head);
 	} while (owned_head != Z_EROFS_PCLUSTER_TAIL);

-	if (bio)
+	if (bio) {
+		if (!pflags)
+			psi_memstall_leave(&pflags);
 		submit_bio(bio);
+	}

 	/*
 	 * although background is preferred, no one is pending for submission.
erofs uses an additional address space for compressed data read from disk
in addition to the one directly associated with the inode. Reading into
the lower address space is open coded using add_to_page_cache_lru instead
of using the filemap.c helper for page allocation micro-optimizations,
which means it is not covered by the MM PSI annotations for ->read_folio
and ->readahead, so add manual ones instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/erofs/zdata.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)