| Message ID | 20241108174505.1214230-9-axboe@kernel.dk (mailing list archive) |
|---|---|
| State | New |
| Series | [01/13] mm/filemap: change filemap_create_folio() to take a struct kiocb |
On Fri, Nov 08, 2024 at 10:43:31AM -0700, Jens Axboe wrote:
> +++ b/mm/swap.c
> @@ -472,6 +472,8 @@ static void folio_inc_refs(struct folio *folio)
>   */
>  void folio_mark_accessed(struct folio *folio)
>  {
> +	if (folio_test_uncached(folio))
> +		return;
>  	if (lru_gen_enabled()) {

This feels like it might be a problem. If, eg, process A is doing
uncached IO and process B comes along and, say, mmap()s it, I think
we'll need to clear the uncached flag in order to have things work
correctly. It's a performance problem, not a correctness problem.
On 11/8/24 11:33 AM, Matthew Wilcox wrote:
> On Fri, Nov 08, 2024 at 10:43:31AM -0700, Jens Axboe wrote:
>> +++ b/mm/swap.c
>> @@ -472,6 +472,8 @@ static void folio_inc_refs(struct folio *folio)
>>   */
>>  void folio_mark_accessed(struct folio *folio)
>>  {
>> +	if (folio_test_uncached(folio))
>> +		return;
>>  	if (lru_gen_enabled()) {
>
> This feels like it might be a problem. If, eg, process A is doing
> uncached IO and process B comes along and, say, mmap()s it, I think
> we'll need to clear the uncached flag in order to have things work
> correctly. It's a performance problem, not a correctness problem.

I'll take a look, should be fine to just unconditionally clear it here.
uncached is a hint after all. We'll try our best to honor it, but there
will be cases where inline reclaim will fail and you'll get cached
contents, particularly if you mix uncached and buffered, or uncached
and mmap.
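For illustration, a minimal sketch of what "unconditionally clearing it"
could look like in folio_mark_accessed(). It assumes a
folio_clear_uncached() helper exists as the counterpart to the
folio_set_uncached()/folio_test_uncached() calls used in this series;
no such helper appears in the excerpt above, so treat it as
hypothetical:

/*
 * Hypothetical variant of the hunk under discussion: rather than
 * skipping the accessed handling for uncached folios, drop the
 * uncached hint the moment anyone else (e.g. an mmap fault) marks
 * the folio accessed, so normal LRU aging applies from then on.
 */
void folio_mark_accessed(struct folio *folio)
{
	if (folio_test_uncached(folio))
		folio_clear_uncached(folio);	/* assumed helper, mirrors folio_set_uncached() */
	if (lru_gen_enabled()) {
		folio_inc_refs(folio);
		return;
	}
	/* ... remainder of the function unchanged ... */
}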
Hi Jens,

> If the same test case is run with RWF_UNCACHED set for the buffered read,
> the output looks as follows:
>
> Reading bs 65536, uncached 0
> 1s: 153144MB/sec
> 2s: 156760MB/sec
> 3s: 158110MB/sec
> 4s: 158009MB/sec
> 5s: 158043MB/sec
> 6s: 157638MB/sec
> 7s: 157999MB/sec
> 8s: 158024MB/sec
> 9s: 157764MB/sec
> 10s: 157477MB/sec
> 11s: 157417MB/sec
> 12s: 157455MB/sec
> 13s: 157233MB/sec
> 14s: 156692MB/sec
>
> which is just chugging along at ~155GB/sec of read performance. Looking
> at top, we see:
>
>  PID USER     PR  NI   VIRT   RES   SHR S  %CPU %MEM    TIME+ COMMAND
> 7961 root     20   0 267004     0     0 S  3180  0.0  5:37.95 uncached
> 8024 axboe    20   0  14292  4096     0 R   1.0  0.0  0:00.13 top
>
> where just the test app is using CPU, no reclaim is taking place outside
> of the main thread. Not only is performance 65% better, it's also using
> half the CPU to do it.

Do you have numbers of similar code using O_DIRECT just to
see the impact of the memcpy from the page cache to the userspace
buffer...

Thanks!
metze
On 11/11/24 6:04 AM, Stefan Metzmacher wrote:
> Hi Jens,
>
>> If the same test case is run with RWF_UNCACHED set for the buffered read,
>> the output looks as follows:
>>
>> Reading bs 65536, uncached 0
>> 1s: 153144MB/sec
>> 2s: 156760MB/sec
>> 3s: 158110MB/sec
>> 4s: 158009MB/sec
>> 5s: 158043MB/sec
>> 6s: 157638MB/sec
>> 7s: 157999MB/sec
>> 8s: 158024MB/sec
>> 9s: 157764MB/sec
>> 10s: 157477MB/sec
>> 11s: 157417MB/sec
>> 12s: 157455MB/sec
>> 13s: 157233MB/sec
>> 14s: 156692MB/sec
>>
>> which is just chugging along at ~155GB/sec of read performance. Looking
>> at top, we see:
>>
>>  PID USER     PR  NI   VIRT   RES   SHR S  %CPU %MEM    TIME+ COMMAND
>> 7961 root     20   0 267004     0     0 S  3180  0.0  5:37.95 uncached
>> 8024 axboe    20   0  14292  4096     0 R   1.0  0.0  0:00.13 top
>>
>> where just the test app is using CPU, no reclaim is taking place outside
>> of the main thread. Not only is performance 65% better, it's also using
>> half the CPU to do it.
>
> Do you have numbers of similar code using O_DIRECT just to
> see the impact of the memcpy from the page cache to the userspace
> buffer...

I don't, but I can surely generate those. I didn't consider them that
interesting for this comparison, which is why I didn't do them: O_DIRECT
reads for bigger block sizes (or even smaller block sizes, if using
io_uring + registered buffers) will definitely have lower overhead than
uncached buffered IO. Copying 160GB/sec isn't free :-)

For writes it's a bit more complicated to do an apples to apples
comparison, as uncached IO isn't synchronous like O_DIRECT is. It only
kicks off the IO, doesn't wait for it.
On 11/11/24 7:10 AM, Jens Axboe wrote:
> On 11/11/24 6:04 AM, Stefan Metzmacher wrote:
>> Hi Jens,
>>
>>> If the same test case is run with RWF_UNCACHED set for the buffered read,
>>> the output looks as follows:
>>>
>>> Reading bs 65536, uncached 0
>>> 1s: 153144MB/sec
>>> 2s: 156760MB/sec
>>> 3s: 158110MB/sec
>>> 4s: 158009MB/sec
>>> 5s: 158043MB/sec
>>> 6s: 157638MB/sec
>>> 7s: 157999MB/sec
>>> 8s: 158024MB/sec
>>> 9s: 157764MB/sec
>>> 10s: 157477MB/sec
>>> 11s: 157417MB/sec
>>> 12s: 157455MB/sec
>>> 13s: 157233MB/sec
>>> 14s: 156692MB/sec
>>>
>>> which is just chugging along at ~155GB/sec of read performance. Looking
>>> at top, we see:
>>>
>>>  PID USER     PR  NI   VIRT   RES   SHR S  %CPU %MEM    TIME+ COMMAND
>>> 7961 root     20   0 267004     0     0 S  3180  0.0  5:37.95 uncached
>>> 8024 axboe    20   0  14292  4096     0 R   1.0  0.0  0:00.13 top
>>>
>>> where just the test app is using CPU, no reclaim is taking place outside
>>> of the main thread. Not only is performance 65% better, it's also using
>>> half the CPU to do it.
>>
>> Do you have numbers of similar code using O_DIRECT just to
>> see the impact of the memcpy from the page cache to the userspace
>> buffer...
>
> I don't, but I can surely generate those. I didn't consider them that
> interesting for this comparison, which is why I didn't do them: O_DIRECT
> reads for bigger block sizes (or even smaller block sizes, if using
> io_uring + registered buffers) will definitely have lower overhead than
> uncached buffered IO. Copying 160GB/sec isn't free :-)
>
> For writes it's a bit more complicated to do an apples to apples
> comparison, as uncached IO isn't synchronous like O_DIRECT is. It only
> kicks off the IO, doesn't wait for it.

Here's the read side - same test as above, using 64K reads:

1s: 24947MB/sec
2s: 24840MB/sec
3s: 24666MB/sec
4s: 24549MB/sec
5s: 24575MB/sec
6s: 24669MB/sec
7s: 24611MB/sec
8s: 24369MB/sec
9s: 24261MB/sec
10s: 24125MB/sec

which is in fact pretty depressing. As before, this is 32 threads, each
reading a file from separate XFS mount points, so 32 file systems in
total. If I bump the read size to 128K, it's about 42GB/sec, and 256K
gets you to 71-72GB/sec. Just goes to show that you need parallelism to
get the best performance out of the devices with O_DIRECT. If I run
io_uring + dio + registered buffers, I can get ~172GB/sec out of reading
the same 32 files from 32 threads.
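For anyone wanting to reproduce a simplified version of the uncached
read test, below is a rough userspace sketch using preadv2(2). It is
not the benchmark used above: RWF_UNCACHED is defined locally with the
value proposed in this series (it is not in released uapi headers), and
"testfile" is a placeholder path.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080	/* value proposed in this series */
#endif

int main(void)
{
	struct iovec iov;
	ssize_t ret;
	off_t off = 0;
	char *buf;
	int fd;

	fd = open("testfile", O_RDONLY);	/* placeholder path */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign((void **)&buf, 4096, 65536))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = 65536;

	/* buffered reads, but folios are pruned once each read completes */
	while ((ret = preadv2(fd, &iov, 1, off, RWF_UNCACHED)) > 0)
		off += ret;
	if (ret < 0)
		perror("preadv2");
	close(fd);
	free(buf);
	return 0;
}

Running one such loop per thread, each against a file on its own file
system, would approximate the 32-thread setup described in the thread.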
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 491eeb73e725..5abc53991cd0 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -320,6 +320,7 @@ struct readahead_control;
 #define IOCB_NOWAIT		(__force int) RWF_NOWAIT
 #define IOCB_APPEND		(__force int) RWF_APPEND
 #define IOCB_ATOMIC		(__force int) RWF_ATOMIC
+#define IOCB_UNCACHED		(__force int) RWF_UNCACHED
 
 /* non-RWF related bits - start at 16 */
 #define IOCB_EVENTFD		(1 << 16)
@@ -354,7 +355,8 @@ struct readahead_control;
 	{ IOCB_SYNC,		"SYNC" }, \
 	{ IOCB_NOWAIT,		"NOWAIT" }, \
 	{ IOCB_APPEND,		"APPEND" }, \
-	{ IOCB_ATOMIC,		"ATOMIC"}, \
+	{ IOCB_ATOMIC,		"ATOMIC" }, \
+	{ IOCB_UNCACHED,	"UNCACHED" }, \
 	{ IOCB_EVENTFD,		"EVENTFD"}, \
 	{ IOCB_DIRECT,		"DIRECT" }, \
 	{ IOCB_WRITE,		"WRITE" }, \
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 753971770733..dc77cd8ae1a3 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -332,9 +332,13 @@ typedef int __bitwise __kernel_rwf_t;
 /* Atomic Write */
 #define RWF_ATOMIC	((__force __kernel_rwf_t)0x00000040)
 
+/* buffered IO that drops the cache after reading or writing data */
+#define RWF_UNCACHED	((__force __kernel_rwf_t)0x00000080)
+
 /* mask of flags supported by the kernel */
 #define RWF_SUPPORTED	(RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
-			 RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC)
+			 RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC |\
+			 RWF_UNCACHED)
 
 #define PROCFS_IOCTL_MAGIC		'f'
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 7f8d13f06c04..6f65025782bb 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2471,6 +2471,8 @@ static int filemap_create_folio(struct kiocb *iocb,
 	folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order);
 	if (!folio)
 		return -ENOMEM;
+	if (iocb->ki_flags & IOCB_UNCACHED)
+		folio_set_uncached(folio);
 
 	/*
 	 * Protect against truncate / hole punch. Grabbing invalidate_lock
@@ -2516,6 +2518,8 @@ static int filemap_readahead(struct kiocb *iocb, struct file *file,
 
 	if (iocb->ki_flags & IOCB_NOIO)
 		return -EAGAIN;
+	if (iocb->ki_flags & IOCB_UNCACHED)
+		ractl.uncached = 1;
 	page_cache_async_ra(&ractl, folio, last_index - folio->index);
 	return 0;
 }
@@ -2545,6 +2549,8 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
 			return -EAGAIN;
 		if (iocb->ki_flags & IOCB_NOWAIT)
 			flags = memalloc_noio_save();
+		if (iocb->ki_flags & IOCB_UNCACHED)
+			ractl.uncached = 1;
 		page_cache_sync_ra(&ractl, last_index - index);
 		if (iocb->ki_flags & IOCB_NOWAIT)
 			memalloc_noio_restore(flags);
@@ -2705,8 +2711,16 @@ ssize_t filemap_read(struct kiocb *iocb, struct iov_iter *iter,
 			}
 		}
 put_folios:
-		for (i = 0; i < folio_batch_count(&fbatch); i++)
-			folio_put(fbatch.folios[i]);
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
+
+			if (folio_test_uncached(folio)) {
+				folio_lock(folio);
+				invalidate_complete_folio2(mapping, folio, 0);
+				folio_unlock(folio);
+			}
+			folio_put(folio);
+		}
 		folio_batch_init(&fbatch);
 	} while (iov_iter_count(iter) && iocb->ki_pos < isize && !error);
 
diff --git a/mm/swap.c b/mm/swap.c
index 835bdf324b76..f2457acae383 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -472,6 +472,8 @@ static void folio_inc_refs(struct folio *folio)
  */
 void folio_mark_accessed(struct folio *folio)
 {
+	if (folio_test_uncached(folio))
+		return;
 	if (lru_gen_enabled()) {
 		folio_inc_refs(folio);
 		return;
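The ractl.uncached bit set in filemap_readahead() and
filemap_get_pages() above implies a flag on struct readahead_control
that this patch does not add, so it presumably comes from an earlier
patch in the series. A hedged sketch of what that addition might look
like (field name assumed from the ractl.uncached usage; the elided
fields are unchanged):

/* Sketch, not part of this patch: flag on the readahead descriptor so
 * readahead can tag newly allocated folios as uncached. */
struct readahead_control {
	struct file *file;
	struct address_space *mapping;
	struct file_ra_state *ra;
	/* ... existing private fields ... */
	unsigned int uncached:1;	/* assumed: set for IOCB_UNCACHED reads */
};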
Add RWF_UNCACHED as a read operation flag, which means that any data
read will be removed from the page cache upon completion. Uses the page
cache to synchronize, and simply prunes folios that were instantiated
when the operation completes. While it would be possible to use private
pages for this, using the page cache as synchronization is handy for a
variety of reasons:

1) No special truncate magic is needed
2) Async buffered reads need some place to serialize, and using the
   page cache is a lot easier than writing extra code for this
3) The pruning cost is pretty reasonable, and the code to support this
   is much simpler as a result.

You can think of uncached buffered IO as being the much more attractive
cousin of O_DIRECT - it has none of the restrictions of O_DIRECT. Yes,
it will copy the data, but unlike regular buffered IO, it doesn't run
into the unpredictability of the page cache in terms of reclaim.

As an example, on a test box with 32 drives, reading them with buffered
IO looks as follows:

Reading bs 65536, uncached 0
 1s: 145945MB/sec
 2s: 158067MB/sec
 3s: 157007MB/sec
 4s: 148622MB/sec
 5s: 118824MB/sec
 6s: 70494MB/sec
 7s: 41754MB/sec
 8s: 90811MB/sec
 9s: 92204MB/sec
10s: 95178MB/sec
11s: 95488MB/sec
12s: 95552MB/sec
13s: 96275MB/sec

where it's quite easy to see where the page cache filled up, and
performance went from good to erratic, and finally settles at a much
lower rate. Looking at top while this is ongoing, we see:

 PID USER     PR  NI   VIRT   RES   SHR S  %CPU %MEM    TIME+ COMMAND
7535 root     20   0 267004     0     0 S  3199  0.0  8:40.65 uncached
3326 root     20   0      0     0     0 R 100.0  0.0  0:16.40 kswapd4
3327 root     20   0      0     0     0 R 100.0  0.0  0:17.22 kswapd5
3328 root     20   0      0     0     0 R 100.0  0.0  0:13.29 kswapd6
3332 root     20   0      0     0     0 R 100.0  0.0  0:11.11 kswapd10
3339 root     20   0      0     0     0 R 100.0  0.0  0:16.25 kswapd17
3348 root     20   0      0     0     0 R 100.0  0.0  0:16.40 kswapd26
3343 root     20   0      0     0     0 R 100.0  0.0  0:16.30 kswapd21
3344 root     20   0      0     0     0 R 100.0  0.0  0:11.92 kswapd22
3349 root     20   0      0     0     0 R 100.0  0.0  0:16.28 kswapd27
3352 root     20   0      0     0     0 R  99.7  0.0  0:11.89 kswapd30
3353 root     20   0      0     0     0 R  96.7  0.0  0:16.04 kswapd31
3329 root     20   0      0     0     0 R  96.4  0.0  0:11.41 kswapd7
3345 root     20   0      0     0     0 R  96.4  0.0  0:13.40 kswapd23
3330 root     20   0      0     0     0 S  91.1  0.0  0:08.28 kswapd8
3350 root     20   0      0     0     0 S  86.8  0.0  0:11.13 kswapd28
3325 root     20   0      0     0     0 S  76.3  0.0  0:07.43 kswapd3
3341 root     20   0      0     0     0 S  74.7  0.0  0:08.85 kswapd19
3334 root     20   0      0     0     0 S  71.7  0.0  0:10.04 kswapd12
3351 root     20   0      0     0     0 R  60.5  0.0  0:09.59 kswapd29
3323 root     20   0      0     0     0 R  57.6  0.0  0:11.50 kswapd1
[...]

which is just showing a partial list of the 32 kswapd threads that are
running mostly full tilt, burning ~28 full CPU cores.

If the same test case is run with RWF_UNCACHED set for the buffered
read, the output looks as follows:

Reading bs 65536, uncached 0
 1s: 153144MB/sec
 2s: 156760MB/sec
 3s: 158110MB/sec
 4s: 158009MB/sec
 5s: 158043MB/sec
 6s: 157638MB/sec
 7s: 157999MB/sec
 8s: 158024MB/sec
 9s: 157764MB/sec
10s: 157477MB/sec
11s: 157417MB/sec
12s: 157455MB/sec
13s: 157233MB/sec
14s: 156692MB/sec

which is just chugging along at ~155GB/sec of read performance. Looking
at top, we see:

 PID USER     PR  NI   VIRT   RES   SHR S  %CPU %MEM    TIME+ COMMAND
7961 root     20   0 267004     0     0 S  3180  0.0  5:37.95 uncached
8024 axboe    20   0  14292  4096     0 R   1.0  0.0  0:00.13 top

where just the test app is using CPU, no reclaim is taking place outside
of the main thread. Not only is performance 65% better, it's also using
half the CPU to do it.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/fs.h      |  4 +++-
 include/uapi/linux/fs.h |  6 +++++-
 mm/filemap.c            | 18 ++++++++++++++++--
 mm/swap.c               |  2 ++
 4 files changed, 26 insertions(+), 4 deletions(-)
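One practical consequence of the RWF_SUPPORTED hunk above: kernels
without this series reject the flag at submission time, since the
RWF_* flag check fails for bits outside RWF_SUPPORTED and the read
returns EOPNOTSUPP (the documented preadv2(2) behavior for unsupported
flags). A small sketch of a runtime probe an application could use,
again defining RWF_UNCACHED locally with the value from this patch; the
fd is assumed to be an open regular file:

#define _GNU_SOURCE
#include <errno.h>
#include <stdbool.h>
#include <sys/uio.h>

#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080	/* value proposed in this series */
#endif

/* Returns true if the running kernel accepts RWF_UNCACHED. Flags are
 * validated before any IO, so a zero-length or 1-byte read suffices;
 * any error other than EOPNOTSUPP means the flag itself was accepted. */
static bool uncached_supported(int fd)
{
	char c;
	struct iovec iov = { .iov_base = &c, .iov_len = 1 };

	if (preadv2(fd, &iov, 1, 0, RWF_UNCACHED) >= 0)
		return true;
	return errno != EOPNOTSUPP;
}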