
mm, fadvise: improve the expensive remote LRU cache draining after FADV_DONTNEED

Message ID 20200921014317.73915-1-laoar.shao@gmail.com (mailing list archive)
State New, archived
Series mm, fadvise: improve the expensive remote LRU cache draining after FADV_DONTNEED

Commit Message

Yafang Shao Sept. 21, 2020, 1:43 a.m. UTC
Our users reported random latency spikes while their RT process was
running. We eventually found that the spikes were caused by
FADV_DONTNEED, which may call lru_add_drain_all() to drain the LRU
cache on remote CPUs and then wait for the per-cpu work to complete.
That wait time is unpredictable and can reach tens of milliseconds.
This behavior is unreasonable, because the process is bound to a
specific CPU and the file is accessed only by that process; IOW, there
should be no pagecache pages on a per-cpu pagevec of a remote CPU.
The unnecessary drain is partially caused by a wrong comparison between
the number of invalidated pages and the target number:
	if (count < (end_index - start_index + 1))
count is how many pages were actually invalidated, while
(end_index - start_index + 1) is how many pages the range covers.
Using (end_index - start_index + 1) as the target is incorrect, because
start_index and end_index are page indexes within the file, and part of
that range may not be backed by pagecache pages at all. We'd better use
inode->i_data.nrpages as the target.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/fadvise.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
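
For context, the reported scenario is a CPU-pinned task issuing
POSIX_FADV_DONTNEED against a file that only it touches. A minimal
userspace sketch of that pattern (the file name and the CPU number are
made up for illustration):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;
	int fd;

	/* Pin this task to CPU 0, as the reported RT process is pinned. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	/* Stand-in for a file accessed by this task alone. */
	fd = open("data.log", O_RDONLY);
	if (fd < 0)
		return 1;

	/*
	 * Drop the whole file from the pagecache. Before the fix below,
	 * this call could reach lru_add_drain_all() and block on the
	 * per-cpu drain work of every other CPU.
	 */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

	close(fd);
	return 0;
}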

Comments

Mel Gorman Sept. 21, 2020, 10:34 p.m. UTC | #1
On Mon, Sep 21, 2020 at 09:43:17AM +0800, Yafang Shao wrote:
> Our users reported random latency spikes while their RT process was
> running. We eventually found that the spikes were caused by
> FADV_DONTNEED, which may call lru_add_drain_all() to drain the LRU
> cache on remote CPUs and then wait for the per-cpu work to complete.
> That wait time is unpredictable and can reach tens of milliseconds.
> This behavior is unreasonable, because the process is bound to a
> specific CPU and the file is accessed only by that process; IOW, there
> should be no pagecache pages on a per-cpu pagevec of a remote CPU.
> The unnecessary drain is partially caused by a wrong comparison between
> the number of invalidated pages and the target number:
> 	if (count < (end_index - start_index + 1))
> count is how many pages were actually invalidated, while
> (end_index - start_index + 1) is how many pages the range covers.
> Using (end_index - start_index + 1) as the target is incorrect, because
> start_index and end_index are page indexes within the file, and part of
> that range may not be backed by pagecache pages at all. We'd better use
> inode->i_data.nrpages as the target.
> 

How does that work if the invalidation is for a subset of the file?
Yafang Shao Sept. 22, 2020, 2:12 a.m. UTC | #2
On Tue, Sep 22, 2020 at 6:34 AM Mel Gorman <mgorman@suse.de> wrote:
>
> On Mon, Sep 21, 2020 at 09:43:17AM +0800, Yafang Shao wrote:
> > Our users reported random latency spikes while their RT process was
> > running. We eventually found that the spikes were caused by
> > FADV_DONTNEED, which may call lru_add_drain_all() to drain the LRU
> > cache on remote CPUs and then wait for the per-cpu work to complete.
> > That wait time is unpredictable and can reach tens of milliseconds.
> > This behavior is unreasonable, because the process is bound to a
> > specific CPU and the file is accessed only by that process; IOW, there
> > should be no pagecache pages on a per-cpu pagevec of a remote CPU.
> > The unnecessary drain is partially caused by a wrong comparison between
> > the number of invalidated pages and the target number:
> >       if (count < (end_index - start_index + 1))
> > count is how many pages were actually invalidated, while
> > (end_index - start_index + 1) is how many pages the range covers.
> > Using (end_index - start_index + 1) as the target is incorrect, because
> > start_index and end_index are page indexes within the file, and part of
> > that range may not be backed by pagecache pages at all. We'd better use
> > inode->i_data.nrpages as the target.
> >
>
> How does that work if the invalidation is for a subset of the file?
>

I realized that as well. There are a couple of options to improve it.

Option 1, take the min as the target.
-                       if (count < (end_index - start_index + 1)) {
+                       target = min_t(unsigned long, inode->i_data.nrpages,
+                                      end_index - start_index + 1);
+                       if (count < target) {
                                lru_add_drain_all();

Option 2, change the prototype of invalidate_mapping_pages and then
check how many pages were skipped.

+struct invalidate_stat {
+	unsigned long skipped;		/* how many pages were skipped */
+	unsigned long invalidated;	/* how many pages were invalidated */
+};

-unsigned long invalidate_mapping_pages(struct address_space *mapping,
-		pgoff_t start, pgoff_t end);
+unsigned long invalidate_mapping_pages(struct address_space *mapping,
+		struct invalidate_stat *stat,
+		pgoff_t start, pgoff_t end);
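
With Option 2, the fadvise path could then key the drain off the
skipped count instead of the range length. A rough sketch, assuming
callers that do not care about the stats may pass NULL:

		struct invalidate_stat stat = { 0 };

		invalidate_mapping_pages(mapping, &stat, start_index,
					 end_index);
		/*
		 * Only pay for the remote drain when some pages in the
		 * range were skipped, i.e. could not be invalidated.
		 */
		if (stat.skipped) {
			lru_add_drain_all();
			invalidate_mapping_pages(mapping, NULL, start_index,
						 end_index);
		}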


I prefer option 2.
What do you think?
Mel Gorman Sept. 22, 2020, 7:23 a.m. UTC | #3
On Tue, Sep 22, 2020 at 10:12:31AM +0800, Yafang Shao wrote:
> On Tue, Sep 22, 2020 at 6:34 AM Mel Gorman <mgorman@suse.de> wrote:
> >
> > On Mon, Sep 21, 2020 at 09:43:17AM +0800, Yafang Shao wrote:
> > > Our users reported random latency spikes while their RT process was
> > > running. We eventually found that the spikes were caused by
> > > FADV_DONTNEED, which may call lru_add_drain_all() to drain the LRU
> > > cache on remote CPUs and then wait for the per-cpu work to complete.
> > > That wait time is unpredictable and can reach tens of milliseconds.
> > > This behavior is unreasonable, because the process is bound to a
> > > specific CPU and the file is accessed only by that process; IOW, there
> > > should be no pagecache pages on a per-cpu pagevec of a remote CPU.
> > > The unnecessary drain is partially caused by a wrong comparison between
> > > the number of invalidated pages and the target number:
> > >       if (count < (end_index - start_index + 1))
> > > count is how many pages were actually invalidated, while
> > > (end_index - start_index + 1) is how many pages the range covers.
> > > Using (end_index - start_index + 1) as the target is incorrect, because
> > > start_index and end_index are page indexes within the file, and part of
> > > that range may not be backed by pagecache pages at all. We'd better use
> > > inode->i_data.nrpages as the target.
> > >
> >
> > How does that work if the invalidation is for a subset of the file?
> >
> 
> I realized that as well. There are a couple of options to improve it.
> 
> Option 1, take the min as the target.
> -                       if (count < (end_index - start_index + 1)) {
> +                       target = min_t(unsigned long, inode->i_data.nrpages,
> +                                      end_index - start_index + 1);
> +                       if (count < target) {
>                                 lru_add_drain_all();
> 
> Option 2, change the prototype of invalidate_mapping_pages and then
> check how many pages were skipped.
> 
> +struct invalidate_stat {
> +	unsigned long skipped;		/* how many pages were skipped */
> +	unsigned long invalidated;	/* how many pages were invalidated */
> +};
> 
> -unsigned long invalidate_mapping_pages(struct address_space *mapping,
> -		pgoff_t start, pgoff_t end);
> +unsigned long invalidate_mapping_pages(struct address_space *mapping,
> +		struct invalidate_stat *stat,
> +		pgoff_t start, pgoff_t end);
> 

That would involve updating each caller and the struct is
unnecessarily heavy. Create one that returns via **nr_lruvec. For
invalidate_mapping_pages, pass in NULL as nr_lruvec.  Create a new helper
for fadvise that accepts nr_lruvec. In the common helper, account for pages
that are likely on an LRU and count them in nr_lruvec if !NULL. Update
fadvise to drain only if pages were skipped that were on the lruvec. That
should also deal with the case where holes have been punched between
start and end.
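
Shape-wise, that suggestion might look like the sketch below; the
helper names and exact signatures here are hypothetical, only the
split matters:

/* Common worker; nr_lruvec may be NULL when the caller does not care. */
static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
		pgoff_t start, pgoff_t end, unsigned long *nr_lruvec)
{
	/*
	 * Same page walk as today's invalidate_mapping_pages(); when a
	 * page cannot be invalidated and is likely sitting on an LRU
	 * pagevec, bump *nr_lruvec if the caller supplied it.
	 */
	return 0;	/* walk elided in this sketch */
}

/* The existing entry point keeps its prototype for all callers. */
unsigned long invalidate_mapping_pages(struct address_space *mapping,
		pgoff_t start, pgoff_t end)
{
	return __invalidate_mapping_pages(mapping, start, end, NULL);
}

/* New fadvise-only helper that also reports the pagevec count. */
unsigned long invalidate_mapping_pagevec(struct address_space *mapping,
		pgoff_t start, pgoff_t end, unsigned long *nr_lruvec)
{
	return __invalidate_mapping_pages(mapping, start, end, nr_lruvec);
}

fadvise would then drain only when nr_lruvec ended up non-zero, which
also copes with holes punched between start and end:

	unsigned long nr_lruvec = 0;

	count = invalidate_mapping_pagevec(mapping, start_index,
					   end_index, &nr_lruvec);
	if (nr_lruvec) {
		lru_add_drain_all();
		invalidate_mapping_pages(mapping, start_index, end_index);
	}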
Yafang Shao Sept. 23, 2020, 10:05 a.m. UTC | #4
On Tue, Sep 22, 2020 at 3:23 PM Mel Gorman <mgorman@suse.de> wrote:
>
> On Tue, Sep 22, 2020 at 10:12:31AM +0800, Yafang Shao wrote:
> > On Tue, Sep 22, 2020 at 6:34 AM Mel Gorman <mgorman@suse.de> wrote:
> > >
> > > On Mon, Sep 21, 2020 at 09:43:17AM +0800, Yafang Shao wrote:
> > > > Our users reported random latency spikes while their RT process was
> > > > running. We eventually found that the spikes were caused by
> > > > FADV_DONTNEED, which may call lru_add_drain_all() to drain the LRU
> > > > cache on remote CPUs and then wait for the per-cpu work to complete.
> > > > That wait time is unpredictable and can reach tens of milliseconds.
> > > > This behavior is unreasonable, because the process is bound to a
> > > > specific CPU and the file is accessed only by that process; IOW, there
> > > > should be no pagecache pages on a per-cpu pagevec of a remote CPU.
> > > > The unnecessary drain is partially caused by a wrong comparison between
> > > > the number of invalidated pages and the target number:
> > > >       if (count < (end_index - start_index + 1))
> > > > count is how many pages were actually invalidated, while
> > > > (end_index - start_index + 1) is how many pages the range covers.
> > > > Using (end_index - start_index + 1) as the target is incorrect, because
> > > > start_index and end_index are page indexes within the file, and part of
> > > > that range may not be backed by pagecache pages at all. We'd better use
> > > > inode->i_data.nrpages as the target.
> > > >
> > >
> > > How does that work if the invalidation is for a subset of the file?
> > >
> >
> > I realized that as well. There are a couple of options to improve it.
> >
> > Option 1, take the min as the target.
> > -                       if (count < (end_index - start_index + 1)) {
> > +                       target = min_t(unsigned long, inode->i_data.nrpages,
> > +                                      end_index - start_index + 1);
> > +                       if (count < target) {
> >                                 lru_add_drain_all();
> >
> > Option 2, change the prototype of invalidate_mapping_pages and then
> > check how many pages were skipped.
> >
> > +struct invalidate_stat {
> > +	unsigned long skipped;		/* how many pages were skipped */
> > +	unsigned long invalidated;	/* how many pages were invalidated */
> > +};
> >
> > -unsigned long invalidate_mapping_pages(struct address_space *mapping,
> > -		pgoff_t start, pgoff_t end);
> > +unsigned long invalidate_mapping_pages(struct address_space *mapping,
> > +		struct invalidate_stat *stat,
> > +		pgoff_t start, pgoff_t end);
> >
>
> That would involve updating each caller and the struct is
> unnecessarily heavy. Create one that returns via **nr_lruvec. For
> invalidate_mapping_pages, pass in NULL as nr_lruvec.  Create a new helper
> for fadvise that accepts nr_lruvec. In the common helper, account for pages
> that are likely on an LRU and count them in nr_lruvec if !NULL. Update
> fadvise to drain only if pages were skipped that were on the lruvec. That
> should also deal with the case where holes have been punched between
> start and end.
>

Good suggestion, thanks Mel.
I will send v2.

Patch

diff --git a/mm/fadvise.c b/mm/fadvise.c
index 0e66f2aaeea3..ec25c91194a3 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -163,7 +163,7 @@  int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 			 * a per-cpu pagevec for a remote CPU. Drain all
 			 * pagevecs and try again.
 			 */
-			if (count < (end_index - start_index + 1)) {
+			if (count < inode->i_data.nrpages) {
 				lru_add_drain_all();
 				invalidate_mapping_pages(mapping, start_index,
 						end_index);