mm: fadvise: avoid expensive remote LRU cache draining after FADV_DONTNEED

Submitter Johannes Weiner
Date Dec. 10, 2016, 5:26 p.m.
Message ID <20161210172658.5182-1-hannes@cmpxchg.org>
Permalink /patch/9469387/
State New

Comments

Johannes Weiner - Dec. 10, 2016, 5:26 p.m.
When FADV_DONTNEED cannot drop all pages in the range, it observes
that some pages might still be on per-cpu LRU caches after recent
instantiation and so initiates remote calls to all CPUs to flush their
local caches. However, in most cases, the fadvise happens from the
same context that instantiated the pages, and any pre-LRU pages in the
specified range are most likely sitting on the local CPU's LRU cache,
and so in many cases this results in unnecessary remote calls, which,
in a loaded system, can hold up the fadvise() call significantly.

Try to avoid the remote call by flushing the local LRU cache before
even attempting to invalidate anything. It's a cheap operation, and
the local LRU cache is the most likely to hold any pre-LRU pages in
the specified fadvise range.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/fadvise.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
Vlastimil Babka - Dec. 12, 2016, 9:21 a.m.
On 12/10/2016 06:26 PM, Johannes Weiner wrote:
> When FADV_DONTNEED cannot drop all pages in the range, it observes
> that some pages might still be on per-cpu LRU caches after recent
> instantiation and so initiates remote calls to all CPUs to flush their
> local caches. However, in most cases, the fadvise happens from the
> same context that instantiated the pages, and any pre-LRU pages in the
> specified range are most likely sitting on the local CPU's LRU cache,
> and so in many cases this results in unnecessary remote calls, which,
> in a loaded system, can hold up the fadvise() call significantly.

Got any numbers for this part?

> Try to avoid the remote call by flushing the local LRU cache before
> even attempting to invalidate anything. It's a cheap operation, and
> the local LRU cache is the most likely to hold any pre-LRU pages in
> the specified fadvise range.

Anyway it looks like things can't be worse after this patch, so...

> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/fadvise.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/mm/fadvise.c b/mm/fadvise.c
> index 6c707bfe02fd..a43013112581 100644
> --- a/mm/fadvise.c
> +++ b/mm/fadvise.c
> @@ -139,7 +139,20 @@ SYSCALL_DEFINE4(fadvise64_64, int, fd, loff_t, offset, loff_t, len, int, advice)
>  		}
>
>  		if (end_index >= start_index) {
> -			unsigned long count = invalidate_mapping_pages(mapping,
> +			unsigned long count;
> +
> +			/*
> +			 * It's common to FADV_DONTNEED right after
> +			 * the read or write that instantiates the
> +			 * pages, in which case there will be some
> +			 * sitting on the local LRU cache. Try to
> +			 * avoid the expensive remote drain and the
> +			 * second cache tree walk below by flushing
> +			 * them out right away.
> +			 */
> +			lru_add_drain();
> +
> +			count = invalidate_mapping_pages(mapping,
>  						start_index, end_index);
>
>  			/*
>
Mel Gorman - Dec. 12, 2016, 9:51 a.m.
On Sat, Dec 10, 2016 at 12:26:58PM -0500, Johannes Weiner wrote:
> When FADV_DONTNEED cannot drop all pages in the range, it observes
> that some pages might still be on per-cpu LRU caches after recent
> instantiation and so initiates remote calls to all CPUs to flush their
> local caches. However, in most cases, the fadvise happens from the
> same context that instantiated the pages, and any pre-LRU pages in the
> specified range are most likely sitting on the local CPU's LRU cache,
> and so in many cases this results in unnecessary remote calls, which,
> in a loaded system, can hold up the fadvise() call significantly.
> 
> Try to avoid the remote call by flushing the local LRU cache before
> even attempting to invalidate anything. It's a cheap operation, and
> the local LRU cache is the most likely to hold any pre-LRU pages in
> the specified fadvise range.
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Mel Gorman <mgorman@suse.de>
Johannes Weiner - Dec. 12, 2016, 3:55 p.m.
On Mon, Dec 12, 2016 at 10:21:24AM +0100, Vlastimil Babka wrote:
> On 12/10/2016 06:26 PM, Johannes Weiner wrote:
> > When FADV_DONTNEED cannot drop all pages in the range, it observes
> > that some pages might still be on per-cpu LRU caches after recent
> > instantiation and so initiates remote calls to all CPUs to flush their
> > local caches. However, in most cases, the fadvise happens from the
> > same context that instantiated the pages, and any pre-LRU pages in the
> > specified range are most likely sitting on the local CPU's LRU cache,
> > and so in many cases this results in unnecessary remote calls, which,
> > in a loaded system, can hold up the fadvise() call significantly.
> 
> Got any numbers for this part?

I didn't record it in the extreme case we observed, unfortunately. We
had a slow-to-respond system and noticed it spending seconds in
lru_add_drain_all() after fadvise calls, and this patch came out of
thinking about the code and how we commonly call FADV_DONTNEED.

FWIW, I wrote a silly directory tree walker/searcher that recurses
through /usr to read and FADV_DONTNEED each file it finds. On a
2-socket machine with 40 hyperthreads, over 1% of the time is spent
in lru_add_drain_all(). With the patch, that cost is gone; the local
drain cost shows up at 0.09%.

> > Try to avoid the remote call by flushing the local LRU cache before
> > even attempting to invalidate anything. It's a cheap operation, and
> > the local LRU cache is the most likely to hold any pre-LRU pages in
> > the specified fadvise range.
> 
> Anyway it looks like things can't be worse after this patch, so...
> 
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks!
Vlastimil Babka - Dec. 13, 2016, 12:32 p.m.
On 12/12/2016 04:55 PM, Johannes Weiner wrote:
> On Mon, Dec 12, 2016 at 10:21:24AM +0100, Vlastimil Babka wrote:
>> On 12/10/2016 06:26 PM, Johannes Weiner wrote:
>>> When FADV_DONTNEED cannot drop all pages in the range, it observes
>>> that some pages might still be on per-cpu LRU caches after recent
>>> instantiation and so initiates remote calls to all CPUs to flush their
>>> local caches. However, in most cases, the fadvise happens from the
>>> same context that instantiated the pages, and any pre-LRU pages in the
>>> specified range are most likely sitting on the local CPU's LRU cache,
>>> and so in many cases this results in unnecessary remote calls, which,
>>> in a loaded system, can hold up the fadvise() call significantly.
>>
>> Got any numbers for this part?
>
> I didn't record it in the extreme case we observed, unfortunately. We
> had a slow-to-respond system and noticed it spending seconds in
> lru_add_drain_all() after fadvise calls, and this patch came out of
> thinking about the code and how we commonly call FADV_DONTNEED.
>
> FWIW, I wrote a silly directory tree walker/searcher that recurses
> through /usr to read and FADV_DONTNEED each file it finds. On a
> 2-socket machine with 40 hyperthreads, over 1% of the time is spent
> in lru_add_drain_all(). With the patch, that cost is gone; the local
> drain cost shows up at 0.09%.

Thanks, worth adding to changelog :)

Vlastimil

Patch

diff --git a/mm/fadvise.c b/mm/fadvise.c
index 6c707bfe02fd..a43013112581 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -139,7 +139,20 @@ SYSCALL_DEFINE4(fadvise64_64, int, fd, loff_t, offset, loff_t, len, int, advice)
 		}
 
 		if (end_index >= start_index) {
-			unsigned long count = invalidate_mapping_pages(mapping,
+			unsigned long count;
+
+			/*
+			 * It's common to FADV_DONTNEED right after
+			 * the read or write that instantiates the
+			 * pages, in which case there will be some
+			 * sitting on the local LRU cache. Try to
+			 * avoid the expensive remote drain and the
+			 * second cache tree walk below by flushing
+			 * them out right away.
+			 */
+			lru_add_drain();
+
+			count = invalidate_mapping_pages(mapping,
 						start_index, end_index);
 
 			/*