[04/10] mm/truncate: Replace page_mapped() call in invalidate_inode_page()

Message ID 20220214200017.3150590-5-willy@infradead.org (mailing list archive)
State New
Series Various fixes around invalidate_page()

Commit Message

Matthew Wilcox Feb. 14, 2022, 8 p.m. UTC
folio_mapped() is expensive because it has to check each page's mapcount
field.  A cheaper check is whether there are any extra references to
the page, other than the one we own and the ones held by the page cache.
The call to remove_mapping() will fail in any case if it cannot freeze
the refcount, but failing here avoids cycling the i_pages spinlock.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/truncate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
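
The check added here relies on a simple piece of reference accounting: the
page cache holds one reference per page in the folio, and the caller of
invalidate_inode_page() holds one more, so anything above
folio_nr_pages(folio) + 1 means someone else (typically a mapping) is still
using the folio.  The stand-alone C program below is only a model of that
arithmetic; it is not kernel code, and can_invalidate() is a made-up name
for this sketch.

/* refcount_model.c - toy model of the new invalidate_inode_page() check. */
#include <stdbool.h>
#include <stdio.h>

/* Bail out (return false) if the folio has more references than the
 * page cache (one per page) plus the single caller can account for. */
static bool can_invalidate(unsigned long refcount, unsigned long nr_pages)
{
	return refcount <= nr_pages + 1;
}

int main(void)
{
	/* Order-0 folio: 1 page-cache ref + 1 caller ref, nothing else. */
	printf("unmapped order-0: %d\n", can_invalidate(2, 1));	/* 1 */
	/* Same folio mapped once: the mapping adds a reference. */
	printf("mapped order-0:   %d\n", can_invalidate(3, 1));	/* 0 */
	/* Order-2 folio (4 pages): 4 page-cache refs + 1 caller ref. */
	printf("unmapped order-2: %d\n", can_invalidate(5, 4));	/* 1 */
	return 0;
}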

Comments

Christoph Hellwig Feb. 15, 2022, 7:19 a.m. UTC | #1
On Mon, Feb 14, 2022 at 08:00:11PM +0000, Matthew Wilcox (Oracle) wrote:
> folio_mapped() is expensive because it has to check each page's mapcount
> field.  A cheaper check is whether there are any extra references to
> the page, other than the one we own and the ones held by the page cache.
> The call to remove_mapping() will fail in any case if it cannot freeze
> the refcount, but failing here avoids cycling the i_pages spinlock.

I wonder if something like this should also be in a comment near
the check in the code.
Miaohe Lin Feb. 15, 2022, 8:32 a.m. UTC | #2
On 2022/2/15 4:00, Matthew Wilcox (Oracle) wrote:
> folio_mapped() is expensive because it has to check each page's mapcount
> field.  A cheaper check is whether there are any extra references to
> the page, other than the one we own and the ones held by the page cache.
> The call to remove_mapping() will fail in any case if it cannot freeze
> the refcount, but failing here avoids cycling the i_pages spinlock.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

LGTM. Thanks.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

>  mm/truncate.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/truncate.c b/mm/truncate.c
> index b73c30c95cd0..d67fa8871b75 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -287,7 +287,7 @@ int invalidate_inode_page(struct page *page)
>  		return 0;
>  	if (folio_test_dirty(folio) || folio_test_writeback(folio))
>  		return 0;
> -	if (page_mapped(page))
> +	if (folio_ref_count(folio) > folio_nr_pages(folio) + 1)
>  		return 0;
>  	if (folio_has_private(folio) && !filemap_release_folio(folio, 0))
>  		return 0;
>
Matthew Wilcox Feb. 15, 2022, 8:12 p.m. UTC | #3
On Mon, Feb 14, 2022 at 11:19:22PM -0800, Christoph Hellwig wrote:
> On Mon, Feb 14, 2022 at 08:00:11PM +0000, Matthew Wilcox (Oracle) wrote:
> > folio_mapped() is expensive because it has to check each page's mapcount
> > field.  A cheaper check is whether there are any extra references to
> > the page, other than the one we own and the ones held by the page cache.
> > The call to remove_mapping() will fail in any case if it cannot freeze
> > the refcount, but failing here avoids cycling the i_pages spinlock.
> 
> I wonder if something like this should also be in a comment near
> the check in the code.

        /* The refcount will be elevated if any page in the folio is mapped */

is what I've added for now.
Matthew Wilcox Feb. 25, 2022, 1:31 a.m. UTC | #4
On Mon, Feb 14, 2022 at 08:00:11PM +0000, Matthew Wilcox (Oracle) wrote:
> folio_mapped() is expensive because it has to check each page's mapcount
> field.  A cheaper check is whether there are any extra references to
> the page, other than the one we own and the ones held by the page cache.
> The call to remove_mapping() will fail in any case if it cannot freeze
> the refcount, but failing here avoids cycling the i_pages spinlock.

This is the patch that's causing ltp's readahead02 test to break.
Haven't dug into why yet, but it happens without large folios, so
I got something wrong.

> diff --git a/mm/truncate.c b/mm/truncate.c
> index b73c30c95cd0..d67fa8871b75 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -287,7 +287,7 @@ int invalidate_inode_page(struct page *page)
>  		return 0;
>  	if (folio_test_dirty(folio) || folio_test_writeback(folio))
>  		return 0;
> -	if (page_mapped(page))
> +	if (folio_ref_count(folio) > folio_nr_pages(folio) + 1)
>  		return 0;
>  	if (folio_has_private(folio) && !filemap_release_folio(folio, 0))
>  		return 0;
> -- 
> 2.34.1
>
Matthew Wilcox Feb. 25, 2022, 3:27 a.m. UTC | #5
On Fri, Feb 25, 2022 at 01:31:09AM +0000, Matthew Wilcox wrote:
> On Mon, Feb 14, 2022 at 08:00:11PM +0000, Matthew Wilcox (Oracle) wrote:
> > folio_mapped() is expensive because it has to check each page's mapcount
> > field.  A cheaper check is whether there are any extra references to
> > the page, other than the one we own and the ones held by the page cache.
> > The call to remove_mapping() will fail in any case if it cannot freeze
> > the refcount, but failing here avoids cycling the i_pages spinlock.
> 
> This is the patch that's causing ltp's readahead02 test to break.
> Haven't dug into why yet, but it happens without large folios, so
> I got something wrong.

This fixes it:

+++ b/mm/truncate.c
@@ -288,7 +288,8 @@ int invalidate_inode_page(struct page *page)
        if (folio_test_dirty(folio) || folio_test_writeback(folio))
                return 0;
        /* The refcount will be elevated if any page in the folio is mapped */
-       if (folio_ref_count(folio) > folio_nr_pages(folio) + 1)
+       if (folio_ref_count(folio) >
+                       folio_nr_pages(folio) + 1 + folio_has_private(folio))
                return 0;
        if (folio_has_private(folio) && !filemap_release_folio(folio, 0))
                return 0;

Too late for today's -next, but I'll push it out tomorrow.
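
To spell out the arithmetic behind the fix (an illustrative example, assuming
an order-0 folio with private data such as buffer heads attached, which holds
its own reference):

    refcount      = 1 (page cache) + 1 (caller) + 1 (private data)        = 3
    old threshold = folio_nr_pages(folio) + 1                             = 2   -> 3 > 2, bail out
    new threshold = folio_nr_pages(folio) + 1 + folio_has_private(folio)  = 3   -> 3 > 3 is false, proceed

With the old threshold, any clean page carrying private data looked "in use"
and was never invalidated, which is presumably what readahead02 tripped over.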

Patch

diff --git a/mm/truncate.c b/mm/truncate.c
index b73c30c95cd0..d67fa8871b75 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -287,7 +287,7 @@ int invalidate_inode_page(struct page *page)
 		return 0;
 	if (folio_test_dirty(folio) || folio_test_writeback(folio))
 		return 0;
-	if (page_mapped(page))
+	if (folio_ref_count(folio) > folio_nr_pages(folio) + 1)
 		return 0;
 	if (folio_has_private(folio) && !filemap_release_folio(folio, 0))
 		return 0;