mm/filemap: set folio->mapping to NULL before xas_store()

Message ID 20240322210455.3738-1-soma.nakata01@gmail.com (mailing list archive)
State New
Series mm/filemap: set folio->mapping to NULL before xas_store()

Commit Message

Soma March 22, 2024, 9:04 p.m. UTC
Functions such as __filemap_get_folio() check whether a folio has been
truncated by testing its mapping field. Setting this field to NULL
earlier therefore avoids unnecessary operations on folios that have
already been removed.

Signed-off-by: Soma Nakata <soma.nakata01@gmail.com>
---
 mm/filemap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
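
For context, the truncation check the commit message refers to follows this
general pattern on the lookup side. Below is a simplified sketch modeled on
filemap_get_entry() and its callers such as __filemap_get_folio() in kernels
around v6.8; it is abridged, the wrapper name is hypothetical, and it is not
verbatim kernel code:

	/*
	 * Sketch of the reader-side truncation check: the lookup takes the
	 * folio lock and then rechecks ->mapping, since truncation clears it.
	 * lookup_sketch() is a hypothetical name; the helpers are real APIs.
	 */
	static struct folio *lookup_sketch(struct address_space *mapping,
					   pgoff_t index)
	{
		struct folio *folio;

	repeat:
		folio = filemap_get_entry(mapping, index);
		if (!folio || xa_is_value(folio))
			return NULL;

		folio_lock(folio);
		/* Truncated while we waited for the lock?  Retry. */
		if (unlikely(folio->mapping != mapping)) {
			folio_unlock(folio);
			folio_put(folio);
			goto repeat;
		}
		return folio;
	}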

Comments

Andrew Morton March 26, 2024, 9:05 p.m. UTC | #1
On Sat, 23 Mar 2024 06:04:54 +0900 Soma Nakata <soma.nakata01@gmail.com> wrote:

> Functions such as __filemap_get_folio() check whether a folio has been
> truncated by testing its mapping field. Setting this field to NULL
> earlier therefore avoids unnecessary operations on folios that have
> already been removed.
> 
> ...
>
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
>  
>  	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>  
> +	folio->mapping = NULL;
> +	/* Leave page->index set: truncation lookup relies upon it */
> +
>  	xas_store(&xas, shadow);
>  	xas_init_marks(&xas);
>  
> -	folio->mapping = NULL;
> -	/* Leave page->index set: truncation lookup relies upon it */
>  	mapping->nrpages -= nr;
>  }

Seems at least harmless, but I wonder if it can really make any
difference.  Don't readers of folio->mapping lock the folio first?
Matthew Wilcox March 26, 2024, 10:50 p.m. UTC | #2
On Tue, Mar 26, 2024 at 02:05:33PM -0700, Andrew Morton wrote:
> On Sat, 23 Mar 2024 06:04:54 +0900 Soma Nakata <soma.nakata01@gmail.com> wrote:
> > Functions such as __filemap_get_folio() check whether a folio has been
> > truncated by testing its mapping field. Setting this field to NULL
> > earlier therefore avoids unnecessary operations on folios that have
> > already been removed.
> > 
> > ...
> >
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
> >  
> >  	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> >  
> > +	folio->mapping = NULL;
> > +	/* Leave page->index set: truncation lookup relies upon it */
> > +
> >  	xas_store(&xas, shadow);
> >  	xas_init_marks(&xas);
> >  
> > -	folio->mapping = NULL;
> > -	/* Leave page->index set: truncation lookup relies upon it */
> >  	mapping->nrpages -= nr;
> >  }
> 
> Seems at least harmless, but I wonder if it can really make any
> difference.  Don't readers of folio->mapping lock the folio first?

I can't think of anywhere that doesn't ... most of the places that check
folio->mapping have "goto unlock" as the very next line.  I don't think
this patch accomplishes anything.
Soma March 26, 2024, 10:52 p.m. UTC | #3
On Wed, Mar 27, 2024 at 6:05 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Sat, 23 Mar 2024 06:04:54 +0900 Soma Nakata <soma.nakata01@gmail.com> wrote:
>
> > Functions such as __filemap_get_folio() check whether a folio has been
> > truncated by testing its mapping field. Setting this field to NULL
> > earlier therefore avoids unnecessary operations on folios that have
> > already been removed.
> >
> > ...
> >
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
> >
> >       VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> >
> > +     folio->mapping = NULL;
> > +     /* Leave page->index set: truncation lookup relies upon it */
> > +
> >       xas_store(&xas, shadow);
> >       xas_init_marks(&xas);
> >
> > -     folio->mapping = NULL;
> > -     /* Leave page->index set: truncation lookup relies upon it */
> >       mapping->nrpages -= nr;
> >  }
>
> Seems at least harmless, but I wonder if it can really make any
> difference.  Don't readers of folio->mapping lock the folio first?

Yes, readers do lock the folio. page_cache_delete() is called only from
__filemap_remove_folio(), whose comment says the caller must either hold
the folio lock or otherwise make sure the usage is safe. In the latter
case this patch would improve efficiency a little; however, I found that
no caller actually relies on the latter case, so please either discard
the patch or apply it as a cleanup that makes the order of operations in
page_cache_delete() match page_cache_delete_batch().
Thanks,
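
For reference, the ordering in page_cache_delete_batch() that this patch
would match looks roughly like the following. This is an abridged sketch of
the function as it reads in kernels around v6.8, with the shadow-entry and
index sanity checks elided; it is not verbatim:

	/*
	 * Abridged sketch of page_cache_delete_batch(): ->mapping is
	 * cleared before the entry is erased from the xarray, which is
	 * the ordering the patch gives page_cache_delete() as well.
	 */
	static void page_cache_delete_batch(struct address_space *mapping,
					    struct folio_batch *fbatch)
	{
		XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
		long total_pages = 0;
		int i = 0;
		struct folio *folio;

		mapping_set_update(&xas, mapping);
		xas_for_each(&xas, folio, ULONG_MAX) {
			if (i >= folio_batch_count(fbatch))
				break;
			/* (shadow-entry and index sanity checks elided) */
			folio->mapping = NULL;
			/* Leave folio->index set: truncation lookup relies upon it */
			i++;
			xas_store(&xas, NULL);
			total_pages += folio_nr_pages(folio);
		}
		mapping->nrpages -= total_pages;
	}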

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 2723104cc06a..79bac7c00084 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -139,11 +139,12 @@ static void page_cache_delete(struct address_space *mapping,
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
+	folio->mapping = NULL;
+	/* Leave page->index set: truncation lookup relies upon it */
+
 	xas_store(&xas, shadow);
 	xas_init_marks(&xas);
 
-	folio->mapping = NULL;
-	/* Leave page->index set: truncation lookup relies upon it */
 	mapping->nrpages -= nr;
 }
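
For clarity, here is how page_cache_delete() would read with the patch
applied. The unchanged surrounding lines are reconstructed from mm/filemap.c
in kernels around v6.8 and may not match the tree exactly:

	static void page_cache_delete(struct address_space *mapping,
				      struct folio *folio, void *shadow)
	{
		XA_STATE(xas, &mapping->i_pages, folio->index);
		long nr = 1;

		mapping_set_update(&xas, mapping);

		xas_set_order(&xas, folio->index, folio_order(folio));
		nr = folio_nr_pages(folio);

		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

		folio->mapping = NULL;		/* moved up by this patch */
		/* Leave page->index set: truncation lookup relies upon it */

		xas_store(&xas, shadow);	/* store shadow entry (or NULL) */
		xas_init_marks(&xas);

		mapping->nrpages -= nr;
	}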