Message ID | 20220508061543.318394-1-xiubli@redhat.com (mailing list archive) |
State | New, archived |
Series | ceph: check folio PG_private bit instead of folio->private
On Sun, 2022-05-08 at 14:15 +0800, Xiubo Li wrote:
> The pages in the file mapping maybe reclaimed and reused by other
> subsystems and the page->private maybe used as flags field or
> something else, if later that pages are used by page caches again
> the page->private maybe not cleared as expected.
>
> Here will check the PG_private bit instead of the folio->private.
>
> Cc: stable@vger.kernel.org
> URL: https://tracker.ceph.com/issues/55421
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/addr.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 63b7430e1ce6..1a108f24e7d9 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -85,7 +85,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>  	if (folio_test_dirty(folio)) {
>  		dout("%p dirty_folio %p idx %lu -- already dirty\n",
>  		     mapping->host, folio, folio->index);
> -		VM_BUG_ON_FOLIO(!folio_get_private(folio), folio);
> +		VM_BUG_ON_FOLIO(!folio_test_private(folio), folio);
>  		return false;
>  	}
>
> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>  	 * Reference snap context in folio->private. Also set
>  	 * PagePrivate so that we get invalidate_folio callback.
>  	 */
> -	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
> +	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
>  	folio_attach_private(folio, snapc);
>
>  	return ceph_fscache_dirty_folio(mapping, folio);
> @@ -150,7 +150,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
>  	}
>
>  	WARN_ON(!folio_test_locked(folio));
> -	if (folio_get_private(folio)) {
> +	if (folio_test_private(folio)) {
>  		dout("%p invalidate_folio idx %lu full dirty page\n",
>  		     inode, folio->index);
>

Reviewed-by: Jeff Layton <jlayton@kernel.org>
On Sun, May 08, 2022 at 02:15:43PM +0800, Xiubo Li wrote:
> The pages in the file mapping maybe reclaimed and reused by other
> subsystems and the page->private maybe used as flags field or
> something else, if later that pages are used by page caches again
> the page->private maybe not cleared as expected.
>
> Here will check the PG_private bit instead of the folio->private.

I thought that a patch to set ->private to NULL in the folio code (maybe
in folio_end_private_2()) would make sense.  But then... it probably
wouldn't get accepted as we're probably not supposed to assume these
fields are initialised.

Anyway, thanks Xiubo!

Reviewed-by: Luís Henriques <lhenriques@suse.de>

Cheers,
--
Luís

>
> Cc: stable@vger.kernel.org
> URL: https://tracker.ceph.com/issues/55421
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>  fs/ceph/addr.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 63b7430e1ce6..1a108f24e7d9 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -85,7 +85,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>  	if (folio_test_dirty(folio)) {
>  		dout("%p dirty_folio %p idx %lu -- already dirty\n",
>  		     mapping->host, folio, folio->index);
> -		VM_BUG_ON_FOLIO(!folio_get_private(folio), folio);
> +		VM_BUG_ON_FOLIO(!folio_test_private(folio), folio);
>  		return false;
>  	}
>
> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>  	 * Reference snap context in folio->private. Also set
>  	 * PagePrivate so that we get invalidate_folio callback.
>  	 */
> -	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
> +	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
>  	folio_attach_private(folio, snapc);
>
>  	return ceph_fscache_dirty_folio(mapping, folio);
> @@ -150,7 +150,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
>  	}
>
>  	WARN_ON(!folio_test_locked(folio));
> -	if (folio_get_private(folio)) {
> +	if (folio_test_private(folio)) {
>  		dout("%p invalidate_folio idx %lu full dirty page\n",
>  		     inode, folio->index);
>
> --
> 2.36.0.rc1
>
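For reference, the only code that keeps PG_private and folio->private in step is the attach/detach pair. The sketch below paraphrases the helpers in include/linux/pagemap.h (simplified, not the verbatim kernel source); outside this pairing nothing guarantees that the ->private word holds a meaningful value, which is why testing the flag is the safer check.

	/* Paraphrased sketch of the include/linux/pagemap.h helpers:
	 * attaching data takes a folio reference, stores the pointer and
	 * sets PG_private; detaching reverses all three steps. */
	static inline void folio_attach_private(struct folio *folio, void *data)
	{
		folio_get(folio);
		folio->private = data;
		folio_set_private(folio);
	}

	static inline void *folio_detach_private(struct folio *folio)
	{
		void *data = folio_get_private(folio);

		if (!folio_test_private(folio))
			return NULL;
		folio_clear_private(folio);
		folio->private = NULL;	/* detach does clear the word */
		folio_put(folio);

		return data;
	}

	/* A page that left the page cache, was reused by another subsystem
	 * and later came back never goes through detach, so its ->private
	 * word can hold leftover data while PG_private stays clear. */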
On 5/9/22 5:13 PM, Luís Henriques wrote:
> On Sun, May 08, 2022 at 02:15:43PM +0800, Xiubo Li wrote:
>> The pages in the file mapping maybe reclaimed and reused by other
>> subsystems and the page->private maybe used as flags field or
>> something else, if later that pages are used by page caches again
>> the page->private maybe not cleared as expected.
>>
>> Here will check the PG_private bit instead of the folio->private.
> I thought that a patch to set ->private to NULL in the folio code (maybe
> in folio_end_private_2()) would make sense.  But then... it probably
> wouldn't get accepted as we're probably not supposed to assume these
> fields are initialised.

I'm not sure that, or anything like it, would be the right place to clear
->private. The 'folio' and 'page' structs overlay the same memory, and
'struct page' itself contains a big union, so any of the following fields
can end up writing the word that ->private reads (a rough sketch of the
overlap follows this message):

	unsigned long private;
	unsigned long dma_addr_upper;
	atomic_long_t pp_frag_count;
	atomic_t compound_pincount;
	spinlock_t ptl;

> Anyway, thanks Xiubo!
>
> Reviewed-by: Luís Henriques <lhenriques@suse.de>

Thanks Luis.

--
Xiubo

> Cheers,
> --
> Luís
>
>> Cc: stable@vger.kernel.org
>> URL: https://tracker.ceph.com/issues/55421
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>  fs/ceph/addr.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>> index 63b7430e1ce6..1a108f24e7d9 100644
>> --- a/fs/ceph/addr.c
>> +++ b/fs/ceph/addr.c
>> @@ -85,7 +85,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>>  	if (folio_test_dirty(folio)) {
>>  		dout("%p dirty_folio %p idx %lu -- already dirty\n",
>>  		     mapping->host, folio, folio->index);
>> -		VM_BUG_ON_FOLIO(!folio_get_private(folio), folio);
>> +		VM_BUG_ON_FOLIO(!folio_test_private(folio), folio);
>>  		return false;
>>  	}
>>
>> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>>  	 * Reference snap context in folio->private. Also set
>>  	 * PagePrivate so that we get invalidate_folio callback.
>>  	 */
>> -	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
>> +	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
>>  	folio_attach_private(folio, snapc);
>>
>>  	return ceph_fscache_dirty_folio(mapping, folio);
>> @@ -150,7 +150,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
>>  	}
>>
>>  	WARN_ON(!folio_test_locked(folio));
>> -	if (folio_get_private(folio)) {
>> +	if (folio_test_private(folio)) {
>>  		dout("%p invalidate_folio idx %lu full dirty page\n",
>>  		     inode, folio->index);
>>
>> --
>> 2.36.0.rc1
>>
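To illustrate the overlap Xiubo lists above, here is a heavily trimmed, illustrative sketch. The type name is made up and the real struct page in include/linux/mm_types.h has many more members and different nesting; the point is only that the fields he names share storage with ->private depending on which subsystem last owned the page.

	/* Illustrative only: not the real struct page layout. */
	struct page_private_overlap_sketch {
		unsigned long flags;
		union {
			unsigned long private;		/* page cache / filesystem data */
			unsigned long dma_addr_upper;	/* page_pool, high DMA address bits */
			atomic_long_t pp_frag_count;	/* page_pool fragment refcount */
			atomic_t compound_pincount;	/* compound page pin count */
			spinlock_t ptl;			/* page-table page lock */
		};
	};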
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 63b7430e1ce6..1a108f24e7d9 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -85,7 +85,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 	if (folio_test_dirty(folio)) {
 		dout("%p dirty_folio %p idx %lu -- already dirty\n",
 		     mapping->host, folio, folio->index);
-		VM_BUG_ON_FOLIO(!folio_get_private(folio), folio);
+		VM_BUG_ON_FOLIO(!folio_test_private(folio), folio);
 		return false;
 	}

@@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
 	 * Reference snap context in folio->private. Also set
 	 * PagePrivate so that we get invalidate_folio callback.
 	 */
-	VM_BUG_ON_FOLIO(folio_get_private(folio), folio);
+	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
 	folio_attach_private(folio, snapc);

 	return ceph_fscache_dirty_folio(mapping, folio);
@@ -150,7 +150,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
 	}

 	WARN_ON(!folio_test_locked(folio));
-	if (folio_get_private(folio)) {
+	if (folio_test_private(folio)) {
 		dout("%p invalidate_folio idx %lu full dirty page\n",
 		     inode, folio->index);
The pages in the file mapping maybe reclaimed and reused by other
subsystems and the page->private maybe used as flags field or
something else, if later that pages are used by page caches again
the page->private maybe not cleared as expected.

Here will check the PG_private bit instead of the folio->private.

Cc: stable@vger.kernel.org
URL: https://tracker.ceph.com/issues/55421
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/addr.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
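In short, the change swaps a read of the possibly stale ->private word for a test of the PG_private flag. A simplified sketch of the two helpers involved, paraphrasing include/linux/pagemap.h and the generated page-flags accessors rather than quoting the kernel verbatim:

	/* Simplified sketch, not the exact kernel definitions. */
	static inline void *folio_get_private(struct folio *folio)
	{
		/* Just reads the word; it says nothing about whether ceph
		 * ever attached a snap context to this folio. */
		return folio->private;
	}

	static inline bool folio_test_private(struct folio *folio)
	{
		/* PG_private is only set by folio_attach_private() and
		 * cleared by folio_detach_private(), so it reliably tracks
		 * whether private data is actually attached. */
		return test_bit(PG_private, &folio->flags);
	}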