Message ID | 20181115184140.1388751-1-pjaroszynski@nvidia.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | [v2] iomap: get/put the page in iomap_page_create/release() | expand |
The V2 fixes look good to me.

William Kucharski

> On Nov 15, 2018, at 11:41 AM, p.jaroszynski@gmail.com wrote:
>
> Fixes: 82cb14175e7d ("xfs: add support for sub-pagesize writeback without buffer_heads")
> Signed-off-by: Piotr Jaroszynski <pjaroszynski@nvidia.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/iomap.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/fs/iomap.c b/fs/iomap.c
> index 90c2febc93ac..7c369faea1dc 100644
> --- a/fs/iomap.c
> +++ b/fs/iomap.c
> @@ -117,6 +117,12 @@ iomap_page_create(struct inode *inode, struct page *page)
>  	atomic_set(&iop->read_count, 0);
>  	atomic_set(&iop->write_count, 0);
>  	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
> +
> +	/*
> +	 * migrate_page_move_mapping() assumes that pages with private data have
> +	 * their count elevated by 1.
> +	 */
> +	get_page(page);
>  	set_page_private(page, (unsigned long)iop);
>  	SetPagePrivate(page);
>  	return iop;
> @@ -133,6 +139,7 @@ iomap_page_release(struct page *page)
>  	WARN_ON_ONCE(atomic_read(&iop->write_count));
>  	ClearPagePrivate(page);
>  	set_page_private(page, 0);
> +	put_page(page);
>  	kfree(iop);
>  }
>
> --
> 2.11.0.262.g4b0a5b2.dirty
On Thu, 15 Nov 2018 10:41:40 -0800 p.jaroszynski@gmail.com wrote:

> migrate_page_move_mapping() expects pages with private data set to have
> a page_count elevated by 1. This is what used to happen for xfs through
> the buffer_heads code before the switch to iomap in commit 82cb14175e7d
> ("xfs: add support for sub-pagesize writeback without buffer_heads").
> Not having the count elevated causes move_pages() to fail on memory
> mapped files coming from xfs.
>
> Make iomap compatible with the migrate_page_move_mapping() assumption
> by elevating the page count as part of iomap_page_create() and lowering
> it in iomap_page_release().

What are the real-world end-user effects of this bug? Is a -stable
backport warranted?
On 12/3/18 3:22 PM, Andrew Morton wrote:
> On Thu, 15 Nov 2018 10:41:40 -0800 p.jaroszynski@gmail.com wrote:
>
>> migrate_page_move_mapping() expects pages with private data set to have
>> a page_count elevated by 1. This is what used to happen for xfs through
>> the buffer_heads code before the switch to iomap in commit 82cb14175e7d
>> ("xfs: add support for sub-pagesize writeback without buffer_heads").
>> Not having the count elevated causes move_pages() to fail on memory
>> mapped files coming from xfs.
>>
>> Make iomap compatible with the migrate_page_move_mapping() assumption
>> by elevating the page count as part of iomap_page_create() and lowering
>> it in iomap_page_release().
>
> What are the real-world end-user effects of this bug?

It causes the move_pages() syscall to misbehave on memory mapped files
from xfs. It does not move any pages, which I suppose is "just" a perf
issue, but it also ends up returning a positive number, which is out of
spec for the syscall. Talking to Michal Hocko, it sounds like returning
positive numbers might become a necessary update to move_pages() anyway;
see [1].

I only hit this in tests that verify that move_pages() actually moved
the pages. The test was also confused by the positive return from
move_pages() (it was treated as a success, since positive numbers were
not expected and not handled), which made it harder to track down what
was going on.

> Is a -stable backport warranted?

I would say yes, but this is my first kernel contribution, so others
are probably better judges of that.

[1] - https://lkml.kernel.org/r/20181116114955.GJ14706@dhcp22.suse.cz

Thanks,
Piotr
diff --git a/fs/iomap.c b/fs/iomap.c
index 90c2febc93ac..7c369faea1dc 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -117,6 +117,12 @@ iomap_page_create(struct inode *inode, struct page *page)
 	atomic_set(&iop->read_count, 0);
 	atomic_set(&iop->write_count, 0);
 	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
+
+	/*
+	 * migrate_page_move_mapping() assumes that pages with private data have
+	 * their count elevated by 1.
+	 */
+	get_page(page);
 	set_page_private(page, (unsigned long)iop);
 	SetPagePrivate(page);
 	return iop;
@@ -133,6 +139,7 @@ iomap_page_release(struct page *page)
 	WARN_ON_ONCE(atomic_read(&iop->write_count));
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
+	put_page(page);
 	kfree(iop);
 }