Message ID | 20210803193134.1198733-1-willy@infradead.org |
---|---|
State | New, archived |
Series | [1/2] iomap: Use kmap_local_page instead of kmap_atomic |
On Tue, Aug 03, 2021 at 08:31:33PM +0100, Matthew Wilcox (Oracle) wrote:
> kmap_atomic() has the side-effect of disabling pagefaults and
> preemption. kmap_local_page() does not do this and is preferred.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Pretty straightforward.
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/buffered-io.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index c1c8cd41ea81..8ee0211bea86 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -223,10 +223,10 @@ static int iomap_read_inline_data(struct inode *inode, struct page *page,
>  	if (poff > 0)
>  		iomap_page_create(inode, page);
>
> -	addr = kmap_atomic(page) + poff;
> +	addr = kmap_local_page(page) + poff;
>  	memcpy(addr, iomap->inline_data, size);
>  	memset(addr + size, 0, PAGE_SIZE - poff - size);
> -	kunmap_atomic(addr);
> +	kunmap_local(addr);
>  	iomap_set_range_uptodate(page, poff, PAGE_SIZE - poff);
>  	return PAGE_SIZE - poff;
>  }
> @@ -682,9 +682,9 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
>  	BUG_ON(!iomap_inline_data_valid(iomap));
>
>  	flush_dcache_page(page);
> -	addr = kmap_atomic(page);
> -	memcpy(iomap_inline_data(iomap, pos), addr + pos, copied);
> -	kunmap_atomic(addr);
> +	addr = kmap_local_page(page) + pos;
> +	memcpy(iomap_inline_data(iomap, pos), addr, copied);
> +	kunmap_local(addr);
>
>  	mark_inode_dirty(inode);
>  	return copied;
> --
> 2.30.2
>
On Thu, Aug 05, 2021 at 10:31:04AM -0700, Darrick J. Wong wrote:
> On Tue, Aug 03, 2021 at 08:31:33PM +0100, Matthew Wilcox (Oracle) wrote:
> > kmap_atomic() has the side-effect of disabling pagefaults and
> > preemption. kmap_local_page() does not do this and is preferred.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>
> Pretty straightforward.
> Reviewed-by: Darrick J. Wong <djwong@kernel.org>
>
> --D
>
> > ---
> >  fs/iomap/buffered-io.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index c1c8cd41ea81..8ee0211bea86 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -223,10 +223,10 @@ static int iomap_read_inline_data(struct inode *inode, struct page *page,
> >  	if (poff > 0)
> >  		iomap_page_create(inode, page);
> >
> > -	addr = kmap_atomic(page) + poff;
> > +	addr = kmap_local_page(page) + poff;

Though now that I think about it: Why does iomap_write_actor still use
copy_page_from_iter_atomic? Can that be converted to use regular
copy_page_from_iter, which at least sometimes uses kmap_local_page?

--D

> >  	memcpy(addr, iomap->inline_data, size);
> >  	memset(addr + size, 0, PAGE_SIZE - poff - size);
> > -	kunmap_atomic(addr);
> > +	kunmap_local(addr);
> >  	iomap_set_range_uptodate(page, poff, PAGE_SIZE - poff);
> >  	return PAGE_SIZE - poff;
> >  }
> > @@ -682,9 +682,9 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
> >  	BUG_ON(!iomap_inline_data_valid(iomap));
> >
> >  	flush_dcache_page(page);
> > -	addr = kmap_atomic(page);
> > -	memcpy(iomap_inline_data(iomap, pos), addr + pos, copied);
> > -	kunmap_atomic(addr);
> > +	addr = kmap_local_page(page) + pos;
> > +	memcpy(iomap_inline_data(iomap, pos), addr, copied);
> > +	kunmap_local(addr);
> >
> >  	mark_inode_dirty(inode);
> >  	return copied;
> > --
> > 2.30.2
> >
On Thu, Aug 05, 2021 at 10:39:03AM -0700, Darrick J. Wong wrote:
> Though now that I think about it: Why does iomap_write_actor still use
> copy_page_from_iter_atomic? Can that be converted to use regular
> copy_page_from_iter, which at least sometimes uses kmap_local_page?

I suspect copy_page_from_iter_atomic() should be converted to use
kmap_local_page(), but I don't know. generic_perform_write() uses
the _atomic() version, so I'm not doing anything different without
understanding more than I currently do.
On Thu, Aug 05 2021 at 19:24, Matthew Wilcox wrote:
> On Thu, Aug 05, 2021 at 10:39:03AM -0700, Darrick J. Wong wrote:
>> Though now that I think about it: Why does iomap_write_actor still use
>> copy_page_from_iter_atomic? Can that be converted to use regular
>> copy_page_from_iter, which at least sometimes uses kmap_local_page?
>
> I suspect copy_page_from_iter_atomic() should be converted to use
> kmap_local_page(), but I don't know. generic_perform_write() uses
> the _atomic() version, so I'm not doing anything different without
> understanding more than I currently do.

Most of the kmap_atomic() usage can be converted to kmap_local(). There
are only a few usage sites which really depend on the implicit preempt
disable.

The reason why we cannot convert the bulk blindly is that quite some
usage sites have user memory access nested inside. As kmap_atomic()
disables preemption and page faults, the error handling needs to be
outside the atomic section, i.e. after kunmap_atomic(). So if you
convert that you have to get rid of that extra error handling and just
use the regular user memory accessors.

IIRC there are a few places which really want pagefaults disabled, but
those do not necessarily need preemption disabled. So they need to be
converted to:

    kmap_local();
    pagefault_disable();
    err = dostuff();
    ....

Hope that helps.

Thanks

        tglx
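[Editor's note: the conversion pattern tglx describes might look roughly like the sketch below. This is not code from the patch; `ubuf` and `len` are hypothetical stand-ins for a nested user-memory access at some call site, and whether page faults must stay disabled depends on that site.]

```c
/*
 * Before: kmap_atomic() implicitly disables preemption and page
 * faults, so a nested user access cannot take a fault here; any
 * fault-retry logic has to live outside the atomic section, i.e.
 * after kunmap_atomic().
 */
addr = kmap_atomic(page);
left = __copy_from_user_inatomic(addr, ubuf, len);
kunmap_atomic(addr);
if (left) {
	/* slow path that is allowed to take the fault */
}

/*
 * After: kmap_local_page() keeps preemption enabled. If this call
 * site genuinely needs page faults off, say so explicitly; otherwise
 * the regular user accessors and their error handling can be used
 * directly inside the mapped section.
 */
addr = kmap_local_page(page);
pagefault_disable();
left = __copy_from_user_inatomic(addr, ubuf, len);
pagefault_enable();
kunmap_local(addr);
```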
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index c1c8cd41ea81..8ee0211bea86 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -223,10 +223,10 @@ static int iomap_read_inline_data(struct inode *inode, struct page *page,
 	if (poff > 0)
 		iomap_page_create(inode, page);
 
-	addr = kmap_atomic(page) + poff;
+	addr = kmap_local_page(page) + poff;
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
-	kunmap_atomic(addr);
+	kunmap_local(addr);
 	iomap_set_range_uptodate(page, poff, PAGE_SIZE - poff);
 	return PAGE_SIZE - poff;
 }
@@ -682,9 +682,9 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page,
 	BUG_ON(!iomap_inline_data_valid(iomap));
 
 	flush_dcache_page(page);
-	addr = kmap_atomic(page);
-	memcpy(iomap_inline_data(iomap, pos), addr + pos, copied);
-	kunmap_atomic(addr);
+	addr = kmap_local_page(page) + pos;
+	memcpy(iomap_inline_data(iomap, pos), addr, copied);
+	kunmap_local(addr);
 
 	mark_inode_dirty(inode);
 	return copied;
kmap_atomic() has the side-effect of disabling pagefaults and
preemption. kmap_local_page() does not do this and is preferred.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)