Message ID | 20190620151839.195506-4-zwisler@google.com (mailing list archive)
---|---
State | New, archived
Series | Add dirty range scoping to jbd2
On Thu, Jun 20, 2019 at 09:18:39AM -0600, Ross Zwisler wrote:
> Use the newly introduced jbd2_inode dirty range scoping to prevent us
> from waiting forever when trying to complete a journal transaction.
>
> Signed-off-by: Ross Zwisler <zwisler@google.com>
> Reviewed-by: Jan Kara <jack@suse.cz>
> Cc: stable@vger.kernel.org

Applied, thanks.

					- Ted
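For context, the two jbd2 helpers this patch switches over to take an explicit byte range instead of dirtying the whole inode. A minimal sketch of their prototypes as they can be inferred from the ext4 callers in the diff at the bottom of this page (the authoritative declarations live in the jbd2 patches of this series, which are not shown here):

```c
/* Sketch inferred from the ext4 callers below; see the jbd2 patches in
 * this series for the actual declarations. */
int jbd2_journal_inode_ranged_write(handle_t *handle, struct jbd2_inode *inode,
				    loff_t start_byte, loff_t length);
int jbd2_journal_inode_ranged_wait(handle_t *handle, struct jbd2_inode *inode,
				   loff_t start_byte, loff_t length);
```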
On Mon, Jun 24, 2019 at 02:54:49AM +0800, kbuild test robot wrote:
> Hi Ross,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v5.2-rc6 next-20190621]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url:    https://github.com/0day-ci/linux/commits/Ross-Zwisler/mm-add-filemap_fdatawait_range_keep_errors/20190623-181603
> config: x86_64-rhel-7.6 (attached as .config)
> compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64
>
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
> >> ERROR: "jbd2_journal_inode_ranged_wait" [fs/ext4/ext4.ko] undefined!
> >> ERROR: "jbd2_journal_inode_ranged_write" [fs/ext4/ext4.ko] undefined!

Yep, this is caused by the lack of EXPORT_SYMBOL() calls for these two new
jbd2 functions.  Ted also pointed this out and fixed this up when he was
committing:

https://patchwork.kernel.org/patch/11007139/#22717091

Thank you for the report!
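For reference, the missing exports would look roughly like this: a sketch only, assuming the usual kernel pattern of placing EXPORT_SYMBOL() next to the function definitions in fs/jbd2/journal.c (the exact fixup is the one Ted folded in at the link above):

```c
/* fs/jbd2/journal.c (sketch): export the new ranged helpers so that
 * modular ext4 (fs/ext4/ext4.ko) can link against them. */
EXPORT_SYMBOL(jbd2_journal_inode_ranged_write);
EXPORT_SYMBOL(jbd2_journal_inode_ranged_wait);
```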
```diff
diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
index 75a5309f22315..ef8fcf7d0d3b3 100644
--- a/fs/ext4/ext4_jbd2.h
+++ b/fs/ext4/ext4_jbd2.h
@@ -361,20 +361,20 @@ static inline int ext4_journal_force_commit(journal_t *journal)
 }
 
 static inline int ext4_jbd2_inode_add_write(handle_t *handle,
-						struct inode *inode)
+		struct inode *inode, loff_t start_byte, loff_t length)
 {
 	if (ext4_handle_valid(handle))
-		return jbd2_journal_inode_add_write(handle,
-						    EXT4_I(inode)->jinode);
+		return jbd2_journal_inode_ranged_write(handle,
+				EXT4_I(inode)->jinode, start_byte, length);
 	return 0;
 }
 
 static inline int ext4_jbd2_inode_add_wait(handle_t *handle,
-						struct inode *inode)
+		struct inode *inode, loff_t start_byte, loff_t length)
 {
 	if (ext4_handle_valid(handle))
-		return jbd2_journal_inode_add_wait(handle,
-						   EXT4_I(inode)->jinode);
+		return jbd2_journal_inode_ranged_wait(handle,
+				EXT4_I(inode)->jinode, start_byte, length);
 	return 0;
 }
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c7f77c6430085..27fec5c594459 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -731,10 +731,16 @@ int ext4_map_blocks(handle_t *handle, struct inode *inode,
 		    !(flags & EXT4_GET_BLOCKS_ZERO) &&
 		    !ext4_is_quota_file(inode) &&
 		    ext4_should_order_data(inode)) {
+			loff_t start_byte =
+				(loff_t)map->m_lblk << inode->i_blkbits;
+			loff_t length = (loff_t)map->m_len << inode->i_blkbits;
+
 			if (flags & EXT4_GET_BLOCKS_IO_SUBMIT)
-				ret = ext4_jbd2_inode_add_wait(handle, inode);
+				ret = ext4_jbd2_inode_add_wait(handle, inode,
+						start_byte, length);
 			else
-				ret = ext4_jbd2_inode_add_write(handle, inode);
+				ret = ext4_jbd2_inode_add_write(handle, inode,
+						start_byte, length);
 			if (ret)
 				return ret;
 		}
@@ -4085,7 +4091,8 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 			err = 0;
 		mark_buffer_dirty(bh);
 		if (ext4_should_order_data(inode))
-			err = ext4_jbd2_inode_add_write(handle, inode);
+			err = ext4_jbd2_inode_add_write(handle, inode, from,
+					length);
 	}
 
 unlock:
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 1083a9f3f16a1..c7ded4e2adff5 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -390,7 +390,8 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 
 	/* Even in case of data=writeback it is reasonable to pin
 	 * inode to transaction, to prevent unexpected data loss */
-	*err = ext4_jbd2_inode_add_write(handle, orig_inode);
+	*err = ext4_jbd2_inode_add_write(handle, orig_inode,
+			(loff_t)orig_page_offset << PAGE_SHIFT, replaced_size);
 
 unlock_pages:
 	unlock_page(pagep[0]);
```
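The ext4_map_blocks() hunk above derives the dirty byte range from the mapped logical block range by shifting by the filesystem block-size bits. A standalone userspace sketch of that arithmetic, with made-up values purely for illustration (not part of the patch):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the conversion done in the ext4_map_blocks() hunk:
 * start_byte = m_lblk << i_blkbits, length = m_len << i_blkbits.
 * The values below are hypothetical. */
int main(void)
{
	uint32_t m_lblk = 100;        /* first logical block of the mapping */
	uint32_t m_len = 8;           /* number of blocks in the mapping */
	unsigned int i_blkbits = 12;  /* 4 KiB filesystem blocks */

	int64_t start_byte = (int64_t)m_lblk << i_blkbits;
	int64_t length = (int64_t)m_len << i_blkbits;

	/* This half-open byte interval is what the ranged jbd2 helpers scope
	 * the journal's data write/wait to. */
	printf("dirty range: [%lld, %lld)\n",
	       (long long)start_byte, (long long)(start_byte + length));
	return 0;
}
```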