From patchwork Sun Jan  8 16:56:43 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13092624
From: Christoph Hellwig
To: Andrew Morton, Chris Mason, Josef Bacik, David Sterba, Dave Kleikamp,
    Mark Fasheh, Joel Becker, Joseph Qi, Evgeniy Dushistov,
    "Matthew Wilcox (Oracle)"
Cc: linux-btrfs@vger.kernel.org, jfs-discussion@lists.sourceforge.net,
    ocfs2-devel@oss.oracle.com, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 5/7] ufs: don't flush page immediately for DIRSYNC directories
Date: Sun, 8 Jan 2023 17:56:43 +0100
Message-Id: <20230108165645.381077-6-hch@lst.de>
In-Reply-To: <20230108165645.381077-1-hch@lst.de>
References: <20230108165645.381077-1-hch@lst.de>

We do not need to write out modified directory blocks immediately
while the page is locked.  It is enough to do the flush somewhat
later, which has the added benefit that the inode times can be
flushed as well.  It also allows us to stop depending on the
write_one_page() function.

Ported from an ext2 patch by Jan Kara.
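For illustration only (this sketch is not part of the patch, and
ufs_example_update() is a made-up name): every caller touched below
switches to the same pattern, where ufs_commit_chunk() merely dirties
the block and unlocks the page, and the flush is deferred to
ufs_handle_dirsync() after the inode timestamps have been updated, so
a single writeback pass can cover both the directory data and the
inode:

	static int ufs_example_update(struct inode *dir, struct page *page,
				      loff_t pos, unsigned len)
	{
		/* dirty the directory block and drop the page lock */
		ufs_commit_chunk(page, pos, len);

		/* timestamps can be updated before anything is written */
		dir->i_mtime = dir->i_ctime = current_time(dir);
		mark_inode_dirty(dir);

		/* deferred flush: directory data first, then the inode */
		return ufs_handle_dirsync(dir);
	}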
Signed-off-by: Christoph Hellwig
---
 fs/ufs/dir.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/fs/ufs/dir.c b/fs/ufs/dir.c
index 391efaf1d52897..379d75796a5ce3 100644
--- a/fs/ufs/dir.c
+++ b/fs/ufs/dir.c
@@ -42,11 +42,10 @@ static inline int ufs_match(struct super_block *sb, int len,
 	return !memcmp(name, de->d_name, len);
 }
 
-static int ufs_commit_chunk(struct page *page, loff_t pos, unsigned len)
+static void ufs_commit_chunk(struct page *page, loff_t pos, unsigned len)
 {
 	struct address_space *mapping = page->mapping;
 	struct inode *dir = mapping->host;
-	int err = 0;
 
 	inode_inc_iversion(dir);
 	block_write_end(NULL, mapping, pos, len, len, page, NULL);
@@ -54,10 +53,16 @@ static int ufs_commit_chunk(struct page *page, loff_t pos, unsigned len)
 		i_size_write(dir, pos+len);
 		mark_inode_dirty(dir);
 	}
-	if (IS_DIRSYNC(dir))
-		err = write_one_page(page);
-	else
-		unlock_page(page);
+	unlock_page(page);
+}
+
+static int ufs_handle_dirsync(struct inode *dir)
+{
+	int err;
+
+	err = filemap_write_and_wait(dir->i_mapping);
+	if (!err)
+		err = sync_inode_metadata(dir, 1);
 	return err;
 }
 
@@ -99,11 +104,12 @@ void ufs_set_link(struct inode *dir, struct ufs_dir_entry *de,
 
 	de->d_ino = cpu_to_fs32(dir->i_sb, inode->i_ino);
 	ufs_set_de_type(dir->i_sb, de, inode->i_mode);
 
-	err = ufs_commit_chunk(page, pos, len);
+	ufs_commit_chunk(page, pos, len);
 	ufs_put_page(page);
 	if (update_times)
 		dir->i_mtime = dir->i_ctime = current_time(dir);
 	mark_inode_dirty(dir);
+	ufs_handle_dirsync(dir);
 }
 
@@ -390,10 +396,11 @@ int ufs_add_link(struct dentry *dentry, struct inode *inode)
 	de->d_ino = cpu_to_fs32(sb, inode->i_ino);
 	ufs_set_de_type(sb, de, inode->i_mode);
 
-	err = ufs_commit_chunk(page, pos, rec_len);
+	ufs_commit_chunk(page, pos, rec_len);
 	dir->i_mtime = dir->i_ctime = current_time(dir);
 
 	mark_inode_dirty(dir);
+	err = ufs_handle_dirsync(dir);
 	/* OFFSET_CACHE */
 out_put:
 	ufs_put_page(page);
@@ -531,9 +538,10 @@ int ufs_delete_entry(struct inode *inode, struct ufs_dir_entry *dir,
 	if (pde)
 		pde->d_reclen = cpu_to_fs16(sb, to - from);
 	dir->d_ino = 0;
-	err = ufs_commit_chunk(page, pos, to - from);
+	ufs_commit_chunk(page, pos, to - from);
 	inode->i_ctime = inode->i_mtime = current_time(inode);
 	mark_inode_dirty(inode);
+	err = ufs_handle_dirsync(inode);
 out:
 	ufs_put_page(page);
 	UFSD("EXIT\n");
@@ -579,7 +587,8 @@ int ufs_make_empty(struct inode * inode, struct inode *dir)
 	strcpy (de->d_name, "..");
 	kunmap(page);
 
-	err = ufs_commit_chunk(page, 0, chunk_size);
+	ufs_commit_chunk(page, 0, chunk_size);
+	err = ufs_handle_dirsync(inode);
 fail:
 	put_page(page);
 	return err;