From patchwork Sun Jan 8 16:56:41 2023
X-Patchwork-Submitter: Christoph Hellwig <hch@lst.de>
X-Patchwork-Id: 13092620
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton, Chris Mason, Josef Bacik, David Sterba, Dave Kleikamp,
 Mark Fasheh, Joel Becker, Joseph Qi, Evgeniy Dushistov,
 "Matthew Wilcox (Oracle)"
Cc: linux-btrfs@vger.kernel.org, jfs-discussion@lists.sourceforge.net,
 ocfs2-devel@oss.oracle.com, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH 3/7] minix: don't flush page immediately for DIRSYNC directories
Date: Sun, 8 Jan 2023 17:56:41 +0100
Message-Id: <20230108165645.381077-4-hch@lst.de>
In-Reply-To: <20230108165645.381077-1-hch@lst.de>
References: <20230108165645.381077-1-hch@lst.de>
List-ID: <linux-btrfs.vger.kernel.org>

We do not need to write out modified directory blocks immediately when
modifying them while the page is locked.  It is enough to do the flush
somewhat later, which has the added benefit that the inode times can be
flushed as well.  It also allows us to stop depending on the
write_one_page() function.

Ported from an ext2 patch by Jan Kara.
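In condensed form, the new scheme looks like the sketch below (for
orientation only; the authoritative changes are in the diff that
follows).  dir_commit_chunk() now merely dirties and unlocks the page,
and the new minix_handle_dirsync() helper performs the deferred flush
of both the directory pages and the inode:

/*
 * Condensed sketch mirroring the diff below, not the literal patch
 * text; see the actual changes to fs/minix/dir.c further down.
 */
static void dir_commit_chunk(struct page *page, loff_t pos, unsigned len)
{
	struct address_space *mapping = page->mapping;
	struct inode *dir = mapping->host;

	/* Dirty the page and grow i_size if needed, but write nothing yet. */
	block_write_end(NULL, mapping, pos, len, len, page, NULL);
	if (pos + len > dir->i_size) {
		i_size_write(dir, pos + len);
		mark_inode_dirty(dir);
	}
	unlock_page(page);	/* no more writeback under the page lock */
}

static int minix_handle_dirsync(struct inode *dir)
{
	int err;

	/* Flush the dirtied directory pages first ... */
	err = filemap_write_and_wait(dir->i_mapping);
	if (!err)
		/* ... then the inode itself, so the updated times go out too. */
		err = sync_inode_metadata(dir, 1);
	return err;
}

Each directory-modifying helper calls minix_handle_dirsync() once at
the end, after the inode times have been updated and the inode marked
dirty, so the data and metadata writeback happens in one place.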
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/minix/dir.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/fs/minix/dir.c b/fs/minix/dir.c
index dcfe5b25378b54..d48b09271dc48f 100644
--- a/fs/minix/dir.c
+++ b/fs/minix/dir.c
@@ -46,21 +46,27 @@ minix_last_byte(struct inode *inode, unsigned long page_nr)
 	return last_byte;
 }
 
-static int dir_commit_chunk(struct page *page, loff_t pos, unsigned len)
+static void dir_commit_chunk(struct page *page, loff_t pos, unsigned len)
 {
 	struct address_space *mapping = page->mapping;
 	struct inode *dir = mapping->host;
-	int err = 0;
+
 	block_write_end(NULL, mapping, pos, len, len, page, NULL);
 
 	if (pos+len > dir->i_size) {
 		i_size_write(dir, pos+len);
 		mark_inode_dirty(dir);
 	}
-	if (IS_DIRSYNC(dir))
-		err = write_one_page(page);
-	else
-		unlock_page(page);
+	unlock_page(page);
+}
+
+static int minix_handle_dirsync(struct inode *dir)
+{
+	int err;
+
+	err = filemap_write_and_wait(dir->i_mapping);
+	if (!err)
+		err = sync_inode_metadata(dir, 1);
 	return err;
 }
 
@@ -274,9 +280,10 @@ int minix_add_link(struct dentry *dentry, struct inode *inode)
 		memset (namx + namelen, 0, sbi->s_dirsize - namelen - 2);
 		de->inode = inode->i_ino;
 	}
-	err = dir_commit_chunk(page, pos, sbi->s_dirsize);
+	dir_commit_chunk(page, pos, sbi->s_dirsize);
 	dir->i_mtime = dir->i_ctime = current_time(dir);
 	mark_inode_dirty(dir);
+	err = minix_handle_dirsync(dir);
 out_put:
 	dir_put_page(page);
 out:
@@ -302,13 +309,15 @@ int minix_delete_entry(struct minix_dir_entry *de, struct page *page)
 			((minix3_dirent *) de)->inode = 0;
 		else
 			de->inode = 0;
-		err = dir_commit_chunk(page, pos, len);
+		dir_commit_chunk(page, pos, len);
 	} else {
 		unlock_page(page);
 	}
 	dir_put_page(page);
 	inode->i_ctime = inode->i_mtime = current_time(inode);
 	mark_inode_dirty(inode);
+	if (!err)
+		err = minix_handle_dirsync(inode);
 	return err;
 }
 
@@ -349,7 +358,8 @@ int minix_make_empty(struct inode *inode, struct inode *dir)
 	}
 	kunmap_atomic(kaddr);
 
-	err = dir_commit_chunk(page, 0, 2 * sbi->s_dirsize);
+	dir_commit_chunk(page, 0, 2 * sbi->s_dirsize);
+	err = minix_handle_dirsync(inode);
 fail:
 	put_page(page);
 	return err;
@@ -426,7 +436,7 @@ void minix_set_link(struct minix_dir_entry *de, struct page *page,
 			((minix3_dirent *) de)->inode = inode->i_ino;
 		else
 			de->inode = inode->i_ino;
-		err = dir_commit_chunk(page, pos, sbi->s_dirsize);
+		dir_commit_chunk(page, pos, sbi->s_dirsize);
 	} else {
 		unlock_page(page);
 	}