From patchwork Mon Nov 6 17:38:59 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13447254
From: "Matthew Wilcox (Oracle)"
To: Ryusuke Konishi
Cc: "Matthew Wilcox (Oracle)", linux-nilfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH 31/35] nilfs2: Convert nilfs_add_link() to use a folio
Date: Mon, 6 Nov 2023 17:38:59 +0000
Message-Id: <20231106173903.1734114-32-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20231106173903.1734114-1-willy@infradead.org>
References: <20231106173903.1734114-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Remove six calls to compound_head() by using the folio API.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/nilfs2/dir.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
index 25a468dda0f3..fd4f99a7f402 100644
--- a/fs/nilfs2/dir.c
+++ b/fs/nilfs2/dir.c
@@ -440,30 +440,28 @@ int nilfs_add_link(struct dentry *dentry, struct inode *inode)
 	unsigned int chunk_size = nilfs_chunk_size(dir);
 	unsigned int reclen = NILFS_DIR_REC_LEN(namelen);
 	unsigned short rec_len, name_len;
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	struct nilfs_dir_entry *de;
 	unsigned long npages = dir_pages(dir);
 	unsigned long n;
-	char *kaddr;
-	unsigned int from, to;
+	size_t from, to;
 	int err;
 
 	/*
 	 * We take care of directory expansion in the same loop.
-	 * This code plays outside i_size, so it locks the page
+	 * This code plays outside i_size, so it locks the folio
 	 * to protect that region.
 	 */
 	for (n = 0; n <= npages; n++) {
+		char *kaddr = nilfs_get_folio(dir, n, &folio);
 		char *dir_end;
 
-		kaddr = nilfs_get_page(dir, n, &page);
-		err = PTR_ERR(kaddr);
 		if (IS_ERR(kaddr))
-			goto out;
-		lock_page(page);
+			return PTR_ERR(kaddr);
+		folio_lock(folio);
 		dir_end = kaddr + nilfs_last_byte(dir, n);
 		de = (struct nilfs_dir_entry *)kaddr;
-		kaddr += PAGE_SIZE - reclen;
+		kaddr += folio_size(folio) - reclen;
 		while ((char *)de <= kaddr) {
 			if ((char *)de == dir_end) {
 				/* We hit i_size */
@@ -490,16 +488,16 @@ int nilfs_add_link(struct dentry *dentry, struct inode *inode)
 				goto got_it;
 			de = (struct nilfs_dir_entry *)((char *)de + rec_len);
 		}
-		unlock_page(page);
-		unmap_and_put_page(page, kaddr);
+		folio_unlock(folio);
+		folio_release_kmap(folio, kaddr);
 	}
 	BUG();
 	return -EINVAL;
 
 got_it:
-	from = offset_in_page(de);
+	from = offset_in_folio(folio, de);
 	to = from + rec_len;
-	err = nilfs_prepare_chunk(page, from, to);
+	err = nilfs_prepare_chunk(&folio->page, from, to);
 	if (err)
 		goto out_unlock;
 	if (de->inode) {
@@ -514,16 +512,15 @@ int nilfs_add_link(struct dentry *dentry, struct inode *inode)
 	memcpy(de->name, name, namelen);
 	de->inode = cpu_to_le64(inode->i_ino);
 	nilfs_set_de_type(de, inode);
-	nilfs_commit_chunk(page, page->mapping, from, to);
+	nilfs_commit_chunk(&folio->page, folio->mapping, from, to);
 	inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
 	nilfs_mark_inode_dirty(dir);
 	/* OFFSET_CACHE */
 out_put:
-	unmap_and_put_page(page, de);
-out:
+	folio_release_kmap(folio, de);
 	return err;
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 	goto out_put;
 }
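
For readers unfamiliar with the conversion pattern, here is a minimal sketch (not nilfs2 code) of how the folio helpers used in this patch fit together: kmap_local_folio()/folio_release_kmap() for mapping and releasing a directory block, folio_lock()/folio_unlock() instead of lock_page()/unlock_page(), and folio_size() instead of assuming PAGE_SIZE. The point of the conversion is that the page-based wrappers each call compound_head() to find the folio from the page, while operating on the folio directly does not. The names example_get_folio() and example_scan_block() are hypothetical stand-ins for the filesystem's own helpers (such as nilfs_get_folio()); only the generic mm/highmem API calls below are real.

/*
 * Simplified illustration of the page -> folio pattern, under the
 * assumption of a plain pagecache-backed directory block.
 */
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

/* Look up one directory block and return it kmapped. */
static void *example_get_folio(struct address_space *mapping, pgoff_t index,
			       struct folio **foliop)
{
	struct folio *folio = read_mapping_folio(mapping, index, NULL);

	if (IS_ERR(folio))
		return ERR_CAST(folio);
	*foliop = folio;
	/* Map the start of the folio; undone later by folio_release_kmap(). */
	return kmap_local_folio(folio, 0);
}

static int example_scan_block(struct address_space *mapping, pgoff_t index)
{
	struct folio *folio;
	char *kaddr = example_get_folio(mapping, index, &folio);

	if (IS_ERR(kaddr))
		return PTR_ERR(kaddr);

	/*
	 * folio_lock() operates on the folio directly; lock_page() would
	 * first call compound_head() to find the folio from the page.
	 */
	folio_lock(folio);

	/* ... scan entries between kaddr and kaddr + folio_size(folio) ... */

	folio_unlock(folio);

	/* kunmap_local() plus folio_put() in one call, as the patch does. */
	folio_release_kmap(folio, kaddr);
	return 0;
}

The same reasoning explains the six compound_head() calls mentioned in the commit message: each of lock_page(), unlock_page() (twice), unmap_and_put_page() (twice) and offset_in_page() resolved the page to its folio internally, and the folio-based replacements make that resolution unnecessary.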