From patchwork Wed Dec 22 23:29:05 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 12697591
Subject: [PATCH v4 60/68] 9p: Copy local writes to the cache when writing
 to the server
From: David Howells <dhowells@redhat.com>
To: linux-cachefs@redhat.com
Cc: Eric Van Hensbergen, Latchesar Ionkov, Dominique Martinet,
 v9fs-developer@lists.sourceforge.net, dhowells@redhat.com,
 Trond Myklebust, Anna Schumaker, Steve French, Jeff Layton,
 Matthew Wilcox, Alexander Viro, Omar Sandoval, JeffleXu,
 Linus Torvalds, linux-afs@lists.infradead.org,
 linux-nfs@vger.kernel.org, linux-cifs@vger.kernel.org,
 ceph-devel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org
Date: Wed, 22 Dec 2021 23:29:05 +0000
Message-ID: <164021574522.640689.13849966660182529125.stgit@warthog.procyon.org.uk>
In-Reply-To: <164021479106.640689.17404516570194656552.stgit@warthog.procyon.org.uk>
References: <164021479106.640689.17404516570194656552.stgit@warthog.procyon.org.uk>
User-Agent: StGit/0.23

When writing to the server from v9fs_vfs_writepage(), copy the data to
the cache object too.

To make this possible, the cookie must have its active users count
incremented when the page is dirtied and kept incremented until we manage
to clean up all the pages.  This allows the writeback to take place after
the last file struct is released.

This is done by taking a use on the cookie in v9fs_set_page_dirty() if we
haven't already done so (controlled by the I_PINNING_FSCACHE_WB flag) and
dropping the pin in v9fs_write_inode() if __writeback_single_inode()
clears all the outstanding dirty pages (conveyed by the
unpinned_fscache_wb flag in the writeback_control struct).

Inode eviction must also clear the flag after truncating away all the
outstanding pages.

In the future this will be handled more gracefully by netfslib.

Changes
=======
ver #3:
 - Canonicalise the coherency data to make it endianness-independent.

ver #2:
 - Fix an unused-var warning due to CONFIG_9P_FSCACHE=n [1].
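[ For context: the pinning half of this scheme lives in
  fscache_set_page_dirty(), added earlier in this series.  A rough sketch
  of that helper -- paraphrased rather than quoted from the series, so see
  fs/fscache/io.c for the authoritative version:

	int fscache_set_page_dirty(struct page *page,
				   struct fscache_cookie *cookie)
	{
		struct inode *inode = page->mapping->host;
		bool need_use = false;

		if (!__set_page_dirty_nobuffers(page))
			return 0;

		/* On the clean->dirty transition, take a "use" on the
		 * cookie and mark the inode so that writeback and
		 * eviction know to drop it again later.
		 */
		spin_lock(&inode->i_lock);
		if (!(inode->i_state & I_PINNING_FSCACHE_WB)) {
			inode->i_state |= I_PINNING_FSCACHE_WB;
			need_use = true;
		}
		spin_unlock(&inode->i_lock);

		if (need_use)
			fscache_use_cookie(cookie, true);
		return 1;
	}
]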
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Eric Van Hensbergen
cc: Latchesar Ionkov
cc: Dominique Martinet
cc: v9fs-developer@lists.sourceforge.net
cc: linux-cachefs@redhat.com
Link: https://lore.kernel.org/r/163819667027.215744.13815687931204222995.stgit@warthog.procyon.org.uk/ # v1
Link: https://lore.kernel.org/r/163906978015.143852.10646669694345706328.stgit@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/163967180760.1823006.5831751873616248910.stgit@warthog.procyon.org.uk/ # v3
---

 fs/9p/vfs_addr.c  | 46 +++++++++++++++++++++++++++++++++++++++++++++-
 fs/9p/vfs_inode.c |  4 ++++
 fs/9p/vfs_super.c |  3 +++
 3 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 4f5ce4aca317..f3f349f460e5 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -137,6 +137,7 @@ static void v9fs_vfs_readahead(struct readahead_control *ractl)
 static int v9fs_release_page(struct page *page, gfp_t gfp)
 {
 	struct folio *folio = page_folio(page);
+	struct inode *inode = folio_inode(folio);
 
 	if (folio_test_private(folio))
 		return 0;
@@ -147,6 +148,7 @@ static int v9fs_release_page(struct page *page, gfp_t gfp)
 		folio_wait_fscache(folio);
 	}
 #endif
+	fscache_note_page_release(v9fs_inode_cookie(V9FS_I(inode)));
 	return 1;
 }
 
@@ -165,10 +167,25 @@ static void v9fs_invalidate_page(struct page *page, unsigned int offset,
 		folio_wait_fscache(folio);
 }
 
+static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
+				     bool was_async)
+{
+	struct v9fs_inode *v9inode = priv;
+	__le32 version;
+
+	if (IS_ERR_VALUE(transferred_or_error) &&
+	    transferred_or_error != -ENOBUFS) {
+		version = cpu_to_le32(v9inode->qid.version);
+		fscache_invalidate(v9fs_inode_cookie(v9inode), &version,
+				   i_size_read(&v9inode->vfs_inode), 0);
+	}
+}
+
 static int v9fs_vfs_write_folio_locked(struct folio *folio)
 {
 	struct inode *inode = folio_inode(folio);
 	struct v9fs_inode *v9inode = V9FS_I(inode);
+	struct fscache_cookie *cookie = v9fs_inode_cookie(v9inode);
 	loff_t start = folio_pos(folio);
 	loff_t i_size = i_size_read(inode);
 	struct iov_iter from;
@@ -185,10 +202,21 @@ static int v9fs_vfs_write_folio_locked(struct folio *folio)
 	/* We should have writeback_fid always set */
 	BUG_ON(!v9inode->writeback_fid);
 
+	folio_wait_fscache(folio);
 	folio_start_writeback(folio);
 
 	p9_client_write(v9inode->writeback_fid, start, &from, &err);
 
+	if (err == 0 &&
+	    fscache_cookie_enabled(cookie) &&
+	    test_bit(FSCACHE_COOKIE_IS_CACHING, &cookie->flags)) {
+		folio_start_fscache(folio);
+		fscache_write_to_cache(v9fs_inode_cookie(v9inode),
+				       folio_mapping(folio), start, len, i_size,
+				       v9fs_write_to_cache_done, v9inode,
+				       true);
+	}
+
 	folio_end_writeback(folio);
 	return err;
 }
@@ -307,6 +335,7 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
 	loff_t last_pos = pos + copied;
 	struct folio *folio = page_folio(subpage);
 	struct inode *inode = mapping->host;
+	struct v9fs_inode *v9inode = V9FS_I(inode);
 
 	p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
 
@@ -326,6 +355,7 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
 	if (last_pos > inode->i_size) {
 		inode_add_bytes(inode, last_pos - inode->i_size);
 		i_size_write(inode, last_pos);
+		fscache_update_cookie(v9fs_inode_cookie(v9inode), NULL, &last_pos);
 	}
 	folio_mark_dirty(folio);
 out:
@@ -335,11 +365,25 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
 	return copied;
 }
 
+#ifdef CONFIG_9P_FSCACHE
+/*
+ * Mark a page as having been made dirty and thus needing writeback.  We also
+ * need to pin the cache object to write back to.
+ */
+static int v9fs_set_page_dirty(struct page *page)
+{
+	struct v9fs_inode *v9inode = V9FS_I(page->mapping->host);
+
+	return fscache_set_page_dirty(page, v9fs_inode_cookie(v9inode));
+}
+#else
+#define v9fs_set_page_dirty __set_page_dirty_nobuffers
+#endif
 
 const struct address_space_operations v9fs_addr_operations = {
 	.readpage = v9fs_vfs_readpage,
 	.readahead = v9fs_vfs_readahead,
-	.set_page_dirty = __set_page_dirty_nobuffers,
+	.set_page_dirty = v9fs_set_page_dirty,
 	.writepage = v9fs_vfs_writepage,
 	.write_begin = v9fs_write_begin,
 	.write_end = v9fs_write_end,
diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
index 00366bf1ac2c..2a10242c79c7 100644
--- a/fs/9p/vfs_inode.c
+++ b/fs/9p/vfs_inode.c
@@ -380,8 +380,12 @@ struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode, dev_t rdev)
 void v9fs_evict_inode(struct inode *inode)
 {
 	struct v9fs_inode *v9inode = V9FS_I(inode);
+	__le32 version;
 
 	truncate_inode_pages_final(&inode->i_data);
+	version = cpu_to_le32(v9inode->qid.version);
+	fscache_clear_inode_writeback(v9fs_inode_cookie(v9inode), inode,
+				      &version);
 	clear_inode(inode);
 	filemap_fdatawrite(&inode->i_data);
 
diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
index b739e02f5ef7..97e23b4e6982 100644
--- a/fs/9p/vfs_super.c
+++ b/fs/9p/vfs_super.c
@@ -20,6 +20,7 @@
 #include <linux/slab.h>
 #include <linux/statfs.h>
 #include <linux/magic.h>
+#include <linux/fscache.h>
 #include <net/9p/9p.h>
 #include <net/9p/client.h>
 
@@ -309,6 +310,7 @@ static int v9fs_write_inode(struct inode *inode,
 		__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
 		return ret;
 	}
+	fscache_unpin_writeback(wbc, v9fs_inode_cookie(v9inode));
 	return 0;
 }
 
@@ -332,6 +334,7 @@ static int v9fs_write_inode_dotl(struct inode *inode,
 		__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
 		return ret;
 	}
+	fscache_unpin_writeback(wbc, v9fs_inode_cookie(v9inode));
 	return 0;
 }