From patchwork Mon May 4 17:15:57 2020
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 11527215
Subject: [RFC PATCH 56/61] afs: Copy local writes to the cache when writing to the server
From: David Howells <dhowells@redhat.com>
To: Trond Myklebust, Anna Schumaker, Steve French, Jeff Layton
Cc: dhowells@redhat.com, Matthew Wilcox, Alexander Viro,
    linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org,
    linux-cifs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs-developer@lists.sourceforge.net, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Date: Mon, 04 May 2020 18:15:57 +0100
Message-ID: <158861255792.340223.1309939976492492377.stgit@warthog.procyon.org.uk>
In-Reply-To: <158861203563.340223.7585359869938129395.stgit@warthog.procyon.org.uk>
References: <158861203563.340223.7585359869938129395.stgit@warthog.procyon.org.uk>
User-Agent: StGit/0.21

When writing to the server from afs_writepage() or afs_writepages(), copy
the data to the cache object too.
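
In other words, write-back becomes write-through: when a run of dirty pages
is sent to the server by afs_write_back_from_locked_page(), the same bytes
are also written to the cache object, and the cached copy is invalidated if
it can no longer be trusted.  As a rough, userspace-only analogy of that
behaviour (illustrative sketch; the file names and the write_through()
helper are invented here, and the kernel code in the diff below uses
fscache_write()/fscache_invalidate() instead):

	/* Userspace sketch only -- not kernel code. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/types.h>
	#include <unistd.h>

	static int write_through(int server_fd, int cache_fd,
				 const void *buf, size_t len, off_t pos)
	{
		/* Speculatively copy the dirty data into the local cache... */
		ssize_t copied = pwrite(cache_fd, buf, len, pos);

		/* ...then store it to the authoritative server. */
		ssize_t stored = pwrite(server_fd, buf, len, pos);

		if (stored != (ssize_t)len || copied != (ssize_t)len) {
			/* The store (or the cache copy) failed, so the cached
			 * data can no longer be trusted: invalidate it.
			 */
			if (ftruncate(cache_fd, 0) == -1)
				perror("ftruncate");
			return -1;
		}
		return 0;
	}

	int main(void)
	{
		int server_fd = open("server.dat", O_RDWR | O_CREAT, 0644);
		int cache_fd = open("cache.dat", O_RDWR | O_CREAT, 0644);
		static const char data[] = "dirty page contents";

		if (server_fd < 0 || cache_fd < 0)
			return 1;
		if (write_through(server_fd, cache_fd, data,
				  sizeof(data) - 1, 0) == 0)
			printf("written to server and cache\n");
		close(cache_fd);
		close(server_fd);
		return 0;
	}

The kernel version is asynchronous: PG_fscache is set on each page alongside
PG_writeback before the cache write is started and cleared again from the
completion handler, so anyone else who wants to write those pages to the
cache waits for the in-flight write first.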
Signed-off-by: David Howells <dhowells@redhat.com>
---

 fs/afs/write.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 119 insertions(+), 5 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index 312d8f07533e..38637dc6043c 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -13,6 +13,9 @@
 #include <linux/pagevec.h>
 #include "internal.h"
 
+static void afs_write_to_cache(struct afs_vnode *vnode,
+			       pgoff_t start, pgoff_t end, loff_t a, loff_t b);
+
 /*
  * mark a page as having been made dirty and thus needing writeback
  */
@@ -389,6 +392,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
 	count = 1;
 	if (test_set_page_writeback(primary_page))
 		BUG();
+	if (TestSetPageFsCache(primary_page))
+		BUG();
 
 	/* Find all consecutive lockable dirty pages that have contiguous
 	 * written regions, stopping when we find a page that is not
@@ -437,7 +442,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
 				break;
 			if (!trylock_page(page))
 				break;
-			if (!PageDirty(page) || PageWriteback(page)) {
+			if (!PageDirty(page) || PageWriteback(page) ||
+			    PageFsCache(page)) {
 				unlock_page(page);
 				break;
 			}
@@ -459,6 +465,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
 				BUG();
 			if (test_set_page_writeback(page))
 				BUG();
+			if (TestSetPageFsCache(page))
+				BUG();
 			unlock_page(page);
 			put_page(page);
 		}
@@ -489,8 +497,13 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
 	b = last;
 	b <<= PAGE_SHIFT;
 	b += to;
 
-	iov_iter_mapping(&iter, WRITE, mapping, a, b - a);
+	/* Speculatively write to the cache.  We have to fix this up later if
+	 * the store fails.
+	 */
+	afs_write_to_cache(vnode, first, last + 1, a, b);
+
+	iov_iter_mapping(&iter, WRITE, mapping, a, b - a);
 	ret = afs_store_data(vnode, &iter, a, first, last);
 	switch (ret) {
 	case 0:
@@ -543,6 +556,10 @@ int afs_writepage(struct page *page, struct writeback_control *wbc)
 
 	_enter("{%lx},", page->index);
 
+#ifdef CONFIG_AFS_FSCACHE
+	wait_on_page_fscache(page);
+#endif
+
 	ret = afs_write_back_from_locked_page(page->mapping, wbc, page,
 					      wbc->range_end >> PAGE_SHIFT);
 	if (ret < 0) {
@@ -570,7 +587,7 @@ static int afs_writepages_region(struct address_space *mapping,
 
 	do {
 		n = find_get_pages_range_tag(mapping, &index, end,
-					PAGECACHE_TAG_DIRTY, 1, &page);
+					     PAGECACHE_TAG_DIRTY, 1, &page);
 		if (!n)
 			break;
 
@@ -595,10 +612,14 @@ static int afs_writepages_region(struct address_space *mapping,
 			continue;
 		}
 
-		if (PageWriteback(page)) {
+		if (PageWriteback(page) || PageFsCache(page)) {
 			unlock_page(page);
-			if (wbc->sync_mode != WB_SYNC_NONE)
+			if (wbc->sync_mode != WB_SYNC_NONE) {
 				wait_on_page_writeback(page);
+#ifdef CONFIG_AFS_FSCACHE
+				wait_on_page_fscache(page);
+#endif
+			}
 			put_page(page);
 			continue;
 		}
@@ -818,3 +839,96 @@ int afs_launder_page(struct page *page)
 	ClearPagePrivate(page);
 	return ret;
 }
+
+/*
+ * Clear the PG_fscache flag from a sequence of pages and wake up anyone who's
+ * waiting.  The last page is included in the sequence.
+ */
+static void afs_clear_fscache_bits(struct address_space *mapping,
+				   pgoff_t start, pgoff_t last)
+{
+	struct page *page;
+
+	XA_STATE(xas, &mapping->i_pages, start);
+
+	rcu_read_lock();
+	xas_for_each(&xas, page, last) {
+		unlock_page_fscache(page);
+	}
+	rcu_read_unlock();
+}
+
+/*
+ * Deal with the completion of writing the data to the cache.
+ */
+static void afs_write_to_cache_done(struct fscache_io_request *_req)
+{
+	struct afs_read *req = container_of(_req, struct afs_read, cache);
+	pgoff_t index = req->cache.pos >> PAGE_SHIFT;
+	pgoff_t last = index + req->cache.nr_pages - 1;
+
+	_enter("%lx,%x,%llx", index, req->cache.nr_pages, req->cache.transferred);
+
+	afs_clear_fscache_bits(req->cache.mapping, index, last);
+
+	if (req->cache.error && req->cache.error != -ENOBUFS) {
+		struct afs_vnode *vnode = req->vnode;
+		struct afs_vnode_cache_aux aux = {
+			.data_version = vnode->status.data_version,
+		};
+		_debug("inval wr %d", req->cache.error);
+		fscache_invalidate(req->cache.cookie, &aux,
+				   i_size_read(&vnode->vfs_inode), 0);
+	}
+}
+
+static const struct fscache_io_request_ops afs_write_req_ops = {
+	.get	= afs_req_get,
+	.put	= afs_req_put,
+};
+
+/*
+ * Save the write to the cache also.
+ */
+static void afs_write_to_cache(struct afs_vnode *vnode,
+			       pgoff_t start, pgoff_t end, loff_t a, loff_t b)
+{
+	struct afs_read *req;
+	struct iov_iter iter;
+	unsigned int x;
+
+	struct fscache_extent extent = {
+		.start		= start,
+		.block_end	= end,
+		.limit		= end,
+	};
+
+	_enter("%lx,%lx,%llx,%llx", start, end, a, b);
+
+	x = fscache_shape_extent(afs_vnode_cache(vnode), &extent,
+				 i_size_read(&vnode->vfs_inode), true);
+	if (!(x & FSCACHE_WRITE_TO_CACHE))
+		goto abandon;
+
+	req = afs_alloc_read(GFP_KERNEL);
+	if (!req)
+		goto abandon;
+
+	fscache_init_io_request(&req->cache, afs_vnode_cache(vnode),
+				&afs_write_req_ops);
+	req->vnode = vnode;
+	req->cache.pos = round_down(a, extent.dio_block_size);
+	req->cache.len = round_up(b, extent.dio_block_size) - req->cache.pos;
+	req->cache.nr_pages = end - start;
+	req->cache.mapping = vnode->vfs_inode.i_mapping;
+	req->cache.io_done = &afs_write_to_cache_done;
+
+	iov_iter_mapping(&iter, WRITE, req->cache.mapping,
+			 req->cache.pos, req->cache.len);
+	fscache_write(&req->cache, &iter);
+	afs_put_read(req);
+	return;
+
+abandon:
+	afs_clear_fscache_bits(vnode->vfs_inode.i_mapping, start, end);
+}
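
A note on the pos/len rounding in afs_write_to_cache() above: the byte range
[a, b) covered by the run of dirty pages is expanded outwards to the cache's
DIO block size before being written.  A minimal, standalone illustration of
that arithmetic (the 4096-byte block size is an arbitrary example value; the
kernel uses the round_down()/round_up() macros with extent.dio_block_size):

	#include <stdio.h>

	/* Same arithmetic as the kernel's round_down()/round_up() macros
	 * for power-of-two block sizes.
	 */
	#define round_down(x, y) ((x) & ~((__typeof__(x))(y) - 1))
	#define round_up(x, y)   ((((x) - 1) | ((__typeof__(x))(y) - 1)) + 1)

	int main(void)
	{
		unsigned long long a = 5000, b = 13000, block = 4096;
		unsigned long long pos = round_down(a, block);
		unsigned long long len = round_up(b, block) - pos;

		/* Prints pos=4096 len=12288: three whole cache blocks. */
		printf("cache write: pos=%llu len=%llu\n", pos, len);
		return 0;
	}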