From patchwork Thu Nov 17 05:30:15 2022
X-Patchwork-Submitter: Jingbo Xu
X-Patchwork-Id: 13046133
From: Jingbo Xu
To: xiang@kernel.org, chao@kernel.org, jlayton@kernel.org,
    linux-erofs@lists.ozlabs.org, linux-cachefs@redhat.com,
    dhowells@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v4 0/2] fscache,cachefiles: add prepare_ondemand_read() interface
Date: Thu, 17 Nov 2022 13:30:15 +0800
Message-Id: <20221117053017.21074-1-jefflexu@linux.alibaba.com>

v4:
- patch 1
  - make cachefiles_do_prepare_read() pass start by value (Jeff Layton)
  - adjust the indentation of the parameter/argument list, so that the
    parameters are all lined up (David)
  - pass flags in to cachefiles_prepare_ondemand_read(), so that it can
    tail-call cachefiles_do_prepare_read() directly without shuffling
    arguments around (David)
  - declare cachefiles_do_prepare_read() as inline, to eliminate one
    extra function call and argument copy when calling
    cachefiles_do_prepare_read() (David)

v3:
- rebase to v6.1-rc5; the xas_retry() check in patch 2 has been
  extracted as a separate fix [1]

[1] commit 37020bbb71d9 ("erofs: fix missing xas_retry() in fscache mode")
    (https://github.com/torvalds/linux/commit/37020bbb71d9)

v2:
- patch 1: the generic routine, i.e. cachefiles_do_prepare_read(), now
  accepts a parameter list instead of a netfs_io_subrequest, and thus
  some debug info retrieved from the netfs_io_subrequest is removed
  from trace_cachefiles_prep_read().
- patch 2: add xas_retry() checking in erofs_fscache_req_complete()

[Rationale]
===========

Fscache landed in the Linux kernel as a generic cache management
framework over a decade ago. It manages cache data availability and
fetches data when needed. Currently it is mainly used by network
filesystems, but in principle the core caching subsystem could be used
more widely.

We really like the fscache framework, and we believe it is better to
reuse it where possible rather than duplicating an alternative, for
ease of maintenance and testing. Therefore, for our container image
use cases, we have applied the existing fscache code to implement
on-demand read for erofs over the past months. For more details, also
see [1].

In short, each erofs filesystem here is composed of multiple blobs (or
devices). Each blob corresponds to one fscache cookie, in order to
strictly follow the on-disk format and implement image downloading in
a deterministic manner, which means each blob has a unique checksum
and is signed by vendors.
Data of each erofs inode can be scattered among multiple blobs
(cookies), since erofs supports chunk-level deduplication. In that
case each erofs inode can correspond to multiple cookies, and there is
a logical-to-physical offset mapping between the logical offset in the
erofs inode and the physical offset in the backing file.

As described above, the per-cookie netfs model cannot be used here
directly. Instead, we would like to propose/decouple a simple set of
raw fscache APIs for accessing the cache, for all filesystems to use.
We believe this is useful, since it is like the relationship between
raw bio and iomap, both of which are useful for local filesystems.
fscache_read() seems a reasonable candidate and is enough for this use
case.

In addition, the on-demand read feature relies on .prepare_read() to
reuse the hole-detection logic as much as possible. However, after the
fscache/netfs rework, libnetfs is the preferred way of accessing
fscache, making .prepare_read() closely coupled with libnetfs, or more
precisely, with netfs_io_subrequest.

[What We Do]
============

As we discussed previously, we propose a new interface, i.e.
.prepare_ondemand_read(), dedicated to the on-demand read scenario and
independent of netfs_io_subrequest. The netfs code will still use the
original .prepare_read() as usual.

Jingbo Xu (2):
  fscache,cachefiles: add prepare_ondemand_read() callback
  erofs: switch to prepare_ondemand_read() in fscache mode

 fs/cachefiles/io.c                |  77 ++++++----
 fs/erofs/fscache.c                | 260 +++++++++++-------------------
 include/linux/netfs.h             |   8 +
 include/trace/events/cachefiles.h |  27 ++--
 4 files changed, 164 insertions(+), 208 deletions(-)