From patchwork Thu Nov 24 03:42:10 2022
X-Patchwork-Submitter: Jingbo Xu
X-Patchwork-Id: 13054574
From: Jingbo Xu <jefflexu@linux.alibaba.com>
To: dhowells@redhat.com, jlayton@kernel.org, xiang@kernel.org,
    chao@kernel.org, linux-cachefs@redhat.com, linux-erofs@lists.ozlabs.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 0/2] fscache,cachefiles: add prepare_ondemand_read() interface
Date: Thu, 24 Nov 2022 11:42:10 +0800
Message-Id: <20221124034212.81892-1-jefflexu@linux.alibaba.com>

v5:
- patch 1: add the netfs_inode number back to trace_cachefiles_prep_read();
  .prepare_ondemand_read() now also accepts an "ino_t ino" parameter (David)
- add Reviewed-by tags

v4: https://lore.kernel.org/all/20221117053017.21074-1-jefflexu@linux.alibaba.com/
- patch 1:
  - make cachefiles_do_prepare_read() pass start by value (Jeff Layton)
  - adjust the indentation of the parameter/argument lists so that they
    all line up (David)
  - pass flags in to cachefiles_prepare_ondemand_read(), so that it can
    tail-call cachefiles_do_prepare_read() directly without shuffling
    arguments around (David)
  - declare cachefiles_do_prepare_read() as inline, to eliminate one
    extra function call and argument copy when calling it (David)

v3:
- rebase to v6.1-rc5; the xas_retry() check in patch 2 has been extracted
  into a separate fix [1]

[1] commit 37020bbb71d9 ("erofs: fix missing xas_retry() in fscache mode")
    (https://github.com/torvalds/linux/commit/37020bbb71d9)

v2:
- patch 1: the generic routine, i.e. cachefiles_do_prepare_read(), now
  accepts a parameter list instead of a netfs_io_subrequest, and thus some
  debug info retrieved from the netfs_io_subrequest has been removed from
  trace_cachefiles_prep_read()
- patch 2: add xas_retry() checking in erofs_fscache_req_complete()

[Rationale]
===========

Fscache landed as a generic caching management framework in the Linux
kernel over a decade ago. It manages cache data availability and fetches
data when needed. It is currently used mainly by network filesystems, but
in principle the caching subsystem can be used more widely.

We really like the fscache framework, and we believe it is better to reuse
it where possible, rather than duplicating alternatives, for better
maintenance and testing. Therefore, for our container image use cases, we
have applied the existing fscache to implement on-demand read for erofs
over the past months. For more details, also see [1].

In short, each erofs filesystem here is composed of multiple blobs (or
devices).
Each blob corresponds to one fscache cookie, so as to strictly follow the
on-disk format and implement image downloading in a deterministic manner:
each blob has a unique checksum and is signed by vendors.

Since erofs supports chunk-level deduplication, the data of each erofs
inode can be scattered among multiple blobs (cookies). In this case, each
erofs inode can correspond to multiple cookies, and there is a
logical-to-physical offset mapping between the logical offset in the erofs
inode and the physical offset in the backing file.

As described above, the per-cookie netfs model cannot be used here
directly. Instead, we would like to propose/decouple a simple set of raw
fscache APIs for accessing the cache, available to all filesystems. We
believe this is useful, since the relationship is like that between raw
bio and iomap, both of which are useful for local filesystems.
fscache_read() seems a reasonable candidate and is enough for this use
case.

In addition, the on-demand read feature relies on .prepare_read() to reuse
the hole-detection logic as much as possible. However, after the
fscache/netfs rework, libnetfs is the preferred way of accessing fscache,
which makes .prepare_read() closely coupled with libnetfs, or more
precisely, with netfs_io_subrequest.

[What We Do]
============

As previously discussed, we propose a new interface, i.e.
.prepare_ondemand_read(), dedicated to the on-demand read scenario and
independent of netfs_io_subrequest. The netfs will still use the original
.prepare_read() as usual.

Jingbo Xu (2):
  fscache,cachefiles: add prepare_ondemand_read() callback
  erofs: switch to prepare_ondemand_read() in fscache mode

 fs/cachefiles/io.c                |  77 +++++----
 fs/erofs/fscache.c                | 261 +++++++++++-------------
 include/linux/netfs.h             |   8 +
 include/trace/events/cachefiles.h |  27 ++--
 4 files changed, 166 insertions(+), 207 deletions(-)