From patchwork Fri Oct 13 16:03:42 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13421283
From: David Howells
To: Jeff Layton, Steve French
Cc: David Howells, Matthew Wilcox, Marc Dionne, Paulo Alcantara,
    Shyam Prasad N, Tom Talpey, Dominique Martinet, Ilya Dryomov,
    Christian Brauner, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-cachefs@redhat.com
Subject: [RFC PATCH 13/53] netfs: Add bounce buffering support
Date: Fri, 13 Oct 2023 17:03:42 +0100
Message-ID: <20231013160423.2218093-14-dhowells@redhat.com>
In-Reply-To: <20231013160423.2218093-1-dhowells@redhat.com>
References: <20231013160423.2218093-1-dhowells@redhat.com>

Add a second xarray struct to netfs_io_request to hold a bounce buffer
for use when we have to deal with encrypted/compressed data, or when we
have to up/download data in blocks larger than we were asked for.

(An illustrative usage sketch, not part of the patch, follows the diff
below.)
Signed-off-by: David Howells
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/netfs/io.c         | 6 +++++-
 fs/netfs/objects.c    | 3 +++
 include/linux/netfs.h | 2 ++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index e9d408e211b8..d8e9cd6ce338 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -643,7 +643,11 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
 		return -EIO;
 	}
 
-	rreq->io_iter = rreq->iter;
+	if (test_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags))
+		iov_iter_xarray(&rreq->io_iter, ITER_DEST, &rreq->bounce,
+				rreq->start, rreq->len);
+	else
+		rreq->io_iter = rreq->iter;
 
 	INIT_WORK(&rreq->work, netfs_rreq_work);
 
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index 4396318081bf..0782a284dda8 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -35,6 +35,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	rreq->inode = inode;
 	rreq->i_size = i_size_read(inode);
 	rreq->debug_id = atomic_inc_return(&debug_ids);
+	xa_init(&rreq->bounce);
 	INIT_LIST_HEAD(&rreq->subrequests);
 	refcount_set(&rreq->ref, 1);
 	__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
@@ -43,6 +44,7 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
 	if (rreq->netfs_ops->init_request) {
 		ret = rreq->netfs_ops->init_request(rreq, file);
 		if (ret < 0) {
+			xa_destroy(&rreq->bounce);
 			kfree(rreq);
 			return ERR_PTR(ret);
 		}
@@ -96,6 +98,7 @@ static void netfs_free_request(struct work_struct *work)
 		}
 		kvfree(rreq->direct_bv);
 	}
+	netfs_clear_buffer(&rreq->bounce);
 	kfree_rcu(rreq, rcu);
 	netfs_stat_d(&netfs_n_rh_rreq);
 }
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index e8d702ac6968..a7220e906287 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -196,6 +196,7 @@ struct netfs_io_request {
 	struct iov_iter iter;		/* Unencrypted-side iterator */
 	struct iov_iter io_iter;	/* I/O (Encrypted-side) iterator */
 	struct bio_vec *direct_bv;	/* DIO buffer list (when handling iovec-iter) */
+	struct xarray bounce;		/* Bounce buffer (eg. for crypto/compression) */
 	void *netfs_priv;		/* Private data for the netfs */
 	unsigned int direct_bv_count;	/* Number of elements in bv[] */
 	unsigned int debug_id;
@@ -220,6 +221,7 @@ struct netfs_io_request {
 #define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
 #define NETFS_RREQ_NONBLOCK		6	/* Don't block if possible (O_NONBLOCK) */
 #define NETFS_RREQ_BLOCKED		7	/* We blocked */
+#define NETFS_RREQ_USE_BOUNCE_BUFFER	8	/* Use bounce buffer */
 	const struct netfs_request_ops *netfs_ops;
 };
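
For illustration only (this sketch is not part of the patch): a filesystem
could opt a request into the bounce buffer from its ->init_request() hook,
roughly as below. The "myfs" names and the myfs_inode_is_encrypted() helper
are hypothetical; only struct netfs_io_request, its flags field and
NETFS_RREQ_USE_BOUNCE_BUFFER come from the netfs code touched by this patch.

/* Hypothetical netfs_request_ops::init_request implementation. */
static int myfs_init_request(struct netfs_io_request *rreq, struct file *file)
{
	/* myfs_inode_is_encrypted() is an assumed per-filesystem test. */
	if (myfs_inode_is_encrypted(rreq->inode))
		/*
		 * With this flag set, netfs_begin_read() points
		 * rreq->io_iter at the rreq->bounce xarray instead of
		 * copying the pagecache iterator in rreq->iter.
		 */
		__set_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags);
	return 0;
}

The hook would be wired up through the filesystem's netfs_request_ops table
(.init_request = myfs_init_request); the download would then land in the
bounce xarray, with decryption/decompression writing the plaintext out
through the unencrypted-side iterator in rreq->iter.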