From patchwork Fri Nov 8 17:32:10 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13868567
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 09/33] netfs: Split retry code out of fs/netfs/write_collect.c
Date: Fri, 8 Nov 2024 17:32:10 +0000
Message-ID: <20241108173236.1382366-10-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>

Split write-retry code out of fs/netfs/write_collect.c as it will become
more elaborate when content crypto is introduced.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/Makefile        |   3 +-
 fs/netfs/internal.h      |   5 +
 fs/netfs/write_collect.c | 215 ------------------------------------
 fs/netfs/write_retry.c   | 227 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 234 insertions(+), 216 deletions(-)
 create mode 100644 fs/netfs/write_retry.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index 7492c4aa331e..cbb30bdeacc4 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -15,7 +15,8 @@ netfs-y := \
 	read_retry.o \
 	rolling_buffer.o \
 	write_collect.o \
-	write_issue.o
+	write_issue.o \
+	write_retry.o
 
 netfs-$(CONFIG_NETFS_STATS) += stats.o
 
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 6aa2a8d49b37..73887525e939 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -189,6 +189,11 @@ int netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_contr
 			   struct folio *writethrough_cache);
 int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len);
 
+/*
+ * write_retry.c
+ */
+void netfs_retry_writes(struct netfs_io_request *wreq);
+
 /*
  * Miscellaneous functions.
  */
 
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index f3fab41ca3e5..85e8e94da90a 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -151,221 +151,6 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
 	wreq->buffer.first_tail_slot = slot;
 }
 
-/*
- * Perform retries on the streams that need it.
- */
-static void netfs_retry_write_stream(struct netfs_io_request *wreq,
-				     struct netfs_io_stream *stream)
-{
-	struct list_head *next;
-
-	_enter("R=%x[%x:]", wreq->debug_id, stream->stream_nr);
-
-	if (list_empty(&stream->subrequests))
-		return;
-
-	if (stream->source == NETFS_UPLOAD_TO_SERVER &&
-	    wreq->netfs_ops->retry_request)
-		wreq->netfs_ops->retry_request(wreq, stream);
-
-	if (unlikely(stream->failed))
-		return;
-
-	/* If there's no renegotiation to do, just resend each failed subreq. */
-	if (!stream->prepare_write) {
-		struct netfs_io_subrequest *subreq;
-
-		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
-			if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
-				break;
-			if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
-				struct iov_iter source = subreq->io_iter;
-
-				iov_iter_revert(&source, subreq->len - source.count);
-				__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
-				netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
-				netfs_reissue_write(stream, subreq, &source);
-			}
-		}
-		return;
-	}
-
-	next = stream->subrequests.next;
-
-	do {
-		struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp;
-		struct iov_iter source;
-		unsigned long long start, len;
-		size_t part;
-		bool boundary = false;
-
-		/* Go through the stream and find the next span of contiguous
-		 * data that we then rejig (cifs, for example, needs the wsize
-		 * renegotiating) and reissue.
-		 */
-		from = list_entry(next, struct netfs_io_subrequest, rreq_link);
-		to = from;
-		start = from->start + from->transferred;
-		len = from->len - from->transferred;
-
-		if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
-		    !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
-			return;
-
-		list_for_each_continue(next, &stream->subrequests) {
-			subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
-			if (subreq->start + subreq->transferred != start + len ||
-			    test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) ||
-			    !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
-				break;
-			to = subreq;
-			len += to->len;
-		}
-
-		/* Determine the set of buffers we're going to use. Each
-		 * subreq gets a subset of a single overall contiguous buffer.
-		 */
-		netfs_reset_iter(from);
-		source = from->io_iter;
-		source.count = len;
-
-		/* Work through the sublist. */
-		subreq = from;
-		list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
-			if (!len)
-				break;
-			/* Renegotiate max_len (wsize) */
-			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-			__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
-			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
-			stream->prepare_write(subreq);
-
-			part = min(len, stream->sreq_max_len);
-			subreq->len = part;
-			subreq->start = start;
-			subreq->transferred = 0;
-			len -= part;
-			start += part;
-			if (len && subreq == to &&
-			    __test_and_clear_bit(NETFS_SREQ_BOUNDARY, &to->flags))
-				boundary = true;
-
-			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
-			netfs_reissue_write(stream, subreq, &source);
-			if (subreq == to)
-				break;
-		}
-
-		/* If we managed to use fewer subreqs, we can discard the
-		 * excess; if we used the same number, then we're done.
-		 */
-		if (!len) {
-			if (subreq == to)
-				continue;
-			list_for_each_entry_safe_from(subreq, tmp,
-						      &stream->subrequests, rreq_link) {
-				trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
-				list_del(&subreq->rreq_link);
-				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
-				if (subreq == to)
-					break;
-			}
-			continue;
-		}
-
-		/* We ran out of subrequests, so we need to allocate some more
-		 * and insert them after.
-		 */
-		do {
-			subreq = netfs_alloc_subrequest(wreq);
-			subreq->source = to->source;
-			subreq->start = start;
-			subreq->debug_index = atomic_inc_return(&wreq->subreq_counter);
-			subreq->stream_nr = to->stream_nr;
-			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
-
-			trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
-					     refcount_read(&subreq->ref),
-					     netfs_sreq_trace_new);
-			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
-
-			list_add(&subreq->rreq_link, &to->rreq_link);
-			to = list_next_entry(to, rreq_link);
-			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-
-			stream->sreq_max_len = len;
-			stream->sreq_max_segs = INT_MAX;
-			switch (stream->source) {
-			case NETFS_UPLOAD_TO_SERVER:
-				netfs_stat(&netfs_n_wh_upload);
-				stream->sreq_max_len = umin(len, wreq->wsize);
-				break;
-			case NETFS_WRITE_TO_CACHE:
-				netfs_stat(&netfs_n_wh_write);
-				break;
-			default:
-				WARN_ON_ONCE(1);
-			}
-
-			stream->prepare_write(subreq);
-
-			part = umin(len, stream->sreq_max_len);
-			subreq->len = subreq->transferred + part;
-			len -= part;
-			start += part;
-			if (!len && boundary) {
-				__set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
-				boundary = false;
-			}
-
-			netfs_reissue_write(stream, subreq, &source);
-			if (!len)
-				break;
-
-		} while (len);
-
-	} while (!list_is_head(next, &stream->subrequests));
-}
-
-/*
- * Perform retries on the streams that need it. If we're doing content
- * encryption and the server copy changed due to a third-party write, we may
- * need to do an RMW cycle and also rewrite the data to the cache.
- */
-static void netfs_retry_writes(struct netfs_io_request *wreq)
-{
-	struct netfs_io_subrequest *subreq;
-	struct netfs_io_stream *stream;
-	int s;
-
-	/* Wait for all outstanding I/O to quiesce before performing retries as
-	 * we may need to renegotiate the I/O sizes.
-	 */
-	for (s = 0; s < NR_IO_STREAMS; s++) {
-		stream = &wreq->io_streams[s];
-		if (!stream->active)
-			continue;
-
-		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
-			wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS,
-				    TASK_UNINTERRUPTIBLE);
-		}
-	}
-
-	// TODO: Enc: Fetch changed partial pages
-	// TODO: Enc: Reencrypt content if needed.
-	// TODO: Enc: Wind back transferred point.
-	// TODO: Enc: Mark cache pages for retry.
-
-	for (s = 0; s < NR_IO_STREAMS; s++) {
-		stream = &wreq->io_streams[s];
-		if (stream->need_retry) {
-			stream->need_retry = false;
-			netfs_retry_write_stream(wreq, stream);
-		}
-	}
-}
-
 /*
  * Collect and assess the results of various write subrequests. We may need to
  * retry some of the results - or even do an RMW cycle for content crypto.
diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
new file mode 100644
index 000000000000..2222c3a6b9d1
--- /dev/null
+++ b/fs/netfs/write_retry.c
@@ -0,0 +1,227 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Network filesystem write retrying.
+ *
+ * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/slab.h>
+#include "internal.h"
+
+/*
+ * Perform retries on the streams that need it.
+ */
+static void netfs_retry_write_stream(struct netfs_io_request *wreq,
+				     struct netfs_io_stream *stream)
+{
+	struct list_head *next;
+
+	_enter("R=%x[%x:]", wreq->debug_id, stream->stream_nr);
+
+	if (list_empty(&stream->subrequests))
+		return;
+
+	if (stream->source == NETFS_UPLOAD_TO_SERVER &&
+	    wreq->netfs_ops->retry_request)
+		wreq->netfs_ops->retry_request(wreq, stream);
+
+	if (unlikely(stream->failed))
+		return;
+
+	/* If there's no renegotiation to do, just resend each failed subreq. */
+	if (!stream->prepare_write) {
+		struct netfs_io_subrequest *subreq;
+
+		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+			if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+				break;
+			if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+				struct iov_iter source = subreq->io_iter;
+
+				iov_iter_revert(&source, subreq->len - source.count);
+				__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+				netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+				netfs_reissue_write(stream, subreq, &source);
+			}
+		}
+		return;
+	}
+
+	next = stream->subrequests.next;
+
+	do {
+		struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp;
+		struct iov_iter source;
+		unsigned long long start, len;
+		size_t part;
+		bool boundary = false;
+
+		/* Go through the stream and find the next span of contiguous
+		 * data that we then rejig (cifs, for example, needs the wsize
+		 * renegotiating) and reissue.
+		 */
+		from = list_entry(next, struct netfs_io_subrequest, rreq_link);
+		to = from;
+		start = from->start + from->transferred;
+		len = from->len - from->transferred;
+
+		if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
+		    !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
+			return;
+
+		list_for_each_continue(next, &stream->subrequests) {
+			subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
+			if (subreq->start + subreq->transferred != start + len ||
+			    test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) ||
+			    !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
+				break;
+			to = subreq;
+			len += to->len;
+		}
+
+		/* Determine the set of buffers we're going to use. Each
+		 * subreq gets a subset of a single overall contiguous buffer.
+		 */
+		netfs_reset_iter(from);
+		source = from->io_iter;
+		source.count = len;
+
+		/* Work through the sublist. */
+		subreq = from;
+		list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
+			if (!len)
+				break;
+			/* Renegotiate max_len (wsize) */
+			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+			__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+			stream->prepare_write(subreq);
+
+			part = min(len, stream->sreq_max_len);
+			subreq->len = part;
+			subreq->start = start;
+			subreq->transferred = 0;
+			len -= part;
+			start += part;
+			if (len && subreq == to &&
+			    __test_and_clear_bit(NETFS_SREQ_BOUNDARY, &to->flags))
+				boundary = true;
+
+			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+			netfs_reissue_write(stream, subreq, &source);
+			if (subreq == to)
+				break;
+		}
+
+		/* If we managed to use fewer subreqs, we can discard the
+		 * excess; if we used the same number, then we're done.
+		 */
+		if (!len) {
+			if (subreq == to)
+				continue;
+			list_for_each_entry_safe_from(subreq, tmp,
+						      &stream->subrequests, rreq_link) {
+				trace_netfs_sreq(subreq, netfs_sreq_trace_discard);
+				list_del(&subreq->rreq_link);
+				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_done);
+				if (subreq == to)
+					break;
+			}
+			continue;
+		}
+
+		/* We ran out of subrequests, so we need to allocate some more
+		 * and insert them after.
+		 */
+		do {
+			subreq = netfs_alloc_subrequest(wreq);
+			subreq->source = to->source;
+			subreq->start = start;
+			subreq->debug_index = atomic_inc_return(&wreq->subreq_counter);
+			subreq->stream_nr = to->stream_nr;
+			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
+
+			trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
+					     refcount_read(&subreq->ref),
+					     netfs_sreq_trace_new);
+			netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+
+			list_add(&subreq->rreq_link, &to->rreq_link);
+			to = list_next_entry(to, rreq_link);
+			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+			stream->sreq_max_len = len;
+			stream->sreq_max_segs = INT_MAX;
+			switch (stream->source) {
+			case NETFS_UPLOAD_TO_SERVER:
+				netfs_stat(&netfs_n_wh_upload);
+				stream->sreq_max_len = umin(len, wreq->wsize);
+				break;
+			case NETFS_WRITE_TO_CACHE:
+				netfs_stat(&netfs_n_wh_write);
+				break;
+			default:
+				WARN_ON_ONCE(1);
+			}
+
+			stream->prepare_write(subreq);
+
+			part = umin(len, stream->sreq_max_len);
+			subreq->len = subreq->transferred + part;
+			len -= part;
+			start += part;
+			if (!len && boundary) {
+				__set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
+				boundary = false;
+			}
+
+			netfs_reissue_write(stream, subreq, &source);
+			if (!len)
+				break;
+
+		} while (len);
+
+	} while (!list_is_head(next, &stream->subrequests));
+}
+
+/*
+ * Perform retries on the streams that need it. If we're doing content
+ * encryption and the server copy changed due to a third-party write, we may
+ * need to do an RMW cycle and also rewrite the data to the cache.
+ */
+void netfs_retry_writes(struct netfs_io_request *wreq)
+{
+	struct netfs_io_subrequest *subreq;
+	struct netfs_io_stream *stream;
+	int s;
+
+	/* Wait for all outstanding I/O to quiesce before performing retries as
+	 * we may need to renegotiate the I/O sizes.
+	 */
+	for (s = 0; s < NR_IO_STREAMS; s++) {
+		stream = &wreq->io_streams[s];
+		if (!stream->active)
+			continue;
+
+		list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
+			wait_on_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS,
+				    TASK_UNINTERRUPTIBLE);
+		}
+	}
+
+	// TODO: Enc: Fetch changed partial pages
+	// TODO: Enc: Reencrypt content if needed.
+	// TODO: Enc: Wind back transferred point.
+	// TODO: Enc: Mark cache pages for retry.
+
+	for (s = 0; s < NR_IO_STREAMS; s++) {
+		stream = &wreq->io_streams[s];
+		if (stream->need_retry) {
+			stream->need_retry = false;
+			netfs_retry_write_stream(wreq, stream);
+		}
+	}
+}