From patchwork Fri Nov 8 17:32:13 2024
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13868570
Received: from
relay.hostedemail.com by smtp.lore.kernel.org; Fri, 8 Nov 2024 17:34:23 +0000 (UTC)
From: David Howells <dhowells@redhat.com>
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet, Marc Dionne,
    Paulo Alcantara, Shyam Prasad N, Tom Talpey, Eric Van Hensbergen,
    Ilya Dryomov, netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-cachefs@redhat.com
Subject: [PATCH v4 12/33] netfs: Don't use bh spinlock
Date: Fri, 8 Nov 2024 17:32:13 +0000
Message-ID: <20241108173236.1382366-13-dhowells@redhat.com>
In-Reply-To: <20241108173236.1382366-1-dhowells@redhat.com>
References: <20241108173236.1382366-1-dhowells@redhat.com>
MIME-Version: 1.0
All accesses to the subrequest lists are now made in process context,
possibly in a workqueue, and never in BH context, so we no longer need to
guard against BH interference when taking the netfs_io_request::lock
spinlock.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeff Layton
cc: linux-cachefs@redhat.com
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
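A note on the rule this patch relies on, placed here above the diffstat so
it stays out of the commit message: spin_lock_bh() disables softirq (BH)
processing on the local CPU while the lock is held, which is only needed
when the same lock can also be taken from BH context -- otherwise a softirq
interrupting the lock holder on the same CPU could spin on the lock and
deadlock. Once every taker of the lock runs in process context (for
example, in a workqueue worker), plain spin_lock() suffices. A minimal
sketch of the two forms follows; demo_lock and the demo_* functions are
hypothetical, for illustration only, and not netfs code:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(demo_lock);	/* hypothetical shared-list lock */

	/* BH-safe form: required while the lock can also be taken from
	 * softirq context; disables local softirqs around the section.
	 */
	static void demo_locker_bh_safe(void)
	{
		spin_lock_bh(&demo_lock);
		/* ... manipulate the shared list ... */
		spin_unlock_bh(&demo_lock);
	}

	/* Plain form: sufficient once all takers run in process context,
	 * as is the case for the netfs subrequest lists after this patch.
	 */
	static void demo_locker_plain(void)
	{
		spin_lock(&demo_lock);
		/* ... manipulate the shared list ... */
		spin_unlock(&demo_lock);
	}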
 fs/netfs/buffered_read.c |  4 ++--
 fs/netfs/direct_read.c   |  4 ++--
 fs/netfs/read_collect.c  | 20 ++++++++++----------
 fs/netfs/read_retry.c    |  8 ++++----
 fs/netfs/write_collect.c |  4 ++--
 fs/netfs/write_issue.c   |  4 ++--
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 6fd4f3bef3b4..4a48b79b8807 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -200,12 +200,12 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 		subreq->len	= size;
 
 		atomic_inc(&rreq->nr_outstanding);
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 		subreq->prev_donated = rreq->prev_donated;
 		rreq->prev_donated = 0;
 		trace_netfs_sreq(subreq, netfs_sreq_trace_added);
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 
 		source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size);
 		subreq->source = source;
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 54027fd14904..1a20cc3979c7 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -68,12 +68,12 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
 		subreq->len	= size;
 
 		atomic_inc(&rreq->nr_outstanding);
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 		subreq->prev_donated = rreq->prev_donated;
 		rreq->prev_donated = 0;
 		trace_netfs_sreq(subreq, netfs_sreq_trace_added);
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 
 		netfs_stat(&netfs_n_rh_download);
 		if (rreq->netfs_ops->prepare_read) {
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 146abb2e399a..53ef7e0f3e9c 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -142,7 +142,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	prev_donated = READ_ONCE(subreq->prev_donated);
 	next_donated = READ_ONCE(subreq->next_donated);
 	if (prev_donated || next_donated) {
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		prev_donated = subreq->prev_donated;
 		next_donated = subreq->next_donated;
 		subreq->start -= prev_donated;
@@ -155,7 +155,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 			next_donated = subreq->next_donated = 0;
 		}
 		trace_netfs_sreq(subreq, netfs_sreq_trace_add_donations);
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 	}
 
 	avail = subreq->transferred;
@@ -184,18 +184,18 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	} else if (fpos < start) {
 		excess = fend - subreq->start;
 
-		spin_lock_bh(&rreq->lock);
+		spin_lock(&rreq->lock);
 		/* If we complete first on a folio split with the
 		 * preceding subreq, donate to that subreq - otherwise
 		 * we get the responsibility.
 		 */
 		if (subreq->prev_donated != prev_donated) {
-			spin_unlock_bh(&rreq->lock);
+			spin_unlock(&rreq->lock);
 			goto donation_changed;
 		}
 
 		if (list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
-			spin_unlock_bh(&rreq->lock);
+			spin_unlock(&rreq->lock);
 			pr_err("Can't donate prior to front\n");
 			goto bad;
 		}
@@ -211,7 +211,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 
 		if (subreq->consumed >= subreq->len)
 			goto remove_subreq_locked;
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 	} else {
 		pr_err("fpos > start\n");
 		goto bad;
@@ -239,11 +239,11 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 
 	/* Donate the remaining downloaded data to one of the neighbouring
 	 * subrequests.  Note that we may race with them doing the same thing.
 	 */
-	spin_lock_bh(&rreq->lock);
+	spin_lock(&rreq->lock);
 	if (subreq->prev_donated != prev_donated ||
 	    subreq->next_donated != next_donated) {
-		spin_unlock_bh(&rreq->lock);
+		spin_unlock(&rreq->lock);
 		cond_resched();
 		goto donation_changed;
 	}
@@ -293,11 +293,11 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq)
 	goto remove_subreq_locked;
 
 remove_subreq:
-	spin_lock_bh(&rreq->lock);
+	spin_lock(&rreq->lock);
remove_subreq_locked:
 	subreq->consumed = subreq->len;
 	list_del(&subreq->rreq_link);
-	spin_unlock_bh(&rreq->lock);
+	spin_unlock(&rreq->lock);
 	netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_consumed);
 	return true;
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index d1986cec3db7..264f3cb6a7dc 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -139,12 +139,12 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 			__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 			__set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
 
-			spin_lock_bh(&rreq->lock);
+			spin_lock(&rreq->lock);
 			list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 			subreq->prev_donated += rreq->prev_donated;
 			rreq->prev_donated = 0;
 			trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-			spin_unlock_bh(&rreq->lock);
+			spin_unlock(&rreq->lock);
 
 			BUG_ON(!len);
@@ -215,9 +215,9 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
 		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
 		__clear_bit(NETFS_SREQ_RETRYING, &subreq->flags);
 	}
 
-	spin_lock_bh(&rreq->lock);
+	spin_lock(&rreq->lock);
 	list_splice_tail_init(&queue, &rreq->subrequests);
-	spin_unlock_bh(&rreq->lock);
+	spin_unlock(&rreq->lock);
 }
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 85e8e94da90a..d291b31dd074 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -238,14 +238,14 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
 
 		cancel:
 			/* Remove if completely consumed. */
-			spin_lock_bh(&wreq->lock);
+			spin_lock(&wreq->lock);
 
 			remove = front;
 			list_del_init(&front->rreq_link);
 			front = list_first_entry_or_null(&stream->subrequests,
 							 struct netfs_io_subrequest, rreq_link);
 			stream->front = front;
-			spin_unlock_bh(&wreq->lock);
+			spin_unlock(&wreq->lock);
 			netfs_put_subrequest(remove, false,
 					     notes & SAW_FAILURE ?
					     netfs_sreq_trace_put_cancel :
					     netfs_sreq_trace_put_done);
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index c186221b45c0..10b5300b9448 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -203,7 +203,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq,
 	 * the list.  The collector only goes nextwards and uses the lock to
 	 * remove entries off of the front.
 	 */
-	spin_lock_bh(&wreq->lock);
+	spin_lock(&wreq->lock);
 	list_add_tail(&subreq->rreq_link, &stream->subrequests);
 	if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
 		stream->front = subreq;
@@ -214,7 +214,7 @@ static void netfs_prepare_write(struct netfs_io_request *wreq,
 		}
 	}
 
-	spin_unlock_bh(&wreq->lock);
+	spin_unlock(&wreq->lock);
 
 	stream->construct = subreq;
 }