From patchwork Mon Aug 24 14:55:07 2020
From: "Matthew Wilcox (Oracle)"
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", "Darrick J. Wong", linux-nvdimm@lists.01.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 6/9] iomap: Convert read_count to byte count
Date: Mon, 24 Aug 2020 15:55:07 +0100
Message-Id: <20200824145511.10500-7-willy@infradead.org>
In-Reply-To: <20200824145511.10500-1-willy@infradead.org>
References: <20200824145511.10500-1-willy@infradead.org>

Instead of counting bio segments, count the number of bytes submitted.
This insulates us from the block layer's definition of what a 'same page'
is, which is not necessarily clear once THPs are involved.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 844e95cacea8..c9b44f86d166 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -161,13 +161,6 @@ iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 		SetPageUptodate(page);
 }
 
-static void
-iomap_read_finish(struct iomap_page *iop, struct page *page)
-{
-	if (!iop || atomic_dec_and_test(&iop->read_count))
-		unlock_page(page);
-}
-
 static void
 iomap_read_page_end_io(struct bio_vec *bvec, int error)
 {
@@ -181,7 +174,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error)
 		iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
 	}
 
-	iomap_read_finish(iop, page);
+	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_count))
+		unlock_page(page);
 }
 
 static void
@@ -269,20 +263,17 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	if (ctx->bio && bio_end_sector(ctx->bio) == sector)
 		is_contig = true;
 
-	if (is_contig &&
-	    __bio_try_merge_page(ctx->bio, page, plen, poff, &same_page)) {
-		if (!same_page && iop)
-			atomic_inc(&iop->read_count);
-		goto done;
-	}
-
 	/*
-	 * If we start a new segment we need to increase the read count, and we
-	 * need to do so before submitting any previous full bio to make sure
-	 * that we don't prematurely unlock the page.
+	 * We need to increase the read count before submitting any
+	 * previous bio to make sure that we don't prematurely unlock
+	 * the page.
 	 */
 	if (iop)
-		atomic_inc(&iop->read_count);
+		atomic_add(plen, &iop->read_count);
+
+	if (is_contig &&
+	    __bio_try_merge_page(ctx->bio, page, plen, poff, &same_page))
+		goto done;
 
 	if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) {
 		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
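
For readers unfamiliar with the scheme, here is a minimal userspace sketch
(not part of the patch; demo_page, demo_submit and demo_complete are made-up
names) of why counting bytes rather than segments works: the reader adds the
length of each range before any bio can complete, and each completion
subtracts exactly the bytes it covered, so the page unlocks only once
everything submitted has finished, however the block layer merged or split
the segments.

/*
 * Userspace model of the byte-granularity read count.  Build with
 * `cc -std=c11 demo.c`; this is only an illustration of the accounting,
 * not kernel code.
 */
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for the per-page state. */
struct demo_page {
	atomic_uint read_bytes;		/* bytes of outstanding read I/O */
	int locked;
};

/* Account for a sub-page range before its bio is submitted. */
static void demo_submit(struct demo_page *p, unsigned int plen)
{
	atomic_fetch_add(&p->read_bytes, plen);
}

/* Completion handler: subtract the finished range, unlock on zero. */
static void demo_complete(struct demo_page *p, unsigned int len)
{
	if (atomic_fetch_sub(&p->read_bytes, len) == len) {
		p->locked = 0;
		printf("page unlocked\n");
	}
}

int main(void)
{
	struct demo_page p = { .locked = 1 };

	atomic_init(&p.read_bytes, 0);

	/* Two sub-page ranges; they may complete in any order or be
	 * split/merged by the block layer - only the byte total matters. */
	demo_submit(&p, 2048);
	demo_submit(&p, 2048);
	demo_complete(&p, 2048);	/* 2048 bytes still outstanding */
	demo_complete(&p, 2048);	/* count hits zero -> unlock */
	return 0;
}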