From patchwork Mon Aug 24 14:55:07 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733363
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", "Darrick J. Wong",
	linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org
Subject: [PATCH 6/9] iomap: Convert read_count to byte count
Date: Mon, 24 Aug 2020 15:55:07 +0100
Message-Id: <20200824145511.10500-7-willy@infradead.org>
In-Reply-To: <20200824145511.10500-1-willy@infradead.org>
References: <20200824145511.10500-1-willy@infradead.org>
List-ID: <linux-fsdevel.vger.kernel.org>

Instead of counting bio segments, count the number of bytes submitted.
This insulates us from the block layer's definition of what a 'same page'
is, which is not necessarily clear once THPs are involved.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 844e95cacea8..c9b44f86d166 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -161,13 +161,6 @@ iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 		SetPageUptodate(page);
 }
 
-static void
-iomap_read_finish(struct iomap_page *iop, struct page *page)
-{
-	if (!iop || atomic_dec_and_test(&iop->read_count))
-		unlock_page(page);
-}
-
 static void
 iomap_read_page_end_io(struct bio_vec *bvec, int error)
 {
@@ -181,7 +174,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error)
 		iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
 	}
 
-	iomap_read_finish(iop, page);
+	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_count))
+		unlock_page(page);
 }
 
 static void
@@ -269,20 +263,17 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	if (ctx->bio && bio_end_sector(ctx->bio) == sector)
 		is_contig = true;
 
-	if (is_contig &&
-	    __bio_try_merge_page(ctx->bio, page, plen, poff, &same_page)) {
-		if (!same_page && iop)
-			atomic_inc(&iop->read_count);
-		goto done;
-	}
-
 	/*
-	 * If we start a new segment we need to increase the read count, and we
-	 * need to do so before submitting any previous full bio to make sure
-	 * that we don't prematurely unlock the page.
+	 * We need to increase the read count before submitting any
+	 * previous bio to make sure that we don't prematurely unlock
+	 * the page.
 	 */
 	if (iop)
-		atomic_inc(&iop->read_count);
+		atomic_add(plen, &iop->read_count);
+
+	if (is_contig &&
+	    __bio_try_merge_page(ctx->bio, page, plen, poff, &same_page))
+		goto done;
 
 	if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) {
 		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
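
For illustration only, not part of the patch: below is a minimal userspace
sketch of the accounting pattern the diff moves to, written with C11 atomics
and hypothetical names (read_state, read_submit, read_complete) standing in
for iomap_page->read_count and the kernel's atomic_add()/atomic_sub_and_test()
helpers.  Bytes are accounted before each chunk is submitted and subtracted as
completions arrive; the page is unlocked only when the count reaches zero,
regardless of how the block layer chose to merge segments.

/*
 * Userspace model of byte-based read accounting (illustrative only).
 * read_state, read_submit and read_complete are made-up names; the real
 * code operates on struct iomap_page's read_count.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct read_state {
	atomic_uint bytes_pending;	/* bytes submitted but not yet completed */
};

/* Account for plen bytes *before* the I/O covering them is submitted. */
static void read_submit(struct read_state *rs, unsigned int plen)
{
	atomic_fetch_add(&rs->bytes_pending, plen);
}

/*
 * Completion path: subtract the completed bytes and report whether this
 * was the last outstanding chunk, i.e. whether the page may be unlocked.
 */
static bool read_complete(struct read_state *rs, unsigned int plen)
{
	return atomic_fetch_sub(&rs->bytes_pending, plen) == plen;
}

int main(void)
{
	struct read_state rs;

	atomic_init(&rs.bytes_pending, 0);

	/* Two chunks of the same page are submitted up front... */
	read_submit(&rs, 1024);
	read_submit(&rs, 3072);

	/* ...and may complete in any order; only the last completion unlocks. */
	printf("after 3072-byte completion: unlock=%d\n", read_complete(&rs, 3072));
	printf("after 1024-byte completion: unlock=%d\n", read_complete(&rs, 1024));
	return 0;
}

Because the counter tracks bytes rather than bio segments, the accounting no
longer depends on whether __bio_try_merge_page() merged into the same page,
which is why the merge path in the diff no longer needs the same_page
special case.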