From patchwork Wed Jun 10 20:13:19 2020
From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v6 25/51] iomap: Support THPs in read paths
Date: Wed, 10 Jun 2020 13:13:19 -0700
Message-Id: <20200610201345.13273-26-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200610201345.13273-1-willy@infradead.org>
References: <20200610201345.13273-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use thp_size() instead of PAGE_SIZE, offset_in_thp() instead of
offset_in_page(), and bio_for_each_thp_segment_all() instead of
bio_for_each_segment_all().
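
For reference, a rough sketch of the THP helpers this patch leans on
(definitions paraphrased from <linux/huge_mm.h> and <linux/mm.h> as proposed
earlier in this series; they are shown here for illustration only and are not
part of this patch). On an order-0 page they reduce to the familiar
PAGE_SIZE / PAGE_SHIFT / offset_in_page() behaviour:

/* Paraphrased for illustration only. */
static inline unsigned long thp_size(struct page *page)
{
	/* PAGE_SIZE for an order-0 page, the full compound size for a THP. */
	return PAGE_SIZE << thp_order(page);
}

static inline unsigned int page_shift(struct page *page)
{
	/* PAGE_SHIFT plus the compound order, e.g. 21 for a 2MB THP on x86. */
	return PAGE_SHIFT + compound_order(page);
}

/* Byte offset of a position within the (possibly compound) page. */
#define offset_in_thp(page, p)	((unsigned long)(p) & (thp_size(page) - 1))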
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index b1a2ab2c7a8d..555ec90e8b54 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -192,7 +192,7 @@ iomap_read_end_io(struct bio *bio)
 	struct bio_vec *bvec;
 	struct bvec_iter_all iter_all;
 
-	bio_for_each_segment_all(bvec, bio, iter_all)
+	bio_for_each_thp_segment_all(bvec, bio, iter_all)
 		iomap_read_page_end_io(bvec, error);
 	bio_put(bio);
 }
@@ -232,6 +232,16 @@ static inline bool iomap_block_needs_zeroing(struct inode *inode,
 		pos >= i_size_read(inode);
 }
 
+/*
+ * Estimate the number of vectors we need based on the current page size;
+ * if we're wrong we'll end up doing an overly large allocation or needing
+ * to do a second allocation, neither of which is a big deal.
+ */
+static unsigned int iomap_nr_vecs(struct page *page, loff_t length)
+{
+	return (length + thp_size(page) - 1) >> page_shift(page);
+}
+
 static loff_t
 iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap, struct iomap *srcmap)
@@ -288,7 +298,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) {
 		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
 		gfp_t orig_gfp = gfp;
-		int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT;
+		int nr_vecs = iomap_nr_vecs(page, length);
 
 		if (ctx->bio)
 			submit_bio(ctx->bio);
@@ -332,9 +342,9 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 
 	trace_iomap_readpage(page->mapping->host, 1);
 
-	for (poff = 0; poff < PAGE_SIZE; poff += ret) {
+	for (poff = 0; poff < thp_size(page); poff += ret) {
 		ret = iomap_apply(inode, page_offset(page) + poff,
-				PAGE_SIZE - poff, 0, ops, &ctx,
+				thp_size(page) - poff, 0, ops, &ctx,
 				iomap_readpage_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
@@ -368,7 +378,8 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 	loff_t done, ret;
 
 	for (done = 0; done < length; done += ret) {
-		if (ctx->cur_page && offset_in_page(pos + done) == 0) {
+		if (ctx->cur_page &&
+		    offset_in_thp(ctx->cur_page, pos + done) == 0) {
 			if (!ctx->cur_page_in_bio)
 				unlock_page(ctx->cur_page);
 			put_page(ctx->cur_page);
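
As a quick sanity check of the iomap_nr_vecs() rounding above, here is a
standalone userspace sketch (assuming a 4KB base page and a 2MB PMD-sized
THP; the sizes, macros, and the nr_vecs() helper below are illustrative, not
kernel code). A 2MB read against a THP now asks for a single bio vector
where the old PAGE_SIZE-based estimate asked for 512:

#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4KB base pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define THP_ORDER	9			/* assume a 2MB (PMD-sized) THP */
#define THP_SIZE	(PAGE_SIZE << THP_ORDER)
#define THP_SHIFT	(PAGE_SHIFT + THP_ORDER)

/* Mirrors the round-up in iomap_nr_vecs(): one vector per page-sized chunk. */
static unsigned int nr_vecs(unsigned long length, unsigned long size,
			    unsigned int shift)
{
	return (length + size - 1) >> shift;
}

int main(void)
{
	unsigned long length = 2UL * 1024 * 1024;	/* a 2MB read */

	printf("old PAGE_SIZE estimate: %u vecs\n",
	       nr_vecs(length, PAGE_SIZE, PAGE_SHIFT));	/* prints 512 */
	printf("new thp_size estimate:  %u vecs\n",
	       nr_vecs(length, THP_SIZE, THP_SHIFT));	/* prints 1 */
	return 0;
}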