From patchwork Wed May 23 14:43:38 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10421571
From: Christoph Hellwig <hch@lst.de>
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 15/34] iomap: add an iomap-based readpage and readpages implementation
Date: Wed, 23 May 2018 16:43:38 +0200
Message-Id: <20180523144357.18985-16-hch@lst.de>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180523144357.18985-1-hch@lst.de>
References: <20180523144357.18985-1-hch@lst.de>
Simply use iomap_apply to iterate over the file and submit a bio for each
non-uptodate but mapped region, zeroing everything else.  Note that as-is
this cannot be used for file systems with a block size smaller than the
page size, but that support will be added later.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/iomap.c            | 194 +++++++++++++++++++++++++++++++++++++++++-
 include/linux/iomap.h |   4 +
 2 files changed, 197 insertions(+), 1 deletion(-)

diff --git a/fs/iomap.c b/fs/iomap.c
index fa278ed338ce..78259a2249f4 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2010 Red Hat, Inc.
- * Copyright (c) 2016 Christoph Hellwig.
+ * Copyright (c) 2016-2018 Christoph Hellwig.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -103,6 +104,197 @@ iomap_sector(struct iomap *iomap, loff_t pos)
         return (iomap->addr + pos - iomap->offset) >> SECTOR_SHIFT;
 }
 
+static void
+iomap_read_end_io(struct bio *bio)
+{
+        int error = blk_status_to_errno(bio->bi_status);
+        struct bio_vec *bvec;
+        int i;
+
+        bio_for_each_segment_all(bvec, bio, i)
+                page_endio(bvec->bv_page, false, error);
+        bio_put(bio);
+}
+
+static struct bio *
+iomap_read_bio_alloc(struct iomap *iomap, sector_t sector, loff_t length)
+{
+        int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT;
+        struct bio *bio = bio_alloc(GFP_NOFS, min(BIO_MAX_PAGES, nr_vecs));
+
+        bio->bi_opf = REQ_OP_READ;
+        bio->bi_iter.bi_sector = sector;
+        bio_set_dev(bio, iomap->bdev);
+        bio->bi_end_io = iomap_read_end_io;
+        return bio;
+}
+
+struct iomap_readpage_ctx {
+        struct page *cur_page;
+        bool cur_page_in_bio;
+        struct bio *bio;
+        struct list_head *pages;
+};
+
+static loff_t
+iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
+                struct iomap *iomap)
+{
+        struct iomap_readpage_ctx *ctx = data;
+        struct page *page = ctx->cur_page;
+        unsigned poff = pos & (PAGE_SIZE - 1);
+        unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
+        bool is_contig = false;
+        sector_t sector;
+
+        /* we don't support blocksize < PAGE_SIZE quite yet: */
+        WARN_ON_ONCE(pos != page_offset(page));
+        WARN_ON_ONCE(plen != PAGE_SIZE);
+
+        if (iomap->type != IOMAP_MAPPED || pos >= i_size_read(inode)) {
+                zero_user(page, poff, plen);
+                SetPageUptodate(page);
+                goto done;
+        }
+
+        ctx->cur_page_in_bio = true;
+
+        /*
+         * Try to merge into a previous segment if we can.
+         */
+        sector = iomap_sector(iomap, pos);
+        if (ctx->bio && bio_end_sector(ctx->bio) == sector) {
+                if (__bio_try_merge_page(ctx->bio, page, plen, poff))
+                        goto done;
+                is_contig = true;
+        }
+
+        if (!ctx->bio || !is_contig || bio_full(ctx->bio)) {
+                if (ctx->bio)
+                        submit_bio(ctx->bio);
+                ctx->bio = iomap_read_bio_alloc(iomap, sector, length);
+        }
+
+        __bio_add_page(ctx->bio, page, plen, poff);
+done:
+        return plen;
+}
+
+int
+iomap_readpage(struct page *page, const struct iomap_ops *ops)
+{
+        struct iomap_readpage_ctx ctx = { .cur_page = page };
+        struct inode *inode = page->mapping->host;
+        unsigned poff;
+        loff_t ret;
+
+        WARN_ON_ONCE(page_has_buffers(page));
+
+        for (poff = 0; poff < PAGE_SIZE; poff += ret) {
+                ret = iomap_apply(inode, page_offset(page) + poff,
+                                PAGE_SIZE - poff, 0, ops, &ctx,
+                                iomap_readpage_actor);
+                if (ret <= 0) {
+                        WARN_ON_ONCE(ret == 0);
+                        SetPageError(page);
+                        break;
+                }
+        }
+
+        if (ctx.bio) {
+                submit_bio(ctx.bio);
+                WARN_ON_ONCE(!ctx.cur_page_in_bio);
+        } else {
+                WARN_ON_ONCE(ctx.cur_page_in_bio);
+                unlock_page(page);
+        }
+        return 0;
+}
+EXPORT_SYMBOL_GPL(iomap_readpage);
+
+static struct page *
+iomap_next_page(struct inode *inode, struct list_head *pages, loff_t pos,
+                loff_t length, loff_t *done)
+{
+        while (!list_empty(pages)) {
+                struct page *page = lru_to_page(pages);
+
+                if (page_offset(page) >= (u64)pos + length)
+                        break;
+
+                list_del(&page->lru);
+                if (!add_to_page_cache_lru(page, inode->i_mapping, page->index,
+                                GFP_NOFS))
+                        return page;
+
+                *done += PAGE_SIZE;
+                put_page(page);
+        }
+
+        return NULL;
+}
+
+static loff_t
+iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
+                void *data, struct iomap *iomap)
+{
+        struct iomap_readpage_ctx *ctx = data;
+        loff_t done, ret;
+
+        for (done = 0; done < length; done += ret) {
+                if (ctx->cur_page && ((pos + done) & (PAGE_SIZE - 1)) == 0) {
+                        if (!ctx->cur_page_in_bio)
+                                unlock_page(ctx->cur_page);
+                        put_page(ctx->cur_page);
+                        ctx->cur_page = NULL;
+                }
+                if (!ctx->cur_page) {
+                        ctx->cur_page = iomap_next_page(inode, ctx->pages,
+                                        pos, length, &done);
+                        if (!ctx->cur_page)
+                                break;
+                        ctx->cur_page_in_bio = false;
+                }
+                ret = iomap_readpage_actor(inode, pos + done, length - done,
+                                ctx, iomap);
+        }
+
+        return done;
+}
+
+int
+iomap_readpages(struct address_space *mapping, struct list_head *pages,
+                unsigned nr_pages, const struct iomap_ops *ops)
+{
+        struct iomap_readpage_ctx ctx = { .pages = pages };
+        loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
+        loff_t last = page_offset(list_entry(pages->next, struct page, lru));
+        loff_t length = last - pos + PAGE_SIZE, ret = 0;
+
+        while (length > 0) {
+                ret = iomap_apply(mapping->host, pos, length, 0, ops,
+                                &ctx, iomap_readpages_actor);
+                if (ret <= 0) {
+                        WARN_ON_ONCE(ret == 0);
+                        goto done;
+                }
+                pos += ret;
+                length -= ret;
+        }
+        ret = 0;
+done:
+        if (ctx.bio)
+                submit_bio(ctx.bio);
+        if (ctx.cur_page) {
+                if (!ctx.cur_page_in_bio)
+                        unlock_page(ctx.cur_page);
+                put_page(ctx.cur_page);
+        }
+        WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
+        return ret;
+}
+EXPORT_SYMBOL_GPL(iomap_readpages);
+
 static void
 iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
 {
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index a044a824da85..7300d30ca495 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -9,6 +9,7 @@ struct fiemap_extent_info;
 struct inode;
 struct iov_iter;
 struct kiocb;
+struct page;
 struct vm_area_struct;
 struct vm_fault;
 
@@ -88,6 +89,9 @@ struct iomap_ops {
 
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
                const struct iomap_ops *ops);
+int iomap_readpage(struct page *page, const struct iomap_ops *ops);
+int iomap_readpages(struct address_space *mapping, struct list_head *pages,
+                unsigned nr_pages, const struct iomap_ops *ops);
 int iomap_file_dirty(struct inode *inode, loff_t pos, loff_t len,
                const struct iomap_ops *ops);
 int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
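
[Editor's note, not part of the patch]
For readers unfamiliar with the iomap interface, the sketch below shows how a
filesystem could wire the two new helpers into its address_space_operations,
roughly what the xfs conversion later in this series is expected to do.  All
"example_*" names are hypothetical; the only interfaces taken from the patch
are iomap_readpage() and iomap_readpages(), and the ->readpage/->readpages
callback signatures assumed are those of this kernel generation.

/*
 * Hypothetical usage sketch -- not part of this patch.  The filesystem is
 * assumed to provide example_iomap_ops, i.e. an ->iomap_begin()/->iomap_end()
 * pair that maps file offsets to extents.
 */
#include <linux/fs.h>
#include <linux/iomap.h>
#include <linux/pagemap.h>

extern const struct iomap_ops example_iomap_ops;  /* fs-specific extent mapping */

/* ->readpage: read (or zero) a single page cache page. */
static int example_readpage(struct file *unused, struct page *page)
{
        return iomap_readpage(page, &example_iomap_ops);
}

/* ->readpages: readahead path, batching contiguous pages into large bios. */
static int example_readpages(struct file *unused, struct address_space *mapping,
                struct list_head *pages, unsigned nr_pages)
{
        return iomap_readpages(mapping, pages, nr_pages, &example_iomap_ops);
}

const struct address_space_operations example_aops = {
        .readpage       = example_readpage,
        .readpages      = example_readpages,
        /* .writepage, .writepages, etc. omitted */
};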