From patchwork Tue Jun 11 15:10:03 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10987531
From: Christoph Hellwig
To: Jens Axboe, Ming Lei
Cc: David Gibson, "Darrick J. Wong", linux-block@vger.kernel.org,
	linux-xfs@vger.kernel.org
Subject: [PATCH 1/5] block: fix gap checking in __bio_add_pc_page
Date: Tue, 11 Jun 2019 17:10:03 +0200
Message-Id: <20190611151007.13625-2-hch@lst.de>
In-Reply-To: <20190611151007.13625-1-hch@lst.de>
References: <20190611151007.13625-1-hch@lst.de>

If we can add more data into an existing segment we by definition do not
create a gap, so move the check for a gap after the attempt to merge into
the segment.
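
To illustrate the reordering, here is a small standalone sketch; struct
vec, add_range, try_extend and the boundary handling below are simplified
stand-ins for struct bio_vec, __bio_add_pc_page and bvec_gap_to_prev, not
the kernel code itself:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for a bio_vec: one physically contiguous range. */
struct vec {
	unsigned long	phys;	/* "physical" address of the range */
	unsigned int	len;	/* bytes currently covered */
};

/* Model of bvec_gap_to_prev(): with a boundary mask set, data that starts
 * a new range must begin on the boundary, and the previous range must end
 * on one. */
static bool gap_to_prev(const struct vec *prev, unsigned long new_phys,
			unsigned long boundary_mask)
{
	return ((prev->phys + prev->len) | new_phys) & boundary_mask;
}

/* Model of the merge attempt: physically contiguous data simply extends
 * the previous range. */
static bool try_extend(struct vec *prev, unsigned long new_phys,
		       unsigned int new_len)
{
	if (new_phys != prev->phys + prev->len)
		return false;
	prev->len += new_len;
	return true;
}

/* The reordered logic: extending an existing range can never introduce a
 * gap, so the gap check only matters when a new range would be started. */
static bool add_range(struct vec *prev, unsigned long new_phys,
		      unsigned int new_len, unsigned long boundary_mask)
{
	if (try_extend(prev, new_phys, new_len))
		return true;
	if (gap_to_prev(prev, new_phys, boundary_mask))
		return false;
	/* ...a real implementation would start a new range here... */
	return true;
}

int main(void)
{
	struct vec prev = { .phys = 0x1000, .len = 512 };

	/* Contiguous with the previous range: merged even though 0x1200 is
	 * not aligned to the 0xfff boundary mask. */
	printf("merge:   %d\n", add_range(&prev, 0x1200, 512, 0xfff));
	/* Neither contiguous nor aligned: rejected by the gap check. */
	printf("new seg: %d\n", add_range(&prev, 0x3200, 512, 0xfff));
	return 0;
}
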
Signed-off-by: Christoph Hellwig
---
 block/bio.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 683cbb40f051..6db39699aab9 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -722,18 +722,18 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 			goto done;
 		}
 
+		if (page_is_mergeable(bvec, page, len, offset, false) &&
+		    can_add_page_to_seg(q, bvec, page, len, offset)) {
+			bvec->bv_len += len;
+			goto done;
+		}
+
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
 		 */
 		if (bvec_gap_to_prev(q, bvec, offset))
			return 0;
-
-		if (page_is_mergeable(bvec, page, len, offset, false) &&
-		    can_add_page_to_seg(q, bvec, page, len, offset)) {
-			bvec->bv_len += len;
-			goto done;
-		}
 	}
 
 	if (bio_full(bio))
Wong" , linux-block@vger.kernel.org, linux-xfs@vger.kernel.org Subject: [PATCH 2/5] block: factor out a bio_try_merge_pc_page helper Date: Tue, 11 Jun 2019 17:10:04 +0200 Message-Id: <20190611151007.13625-3-hch@lst.de> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190611151007.13625-1-hch@lst.de> References: <20190611151007.13625-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Factor the case of trying to add to an existing segment in __bio_add_pc_page into a new helper that is similar to the regular bio case. Subsume the existing can_add_page_to_seg helper into this new one. Signed-off-by: Christoph Hellwig --- block/bio.c | 44 ++++++++++++++++++++------------------------ 1 file changed, 20 insertions(+), 24 deletions(-) diff --git a/block/bio.c b/block/bio.c index 6db39699aab9..85e243ea6a0e 100644 --- a/block/bio.c +++ b/block/bio.c @@ -659,24 +659,27 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, return true; } -/* - * Check if the @page can be added to the current segment(@bv), and make - * sure to call it only if page_is_mergeable(@bv, @page) is true - */ -static bool can_add_page_to_seg(struct request_queue *q, - struct bio_vec *bv, struct page *page, unsigned len, - unsigned offset) +static bool bio_try_merge_pc_page(struct request_queue *q, struct bio *bio, + struct page *page, unsigned len, unsigned off, bool *same_page) { + struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1]; unsigned long mask = queue_segment_boundary(q); phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset; - phys_addr_t addr2 = page_to_phys(page) + offset + len - 1; + phys_addr_t addr2 = page_to_phys(page) + off + len - 1; + + if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) { + *same_page = true; + goto done; + } if ((addr1 | mask) != (addr2 | mask)) return false; - if (bv->bv_len + len > queue_max_segment_size(q)) return false; - + if (!page_is_mergeable(bv, page, len, off, false)) + return false; +done: + bv->bv_len += len; return true; } @@ -701,6 +704,7 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio, bool put_same_page) { struct bio_vec *bvec; + bool same_page = false; /* * cloned bio must not modify vec list @@ -712,26 +716,18 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio, return 0; if (bio->bi_vcnt > 0) { - bvec = &bio->bi_io_vec[bio->bi_vcnt - 1]; - - if (page == bvec->bv_page && - offset == bvec->bv_offset + bvec->bv_len) { - if (put_same_page) + if (bio_try_merge_pc_page(q, bio, page, len, offset, + &same_page)) { + if (put_same_page && same_page) put_page(page); - bvec->bv_len += len; - goto done; - } - - if (page_is_mergeable(bvec, page, len, offset, false) && - can_add_page_to_seg(q, bvec, page, len, offset)) { - bvec->bv_len += len; goto done; } /* - * If the queue doesn't support SG gaps and adding this - * offset would create a gap, disallow it. + * If the queue doesn't support SG gaps and adding this offset + * would create a gap, disallow it. 
Signed-off-by: Christoph Hellwig
---
 block/bio.c | 44 ++++++++++++++++++++------------------------
 1 file changed, 20 insertions(+), 24 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 6db39699aab9..85e243ea6a0e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -659,24 +659,27 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 	return true;
 }
 
-/*
- * Check if the @page can be added to the current segment(@bv), and make
- * sure to call it only if page_is_mergeable(@bv, @page) is true
- */
-static bool can_add_page_to_seg(struct request_queue *q,
-		struct bio_vec *bv, struct page *page, unsigned len,
-		unsigned offset)
+static bool bio_try_merge_pc_page(struct request_queue *q, struct bio *bio,
+		struct page *page, unsigned len, unsigned off, bool *same_page)
 {
+	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
 	unsigned long mask = queue_segment_boundary(q);
 	phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset;
-	phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
+	phys_addr_t addr2 = page_to_phys(page) + off + len - 1;
+
+	if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) {
+		*same_page = true;
+		goto done;
+	}
 
 	if ((addr1 | mask) != (addr2 | mask))
 		return false;
-
 	if (bv->bv_len + len > queue_max_segment_size(q))
 		return false;
-
+	if (!page_is_mergeable(bv, page, len, off, false))
+		return false;
+done:
+	bv->bv_len += len;
 	return true;
 }
 
@@ -701,6 +704,7 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 		bool put_same_page)
 {
 	struct bio_vec *bvec;
+	bool same_page = false;
 
 	/*
 	 * cloned bio must not modify vec list
@@ -712,26 +716,18 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 		return 0;
 
 	if (bio->bi_vcnt > 0) {
-		bvec = &bio->bi_io_vec[bio->bi_vcnt - 1];
-
-		if (page == bvec->bv_page &&
-		    offset == bvec->bv_offset + bvec->bv_len) {
-			if (put_same_page)
+		if (bio_try_merge_pc_page(q, bio, page, len, offset,
+				&same_page)) {
+			if (put_same_page && same_page)
 				put_page(page);
-			bvec->bv_len += len;
-			goto done;
-		}
-
-		if (page_is_mergeable(bvec, page, len, offset, false) &&
-		    can_add_page_to_seg(q, bvec, page, len, offset)) {
-			bvec->bv_len += len;
 			goto done;
 		}
 
 		/*
-		 * If the queue doesn't support SG gaps and adding this
-		 * offset would create a gap, disallow it.
+		 * If the queue doesn't support SG gaps and adding this offset
+		 * would create a gap, disallow it.
 		 */
+		bvec = &bio->bi_io_vec[bio->bi_vcnt - 1];
 		if (bvec_gap_to_prev(q, bvec, offset))
 			return 0;
 	}

From patchwork Tue Jun 11 15:10:05 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10987541
From: Christoph Hellwig
To: Jens Axboe, Ming Lei
Cc: David Gibson, "Darrick J. Wong", linux-block@vger.kernel.org,
	linux-xfs@vger.kernel.org
Subject: [PATCH 3/5] block: return from __bio_try_merge_page if merging occurred in the same page
Date: Tue, 11 Jun 2019 17:10:05 +0200
Message-Id: <20190611151007.13625-4-hch@lst.de>
In-Reply-To: <20190611151007.13625-1-hch@lst.de>
References: <20190611151007.13625-1-hch@lst.de>

We currently have an input same_page parameter to __bio_try_merge_page
that prohibits merging into the same page.  The rationale is that some
callers need to account for every page added to a bio.  Instead of
letting these callers call into the merge code twice to handle the new
vs existing page cases, turn the parameter into an output one that
reports whether the merge happened within the same page, and let the
callers act accordingly.
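
For illustration, the output-parameter style can be modelled in isolation
like this (a toy with a flat address space instead of struct page; struct
extent and try_merge are invented for the sketch):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL

/* A contiguous byte range in a flat "physical" address space. */
struct extent {
	unsigned long	start;
	unsigned long	len;
};

/*
 * The merge routine is called exactly once and reports through *same_page
 * whether the appended data starts in the page the extent already ends in,
 * so the caller can do its per-page accounting afterwards instead of
 * probing the merge code twice.
 */
static bool try_merge(struct extent *ext, unsigned long addr,
		      unsigned long len, bool *same_page)
{
	unsigned long end = ext->start + ext->len;

	if (addr != end)		/* not contiguous: no merge */
		return false;
	*same_page = (end - 1) / PAGE_SIZE == addr / PAGE_SIZE;
	ext->len += len;
	return true;
}

int main(void)
{
	struct extent ext = { .start = 0x1000, .len = 0xe00 };
	bool same_page = false;

	/* Appending inside the page the extent currently ends in. */
	if (try_merge(&ext, 0x1e00, 0x200, &same_page))
		printf("same_page=%d\n", same_page);	/* prints 1 */

	/* Appending data that starts in the following page. */
	if (try_merge(&ext, 0x2000, 0x200, &same_page))
		printf("same_page=%d\n", same_page);	/* prints 0 */
	return 0;
}
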
Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 block/bio.c         | 23 +++++++++--------------
 fs/iomap.c          | 12 ++++++++----
 fs/xfs/xfs_aops.c   | 11 ++++++++---
 include/linux/bio.h |  2 +-
 4 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 85e243ea6a0e..c34327aa9216 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -636,7 +636,7 @@ EXPORT_SYMBOL(bio_clone_fast);
 
 static inline bool page_is_mergeable(const struct bio_vec *bv,
 		struct page *page, unsigned int len, unsigned int off,
-		bool same_page)
+		bool *same_page)
 {
 	phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) +
 		bv->bv_offset + bv->bv_len - 1;
@@ -647,15 +647,9 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 	if (xen_domain() && !xen_biovec_phys_mergeable(bv, page))
 		return false;
 
-	if ((vec_end_addr & PAGE_MASK) != page_addr) {
-		if (same_page)
-			return false;
-		if (pfn_to_page(PFN_DOWN(vec_end_addr)) + 1 != page)
-			return false;
-	}
-
-	WARN_ON_ONCE(same_page && (len + off) > PAGE_SIZE);
-
+	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
+	if (!*same_page && pfn_to_page(PFN_DOWN(vec_end_addr)) + 1 != page)
+		return false;
 	return true;
 }
 
@@ -763,8 +757,7 @@ EXPORT_SYMBOL(bio_add_pc_page);
  * @page: start page to add
  * @len: length of the data to add
  * @off: offset of the data relative to @page
- * @same_page: if %true only merge if the new data is in the same physical
- *		page as the last segment of the bio.
+ * @same_page: return if the segment has been merged inside the same page
  *
  * Try to add the data at @page + @off to the last bvec of @bio.  This is a
  * a useful optimisation for file systems with a block size smaller than the
@@ -775,7 +768,7 @@ EXPORT_SYMBOL(bio_add_pc_page);
  * Return %true on success or %false on failure.
  */
 bool __bio_try_merge_page(struct bio *bio, struct page *page,
-		unsigned int len, unsigned int off, bool same_page)
+		unsigned int len, unsigned int off, bool *same_page)
 {
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return false;
@@ -833,7 +826,9 @@ EXPORT_SYMBOL_GPL(__bio_add_page);
 int bio_add_page(struct bio *bio, struct page *page,
 		 unsigned int len, unsigned int offset)
 {
-	if (!__bio_try_merge_page(bio, page, len, offset, false)) {
+	bool same_page = false;
+
+	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
 		if (bio_full(bio))
 			return 0;
 		__bio_add_page(bio, page, len, offset);
diff --git a/fs/iomap.c b/fs/iomap.c
index 23ef63fd1669..12654c2e78f8 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -287,7 +287,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	struct iomap_readpage_ctx *ctx = data;
 	struct page *page = ctx->cur_page;
 	struct iomap_page *iop = iomap_page_create(inode, page);
-	bool is_contig = false;
+	bool same_page = false, is_contig = false;
 	loff_t orig_pos = pos;
 	unsigned poff, plen;
 	sector_t sector;
@@ -315,10 +315,14 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	 * Try to merge into a previous segment if we can.
 	 */
 	sector = iomap_sector(iomap, pos);
-	if (ctx->bio && bio_end_sector(ctx->bio) == sector) {
-		if (__bio_try_merge_page(ctx->bio, page, plen, poff, true))
-			goto done;
+	if (ctx->bio && bio_end_sector(ctx->bio) == sector)
 		is_contig = true;
+
+	if (is_contig &&
+	    __bio_try_merge_page(ctx->bio, page, plen, poff, &same_page)) {
+		if (!same_page && iop)
+			atomic_inc(&iop->read_count);
+		goto done;
 	}
 
 	/*
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index a6f0f4761a37..8da5e6637771 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -758,6 +758,7 @@ xfs_add_to_ioend(
 	struct block_device	*bdev = xfs_find_bdev_for_inode(inode);
 	unsigned		len = i_blocksize(inode);
 	unsigned		poff = offset & (PAGE_SIZE - 1);
+	bool			merged, same_page = false;
 	sector_t		sector;
 
 	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
@@ -774,9 +775,13 @@ xfs_add_to_ioend(
 				wpc->imap.br_state, offset, bdev, sector);
 	}
 
-	if (!__bio_try_merge_page(wpc->ioend->io_bio, page, len, poff, true)) {
-		if (iop)
-			atomic_inc(&iop->write_count);
+	merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
+			&same_page);
+
+	if (iop && !same_page)
+		atomic_inc(&iop->write_count);
+
+	if (!merged) {
 		if (bio_full(wpc->ioend->io_bio))
 			xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
 		bio_add_page(wpc->ioend->io_bio, page, len, poff);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 0f23b5682640..f87abaa898f0 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -423,7 +423,7 @@ extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
 extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
 			   unsigned int, unsigned int);
 bool __bio_try_merge_page(struct bio *bio, struct page *page,
-		unsigned int len, unsigned int off, bool same_page);
+		unsigned int len, unsigned int off, bool *same_page);
 void __bio_add_page(struct bio *bio, struct page *page,
 		unsigned int len, unsigned int off);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
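
As a note on the callers converted above, the accounting pattern they now
follow looks roughly like this stubbed sketch (struct page_state,
stub_try_merge and add_range are invented here and are not the iomap/XFS
code):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the per-page state (iomap_page and its write_count). */
struct page_state {
	int	write_count;	/* outstanding sub-page writeback ranges */
};

/* Stubbed merge: "contiguous" decides whether the merge succeeds, and
 * "within_last_page" whether it stayed inside an already covered page. */
static bool stub_try_merge(bool contiguous, bool within_last_page,
			   bool *same_page)
{
	*same_page = contiguous && within_last_page;
	return contiguous;
}

static void stub_add_page(void)
{
	puts("started a new bio_vec for this range");
}

/*
 * Caller pattern after this patch: attempt the merge once, bump the
 * per-page counter only when the data did not stay in a page that was
 * already accounted for, and add a fresh vector only if merging failed.
 */
static void add_range(struct page_state *ps, bool contiguous,
		      bool within_last_page)
{
	bool same_page = false;
	bool merged = stub_try_merge(contiguous, within_last_page, &same_page);

	if (ps && !same_page)
		ps->write_count++;
	if (!merged)
		stub_add_page();
}

int main(void)
{
	struct page_state ps = { .write_count = 0 };

	add_range(&ps, true, true);	/* merged within the same page */
	add_range(&ps, true, false);	/* merged, but into a new page */
	add_range(&ps, false, false);	/* could not merge at all */
	printf("write_count = %d\n", ps.write_count);	/* prints 2 */
	return 0;
}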

From patchwork Tue Jun 11 15:10:06 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10987545
From: Christoph Hellwig
To: Jens Axboe, Ming Lei
Cc: David Gibson, "Darrick J. Wong", linux-block@vger.kernel.org,
	linux-xfs@vger.kernel.org
Subject: [PATCH 4/5] block: fix page leak when merging to same page
Date: Tue, 11 Jun 2019 17:10:06 +0200
Message-Id: <20190611151007.13625-5-hch@lst.de>
In-Reply-To: <20190611151007.13625-1-hch@lst.de>
References: <20190611151007.13625-1-hch@lst.de>

When multiple iovecs reference the same page, each get_user_page call
adds a reference to the page.  But once we have created the bio that
information gets lost and only a single reference is dropped after I/O
completion.

Use the same_page information returned from __bio_try_merge_page to drop
the additional references to pages that were already present in the bio.

Based on a patch from Ming Lei.
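
The reference-count problem and the fix can be sketched in a few lines of
standalone C (struct fake_page and the local get_page/put_page helpers
below only model the real page refcounting, they are not the kernel
functions):

#include <stdbool.h>
#include <stdio.h>

/* A page with a reference count, standing in for struct page. */
struct fake_page {
	int	refcount;
};

static void get_page(struct fake_page *p) { p->refcount++; }
static void put_page(struct fake_page *p) { p->refcount--; }

int main(void)
{
	struct fake_page page = { .refcount = 1 };
	/* Two iovecs covering adjacent halves of the same page: the second
	 * one merges into the bio_vec created for the first. */
	bool merged_into_same_page[2] = { false, true };
	int i;

	for (i = 0; i < 2; i++) {
		get_page(&page);	/* taken per iovec when mapping */
		if (merged_into_same_page[i])
			put_page(&page);	/* no new bio_vec: drop now */
		/* else: the reference is owned by the new bio_vec and is
		 * dropped exactly once at I/O completion. */
	}

	/* Only one extra reference remains, owned by the single bio_vec
	 * covering the page, so completion releases everything. */
	printf("refcount = %d\n", page.refcount);	/* prints 2 */
	return 0;
}
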
Link: https://lkml.org/lkml/2019/4/23/64
Fixes: 576ed913 ("block: use bio_add_page in bio_iov_iter_get_pages")
Reported-by: David Gibson
Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
---
 block/bio.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index c34327aa9216..0d841ba4373a 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -891,6 +891,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
+	bool same_page = false;
 	ssize_t size, left;
 	unsigned len, i;
 	size_t offset;
@@ -911,8 +912,15 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 		struct page *page = pages[i];
 
 		len = min_t(size_t, PAGE_SIZE - offset, left);
-		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
-			return -EINVAL;
+
+		if (__bio_try_merge_page(bio, page, len, offset, &same_page)) {
+			if (same_page)
+				put_page(page);
+		} else {
+			if (WARN_ON_ONCE(bio_full(bio)))
+				return -EINVAL;
+			__bio_add_page(bio, page, len, offset);
+		}
 		offset = 0;
 	}

From patchwork Tue Jun 11 15:10:07 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10987549
From: Christoph Hellwig
To: Jens Axboe, Ming Lei
Cc: David Gibson, "Darrick J. Wong", linux-block@vger.kernel.org,
	linux-xfs@vger.kernel.org
Subject: [PATCH 5/5] block: use __bio_try_merge_page in bio_try_merge_pc_page
Date: Tue, 11 Jun 2019 17:10:07 +0200
Message-Id: <20190611151007.13625-6-hch@lst.de>
In-Reply-To: <20190611151007.13625-1-hch@lst.de>
References: <20190611151007.13625-1-hch@lst.de>

Passthrough bio handling should be the same as normal bio handling,
except that we need to take hardware limitations into account.  Thus use
the common __bio_try_merge_page implementation after checking the
hardware limits.

Signed-off-by: Christoph Hellwig
---
 block/bio.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 0d841ba4373a..7db7186eab1c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -661,20 +661,11 @@ static bool bio_try_merge_pc_page(struct request_queue *q, struct bio *bio,
 	phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset;
 	phys_addr_t addr2 = page_to_phys(page) + off + len - 1;
 
-	if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) {
-		*same_page = true;
-		goto done;
-	}
-
 	if ((addr1 | mask) != (addr2 | mask))
 		return false;
 	if (bv->bv_len + len > queue_max_segment_size(q))
 		return false;
-	if (!page_is_mergeable(bv, page, len, off, false))
-		return false;
-done:
-	bv->bv_len += len;
-	return true;
+	return __bio_try_merge_page(bio, page, len, off, same_page);
 }
 
 /**
@@ -737,8 +728,8 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
 	bvec->bv_len = len;
 	bvec->bv_offset = offset;
 	bio->bi_vcnt++;
- done:
 	bio->bi_iter.bi_size += len;
+ done:
 	bio->bi_phys_segments = bio->bi_vcnt;
 	bio_set_flag(bio, BIO_SEG_VALID);
 	return len;
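
Taken together, the series leaves the passthrough merge path as a thin
wrapper that only adds the hardware checks on top of the common helper.
A compact standalone model of that final shape (struct seg and the limit
macros are simplified stand-ins, not the kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Simplified "hardware" limits, standing in for the request_queue limits. */
#define SEG_BOUNDARY_MASK	0xffffUL	/* queue_segment_boundary() */
#define MAX_SEG_SIZE		0x10000UL	/* queue_max_segment_size() */

struct seg {
	unsigned long	start;	/* "physical" start address */
	unsigned long	len;	/* bytes covered */
};

/* Common merge path shared with regular bios (a stand-in for
 * __bio_try_merge_page): extend the segment if the data is contiguous. */
static bool common_try_merge(struct seg *seg, unsigned long addr,
			     unsigned long len)
{
	if (addr != seg->start + seg->len)
		return false;
	seg->len += len;
	return true;
}

/* Passthrough variant: only the hardware limits are checked here, the
 * rest is delegated to the common helper, as in bio_try_merge_pc_page. */
static bool pc_try_merge(struct seg *seg, unsigned long addr,
			 unsigned long len)
{
	unsigned long first = seg->start;
	unsigned long last = addr + len - 1;

	/* Both ends must fall inside one hardware segment boundary window. */
	if ((first | SEG_BOUNDARY_MASK) != (last | SEG_BOUNDARY_MASK))
		return false;
	if (seg->len + len > MAX_SEG_SIZE)
		return false;
	return common_try_merge(seg, addr, len);
}

int main(void)
{
	struct seg seg = { .start = 0x1000, .len = 0x1000 };

	/* Contiguous and within the limits: merged. */
	printf("within limits:    %d\n", pc_try_merge(&seg, 0x2000, 0x1000));
	/* Would cross the segment boundary window: rejected up front. */
	printf("crosses boundary: %d\n", pc_try_merge(&seg, 0x3000, 0x20000));
	return 0;
}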