From patchwork Wed May 9 07:47:58 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10388293
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 01/33] block: add a lower-level bio_add_page interface
Date: Wed, 9 May 2018 09:47:58 +0200
Message-Id: <20180509074830.16196-2-hch@lst.de>
In-Reply-To: <20180509074830.16196-1-hch@lst.de>
References: <20180509074830.16196-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

For the upcoming removal of buffer heads in XFS we need to keep track of
the number of outstanding writeback requests per page.  For this we need
to know if bio_add_page merged a region with the previous bvec or not.
Instead of adding additional arguments, refactor bio_add_page so that it
is implemented using three lower-level helpers, which callers like XFS
can use directly if they care about the merge decision.
Signed-off-by: Christoph Hellwig
---
 block/bio.c         | 87 ++++++++++++++++++++++++++++++---------------
 include/linux/bio.h |  9 +++++
 2 files changed, 67 insertions(+), 29 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 53e0f0a1ed94..6ceba6adbf42 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -773,7 +773,7 @@ int bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page
 		return 0;
 	}
 
-	if (bio->bi_vcnt >= bio->bi_max_vecs)
+	if (bio_full(bio))
 		return 0;
 
 	/*
@@ -820,6 +820,59 @@ int bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page
 }
 EXPORT_SYMBOL(bio_add_pc_page);
 
+/**
+ * __bio_try_merge_page - try adding data to an existing bvec
+ * @bio: destination bio
+ * @page: page to add
+ * @len: length of the range to add
+ * @off: offset into @page
+ *
+ * Try adding the data described at @page + @off to the last bvec of @bio.
+ * Return %true on success or %false on failure.  A successful merge can
+ * happen frequently for file systems with a block size smaller than the
+ * page size.
+ */
+bool __bio_try_merge_page(struct bio *bio, struct page *page,
+		unsigned int len, unsigned int off)
+{
+	if (bio->bi_vcnt > 0) {
+		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+
+		if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) {
+			bv->bv_len += len;
+			bio->bi_iter.bi_size += len;
+			return true;
+		}
+	}
+	return false;
+}
+EXPORT_SYMBOL_GPL(__bio_try_merge_page);
+
+/**
+ * __bio_add_page - add page to a bio in a new segment
+ * @bio: destination bio
+ * @page: page to add
+ * @len: length of the range to add
+ * @off: offset into @page
+ *
+ * Add the data at @page + @off to @bio as a new bvec.  The caller must
+ * ensure that @bio has space for another bvec.
+ */
+void __bio_add_page(struct bio *bio, struct page *page,
+		unsigned int len, unsigned int off)
+{
+	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt];
+
+	WARN_ON_ONCE(bio_full(bio));
+
+	bv->bv_page = page;
+	bv->bv_offset = off;
+	bv->bv_len = len;
+
+	bio->bi_iter.bi_size += len;
+	bio->bi_vcnt++;
+}
+EXPORT_SYMBOL_GPL(__bio_add_page);
+
 /**
  * bio_add_page - attempt to add page to bio
  * @bio: destination bio
@@ -833,40 +886,16 @@ EXPORT_SYMBOL(bio_add_pc_page);
 int bio_add_page(struct bio *bio, struct page *page,
 		 unsigned int len, unsigned int offset)
 {
-	struct bio_vec *bv;
-
 	/*
 	 * cloned bio must not modify vec list
 	 */
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
-
-	/*
-	 * For filesystems with a blocksize smaller than the pagesize
-	 * we will often be called with the same page as last time and
-	 * a consecutive offset.  Optimize this special case.
-	 */
-	if (bio->bi_vcnt > 0) {
-		bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
-
-		if (page == bv->bv_page &&
-		    offset == bv->bv_offset + bv->bv_len) {
-			bv->bv_len += len;
-			goto done;
-		}
+	if (!__bio_try_merge_page(bio, page, len, offset)) {
+		if (bio_full(bio))
+			return 0;
+		__bio_add_page(bio, page, len, offset);
 	}
-
-	if (bio->bi_vcnt >= bio->bi_max_vecs)
-		return 0;
-
-	bv = &bio->bi_io_vec[bio->bi_vcnt];
-	bv->bv_page = page;
-	bv->bv_len = len;
-	bv->bv_offset = offset;
-
-	bio->bi_vcnt++;
-done:
-	bio->bi_iter.bi_size += len;
 	return len;
 }
 EXPORT_SYMBOL(bio_add_page);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index ce547a25e8ae..3e73c8bc25ea 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -123,6 +123,11 @@ static inline void *bio_data(struct bio *bio)
 	return NULL;
 }
 
+static inline bool bio_full(struct bio *bio)
+{
+	return bio->bi_vcnt >= bio->bi_max_vecs;
+}
+
 /*
  * will die
  */
@@ -470,6 +475,10 @@ void bio_chain(struct bio *, struct bio *);
 extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
 extern int bio_add_pc_page(struct request_queue *, struct bio *,
			   struct page *, unsigned int, unsigned int);
+bool __bio_try_merge_page(struct bio *bio, struct page *page,
+		unsigned int len, unsigned int off);
+void __bio_add_page(struct bio *bio, struct page *page,
+		unsigned int len, unsigned int off);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
 struct rq_map_data;
 extern struct bio *bio_map_user_iov(struct request_queue *,