From patchwork Mon Jun 26 12:10:34 2017
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9809397
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton,
	Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Ming Lei
Subject: [PATCH v2 51/51] block: bio: pass segments to bio if bio_add_page() is bypassed
Date: Mon, 26 Jun 2017 20:10:34 +0800
Message-Id: <20170626121034.3051-52-ming.lei@redhat.com>
In-Reply-To: <20170626121034.3051-1-ming.lei@redhat.com>
References: <20170626121034.3051-1-ming.lei@redhat.com>

In some situations, such as block direct I/O, bio_add_page() can't be
used to merge pages into a multipage bvec, so introduce a helper that
converts a page array into a segment array. These cases can then
benefit from multipage bvecs too.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 436305cde045..e2bcbb842982 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -876,6 +876,41 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+static unsigned convert_to_segs(struct bio *bio, struct page **pages,
+				unsigned char *page_cnt,
+				unsigned nr_pages)
+{
+
+	unsigned idx;
+	unsigned nr_seg = 0;
+	struct request_queue *q = NULL;
+
+	if (bio->bi_bdev)
+		q = bdev_get_queue(bio->bi_bdev);
+
+	if (!q || !blk_queue_cluster(q)) {
+		memset(page_cnt, 0, nr_pages);
+		return nr_pages;
+	}
+
+	page_cnt[nr_seg] = 0;
+	for (idx = 1; idx < nr_pages; idx++) {
+		struct page *pg_s = pages[nr_seg];
+		struct page *pg = pages[idx];
+
+		if (page_to_pfn(pg_s) + page_cnt[nr_seg] + 1 ==
+		    page_to_pfn(pg)) {
+			page_cnt[nr_seg]++;
+		} else {
+			page_cnt[++nr_seg] = 0;
+			if (nr_seg < idx)
+				pages[nr_seg] = pg;
+		}
+	}
+
+	return nr_seg + 1;
+}
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -895,6 +930,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	struct page **pages = (struct page **)bv;
 	size_t offset, diff;
 	ssize_t size;
+	unsigned short nr_segs;
+	unsigned char page_cnt[nr_pages];	/* at most 256 pages */
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
@@ -910,13 +947,18 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	 * need to be reflected here as well.
	 */
 	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-
 	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
+
+	/* convert into segments */
+	nr_segs = convert_to_segs(bio, pages, page_cnt, nr_pages);
+	bio->bi_vcnt += nr_segs;
+
+	while (nr_segs--) {
+		unsigned cnt = (unsigned)page_cnt[nr_segs] + 1;
+
+		bv[nr_segs].bv_page = pages[nr_segs];
+		bv[nr_segs].bv_len = PAGE_SIZE * cnt;
+		bv[nr_segs].bv_offset = 0;
 	}
 
 	bv[0].bv_offset += offset;