From patchwork Mon Dec 18 12:22:45 2017
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10119293
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: Huang Ying,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    Theodore Ts'o, "Darrick J. Wong", Coly Li, Filipe Manana, Ming Lei
Subject: [PATCH V4 43/45] block: bio: pass segments to bio if bio_add_page() is bypassed
Date: Mon, 18 Dec 2017 20:22:45 +0800
Message-Id: <20171218122247.3488-44-ming.lei@redhat.com>
In-Reply-To: <20171218122247.3488-1-ming.lei@redhat.com>
References: <20171218122247.3488-1-ming.lei@redhat.com>
List-ID: linux-fsdevel@vger.kernel.org

In some situations, such as block direct I/O, bio_add_page() cannot be used
to merge pages into a multipage bvec, so implement a new helper that converts
a page array into a segment array. These cases can then benefit from
multipage bvecs too.
Signed-off-by: Ming Lei
---
 block/bio.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 34af328681a8..e808d8352067 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -882,6 +882,41 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+static unsigned convert_to_segs(struct bio *bio, struct page **pages,
+				unsigned char *page_cnt,
+				unsigned nr_pages)
+{
+
+	unsigned idx;
+	unsigned nr_seg = 0;
+	struct request_queue *q = NULL;
+
+	if (bio->bi_disk)
+		q = bio->bi_disk->queue;
+
+	if (!q || !blk_queue_cluster(q)) {
+		memset(page_cnt, 0, nr_pages);
+		return nr_pages;
+	}
+
+	page_cnt[nr_seg] = 0;
+	for (idx = 1; idx < nr_pages; idx++) {
+		struct page *pg_s = pages[nr_seg];
+		struct page *pg = pages[idx];
+
+		if (page_to_pfn(pg_s) + page_cnt[nr_seg] + 1 ==
+				page_to_pfn(pg)) {
+			page_cnt[nr_seg]++;
+		} else {
+			page_cnt[++nr_seg] = 0;
+			if (nr_seg < idx)
+				pages[nr_seg] = pg;
+		}
+	}
+
+	return nr_seg + 1;
+}
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -897,6 +932,8 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	struct page **pages = (struct page **)bv;
 	size_t offset, diff;
 	ssize_t size;
+	unsigned short nr_segs;
+	unsigned char page_cnt[nr_pages];	/* at most 256 pages */
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
@@ -912,13 +949,18 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	 * need to be reflected here as well.
 	 */
 	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
+
+	/* convert into segments */
+	nr_segs = convert_to_segs(bio, pages, page_cnt, nr_pages);
+	bio->bi_vcnt += nr_segs;
+
+	while (nr_segs--) {
+		unsigned cnt = (unsigned)page_cnt[nr_segs] + 1;
+
+		bv[nr_segs].bv_page = pages[nr_segs];
+		bv[nr_segs].bv_len = PAGE_SIZE * cnt;
+		bv[nr_segs].bv_offset = 0;
 	}
 
 	bv[0].bv_offset += offset;