From patchwork Sat Oct 29 08:08:41 2016
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9403319
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Christoph Hellwig, "Kirill A. Shutemov", Ming Lei, Jens Axboe
Shutemov" , Ming Lei , Jens Axboe Subject: [PATCH 42/60] block: use bio_for_each_segment_mp() to compute segments count Date: Sat, 29 Oct 2016 16:08:41 +0800 Message-Id: <1477728600-12938-43-git-send-email-tom.leiming@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1477728600-12938-1-git-send-email-tom.leiming@gmail.com> References: <1477728600-12938-1-git-send-email-tom.leiming@gmail.com> Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Firstly it is more efficient to use bio_for_each_segment_mp() in both blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how many segments there are in the bio. Secondaly once bio_for_each_segment_mp() is used, the bvec may need to be splitted because its length can be very long and more than max segment size, so we have to split one bvec into several segments. Thirdly during splitting mp bvec into segments, max segment number may be reached, then the bio need to be splitted. Signed-off-by: Ming Lei --- block/blk-merge.c | 89 ++++++++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 75 insertions(+), 14 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index a6457e70dafc..9142f1fc914b 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -86,6 +86,61 @@ static inline unsigned get_max_io_size(struct request_queue *q, return sectors; } +static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv, + unsigned *nsegs, unsigned *last_seg_size, + unsigned *front_seg_size, unsigned *sectors) +{ + bool need_split = false; + unsigned old_nsegs = *nsegs; + unsigned new_nsegs, seg_size; + + WARN_ON(old_nsegs == queue_max_segments(q)); + WARN_ON(bv->bv_len == 0); + + /* + * Multipage bvec is too big to hold in one segment, + * so the current bvec has to be splitted as multiple + * segments. + * + * @seg_size is segment size of last segment in this bvec + * @new_nsegs is segment count of this bvec + */ + seg_size = bv->bv_len % queue_max_segment_size(q); + new_nsegs = bv->bv_len / queue_max_segment_size(q); + if (!seg_size) + seg_size = queue_max_segment_size(q); + else + new_nsegs += 1; + + /* need splitting if max segs is reached */ + if (old_nsegs + new_nsegs > queue_max_segments(q)) { + new_nsegs = queue_max_segments(q) - old_nsegs; + + /* split the bvec */ + if (bv->bv_len > queue_max_segment_size(q)) + seg_size = queue_max_segment_size(q); + need_split = true; + } + + /* update front segment size */ + if (!old_nsegs) { + unsigned first_seg_size = seg_size; + if (new_nsegs > 1) + first_seg_size = queue_max_segment_size(q); + if (*front_seg_size < first_seg_size) + *front_seg_size = first_seg_size; + } + + *last_seg_size = seg_size; + *nsegs += new_nsegs; + + if (sectors) + *sectors += ((new_nsegs - 1) * + queue_max_segment_size(q) + seg_size) >> 9; + + return need_split; +} + static struct bio *blk_bio_segment_split(struct request_queue *q, struct bio *bio, struct bio_set *bs, @@ -101,7 +156,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, unsigned bvecs = 0; unsigned advance; - bio_for_each_segment(bv, bio, iter) { + bio_for_each_segment_mp(bv, bio, iter) { /* * With arbitrary bio size, the incoming bio may be very * big. 
 		 * big. We have to split the bio into small bios so that
@@ -138,8 +193,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size,
+						&sectors);
 			}
 			if (sectors)
 				goto split;
@@ -180,11 +239,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == 1 && seg_size > front_seg_size)
 			front_seg_size = seg_size;
 
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+					&front_seg_size, &sectors))
+			goto split;
 
 		/* restore the bvec for iterator */
 		bv.bv_len += advance;
@@ -253,6 +313,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
@@ -274,7 +335,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_segment_mp(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -296,20 +357,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;
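
For readers who want to check the counting arithmetic that bvec_split_segs()
performs above, here is a minimal stand-alone sketch of the same logic in plain
user-space C. It is an illustration only, not kernel code: the names
demo_split_segs, max_seg_size and max_segs are made up for the example, and the
front-segment and sector bookkeeping of the real function is omitted. The idea
is that a bvec of len bytes contributes len / max_seg_size full segments plus
one partial segment for any remainder, and the bio itself has to be split once
the queue's segment budget runs out.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustration only: mirror the counting logic of bvec_split_segs().
 * Given a bvec of @len bytes, a per-segment size limit @max_seg_size and a
 * total segment budget @max_segs, account how many segments this bvec adds
 * to *nsegs, report the size of its last segment, and return true when the
 * budget is exhausted and the bio would have to be split.
 */
static bool demo_split_segs(unsigned len, unsigned max_seg_size,
			    unsigned max_segs, unsigned *nsegs,
			    unsigned *last_seg_size)
{
	unsigned new_nsegs = len / max_seg_size;
	unsigned seg_size = len % max_seg_size;
	bool need_split = false;

	if (!seg_size)
		seg_size = max_seg_size;	/* len is an exact multiple of the limit */
	else
		new_nsegs += 1;			/* trailing partial segment */

	if (*nsegs + new_nsegs > max_segs) {
		/* only part of the bvec fits; the bio has to be split */
		new_nsegs = max_segs - *nsegs;
		if (len > max_seg_size)
			seg_size = max_seg_size;
		need_split = true;
	}

	*last_seg_size = seg_size;
	*nsegs += new_nsegs;
	return need_split;
}

int main(void)
{
	unsigned nsegs = 0, last = 0;
	/* a 160 KiB bvec with a 64 KiB segment limit -> 3 segments, last one 32 KiB */
	bool split = demo_split_segs(160 << 10, 64 << 10, 128, &nsegs, &last);

	printf("nsegs=%u last_seg=%u split=%d\n", nsegs, last, split);
	return 0;
}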