From patchwork Wed Nov 21 03:23:12 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691635
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Theodore Ts'o, Omar Sandoval, Sagi Grimberg,
 Dave Chinner, Kent Overstreet, Mike Snitzer, dm-devel@redhat.com,
 Alexander Viro, linux-fsdevel@vger.kernel.org, Shaohua Li,
 linux-raid@vger.kernel.org, David Sterba, linux-btrfs@vger.kernel.org,
 "Darrick J. Wong", linux-xfs@vger.kernel.org, Gao Xiang,
 Christoph Hellwig, linux-ext4@vger.kernel.org, Coly Li,
 linux-bcache@vger.kernel.org, Boaz Harrosh, Bob Peterson,
 cluster-devel@redhat.com, Ming Lei
Subject: [PATCH V11 04/19] block: use bio_for_each_bvec() to compute multi-page bvec count
Date: Wed, 21 Nov 2018 11:23:12 +0800
Message-Id: <20181121032327.8434-5-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

First, it is more efficient to use bio_for_each_bvec() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many multi-page bvecs there are in the bio.

Secondly, once bio_for_each_bvec() is used, a bvec may need to be
split because its length can be much longer than the max segment size,
so we have to split the big bvec into several segments.

Thirdly, when splitting a multi-page bvec into segments, the max
segment limit may be reached, so the bio split needs to be considered
in this situation too.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c | 87 +++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 68 insertions(+), 19 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index f52400ce2187..ec0b93fa1ff8 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -161,6 +161,54 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+		unsigned *nsegs, unsigned *last_seg_size,
+		unsigned *front_seg_size, unsigned *sectors)
+{
+	unsigned len = bv->bv_len;
+	unsigned total_len = 0;
+	unsigned new_nsegs = 0, seg_size = 0;
+
+	/*
+	 * A multi-page bvec may be too big to hold in one segment,
+	 * so the current bvec has to be split into multiple
+	 * segments.
+	 */
+	while (len && new_nsegs + *nsegs < queue_max_segments(q)) {
+		seg_size = min(queue_max_segment_size(q), len);
+
+		new_nsegs++;
+		total_len += seg_size;
+		len -= seg_size;
+
+		if ((bv->bv_offset + total_len) & queue_virt_boundary(q))
+			break;
+	}
+
+	/* update front segment size */
+	if (!*nsegs) {
+		unsigned first_seg_size = seg_size;
+
+		if (new_nsegs > 1)
+			first_seg_size = queue_max_segment_size(q);
+		if (*front_seg_size < first_seg_size)
+			*front_seg_size = first_seg_size;
+	}
+
+	/* update other variables */
+	*last_seg_size = seg_size;
+	*nsegs += new_nsegs;
+	if (sectors)
+		*sectors += total_len >> 9;
+
+	/* split in the middle of the bvec if len != 0 */
+	return !!len;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -174,7 +222,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 
-	bio_for_each_segment(bv, bio, iter) {
+	bio_for_each_bvec(bv, bio, iter) {
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
@@ -189,8 +237,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			 */
 			if (nsegs < queue_max_segments(q) &&
 			    sectors < max_sectors) {
-				nsegs++;
-				sectors = max_sectors;
+				/* split in the middle of bvec */
+				bv.bv_len = (max_sectors - sectors) << 9;
+				bvec_split_segs(q, &bv, &nsegs,
+						&seg_size,
+						&front_seg_size,
+						&sectors);
 			}
 			goto split;
 		}
@@ -212,14 +264,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (nsegs == queue_max_segments(q))
 			goto split;
 
-		if (nsegs == 1 && seg_size > front_seg_size)
-			front_seg_size = seg_size;
-
-		nsegs++;
 		bvprv = bv;
 		bvprvp = &bvprv;
-		seg_size = bv.bv_len;
-		sectors += bv.bv_len >> 9;
+
+		if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+					&front_seg_size, &sectors))
+			goto split;
 	}
 
@@ -233,8 +283,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		bio = new;
 	}
 
-	if (nsegs == 1 && seg_size > front_seg_size)
-		front_seg_size = seg_size;
 	bio->bi_seg_front_size = front_seg_size;
 	if (seg_size > bio->bi_seg_back_size)
 		bio->bi_seg_back_size = seg_size;
@@ -297,6 +345,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
+	unsigned front_seg_size = bio->bi_seg_front_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
 
@@ -317,7 +366,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	seg_size = 0;
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_segment(bv, bio, iter) {
+		bio_for_each_bvec(bv, bio, iter) {
 			/*
 			 * If SG merging is disabled, each bio vector is
 			 * a segment
@@ -337,20 +386,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 				continue;
 			}
 new_segment:
-			if (nr_phys_segs == 1 && seg_size >
-			    fbio->bi_seg_front_size)
-				fbio->bi_seg_front_size = seg_size;
+			if (nr_phys_segs == 1 && seg_size > front_seg_size)
+				front_seg_size = seg_size;
 
-			nr_phys_segs++;
 			bvprv = bv;
 			prev = 1;
-			seg_size = bv.bv_len;
+
+			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+					&front_seg_size, NULL);
 		}
 		bbio = bio;
 	}
 
-	if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-		fbio->bi_seg_front_size = seg_size;
+	if (nr_phys_segs == 1 && seg_size > front_seg_size)
+		front_seg_size = seg_size;
+	fbio->bi_seg_front_size = front_seg_size;
 
 	if (seg_size > bbio->bi_seg_back_size)
 		bbio->bi_seg_back_size = seg_size;