From patchwork Tue Dec 27 15:56:14 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9489445
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Ming Lei, Jens Axboe
Subject: [PATCH v1 25/54] block: blk-merge: try to make front segments in full size
Date: Tue, 27 Dec 2016 23:56:14 +0800
Message-Id: <1482854250-13481-26-git-send-email-tom.leiming@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1482854250-13481-1-git-send-email-tom.leiming@gmail.com>
References: <1482854250-13481-1-git-send-email-tom.leiming@gmail.com>

When merging a bvec into a segment, if the bvec is too big to merge,
the current policy is to move the whole bvec into another new segment.

This patch changes the policy to try to maximize the size of front
segments: in the situation above, the part of the bvec that fits is
merged into the current segment, and the remainder is put into the
next segment.

This prepares for multipage bvec support, where this case can become
quite common, and we should try to make front segments full size.

Signed-off-by: Ming Lei
---
 block/blk-merge.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 49 insertions(+), 5 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index e3abc835e4b7..a801f62a104b 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -95,6 +95,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	unsigned bvecs = 0;
+	unsigned advance = 0;
 
 	bio_for_each_segment(bv, bio, iter) {
 		/*
@@ -141,12 +142,32 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		}
 
 		if (bvprvp && blk_queue_cluster(q)) {
-			if (seg_size + bv.bv_len > queue_max_segment_size(q))
-				goto new_segment;
 			if (!BIOVEC_PHYS_MERGEABLE(bvprvp, &bv))
 				goto new_segment;
 			if (!BIOVEC_SEG_BOUNDARY(q, bvprvp, &bv))
 				goto new_segment;
+			if (seg_size + bv.bv_len > queue_max_segment_size(q)) {
+				/*
+				 * One assumption is that the initial value
+				 * of @seg_size (equal to bv.bv_len) won't be
+				 * bigger than the max segment size, but this
+				 * becomes false once multipage bvecs come.
+				 */
+				advance = queue_max_segment_size(q) - seg_size;
+
+				if (advance > 0) {
+					seg_size += advance;
+					sectors += advance >> 9;
+					bv.bv_len -= advance;
+					bv.bv_offset += advance;
+				}
+
+				/*
+				 * Still need to put the remainder of the
+				 * current bvec into a new segment.
+				 */
+				goto new_segment;
+			}
 
 			seg_size += bv.bv_len;
 			bvprv = bv;
@@ -168,6 +189,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		bvprvp = &bvprv;
 		seg_size = bv.bv_len;
 		sectors += bv.bv_len >> 9;
+		/* restore the bvec for the iterator */
+		if (advance) {
+			bv.bv_len += advance;
+			bv.bv_offset -= advance;
+			advance = 0;
+		}
 	}
 
 	do_split = false;
@@ -370,16 +397,29 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 {
 
 	int nbytes = bvec->bv_len;
+	unsigned advance = 0;
 
 	if (*sg && *cluster) {
-		if ((*sg)->length + nbytes > queue_max_segment_size(q))
-			goto new_segment;
-
 		if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
 			goto new_segment;
 		if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
 			goto new_segment;
 
+		/*
+		 * Try best to merge part of the bvec into the previous
+		 * segment, following the same policy as
+		 * blk_bio_segment_split().
+		 */
+		if ((*sg)->length + nbytes > queue_max_segment_size(q)) {
+			advance = queue_max_segment_size(q) - (*sg)->length;
+			if (advance) {
+				(*sg)->length += advance;
+				bvec->bv_offset += advance;
+				bvec->bv_len -= advance;
+			}
+			goto new_segment;
+		}
+
 		(*sg)->length += nbytes;
 	} else {
 new_segment:
@@ -402,6 +442,10 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
 
 		sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
 		(*nsegs)++;
+
+		/* for making the iterator happy */
+		bvec->bv_offset -= advance;
+		bvec->bv_len += advance;
 	}
 	*bvprv = *bvec;
 }
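---

For illustration only: the effect of the new policy can be modeled
outside the kernel with a small standalone C sketch. The names below
(seg_state, add_chunk) are hypothetical and this is not the real
bio/scatterlist machinery; it only mirrors the "advance" arithmetic
used in the patch.

#include <stdio.h>

/*
 * Userspace model of the "advance" logic (hypothetical names): when
 * adding a chunk of @len bytes would overflow @max_seg, merge only
 * the part that still fits into the current segment and start new
 * segment(s) with the remainder.
 */
struct seg_state {
	unsigned cur;	/* bytes in the current (open) segment */
	unsigned nsegs;	/* segments produced so far */
};

static void add_chunk(struct seg_state *s, unsigned len, unsigned max_seg)
{
	if (s->cur + len > max_seg) {
		unsigned advance = max_seg - s->cur;

		/* fill the front segment completely ... */
		s->cur = max_seg;
		len -= advance;

		/* ... and put the remainder into new segment(s) */
		while (len) {
			unsigned chunk = len > max_seg ? max_seg : len;

			s->nsegs++;
			s->cur = chunk;
			len -= chunk;
		}
	} else {
		s->cur += len;
	}
}

int main(void)
{
	/* start with one open, empty segment */
	struct seg_state s = { .cur = 0, .nsegs = 1 };

	/* 64KB max segment size; two 40KB multipage bvecs */
	add_chunk(&s, 40 << 10, 64 << 10);
	add_chunk(&s, 40 << 10, 64 << 10);

	/* prints "segments: 2, last segment: 16384 bytes" */
	printf("segments: %u, last segment: %u bytes\n", s.nsegs, s.cur);
	return 0;
}

With the old policy the same input would produce two 40KB segments;
the new policy fills the front segment to the 64KB limit first and
leaves a 16KB remainder, which is the case that becomes common once
multipage bvecs land.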