From patchwork Sat Jan 23 00:05:33 2016
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 8095081
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Linus Torvalds, Stefan Haberland,
	Keith Busch, Ming Lei
Subject: [PATCH v1] block: fix bio splitting on max sectors
Date: Sat, 23 Jan 2016 08:05:33 +0800
Message-Id: <1453507533-25593-1-git-send-email-tom.leiming@gmail.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-block@vger.kernel.org

After commit e36f62042880 ("block: split bios to max possible length"), a bio
can be split in the middle of a vector entry, which makes it easy to split out
a bio whose size is not aligned to the logical block size, especially when the
logical block size is bigger than 512 bytes.

This patch fixes the issue by aligning the max io size to the logical block
size.

Fixes: e36f62042880 ("block: split bios to max possible length")
Reported-by: Stefan Haberland
Cc: Keith Busch
Suggested-by: Linus Torvalds
Signed-off-by: Ming Lei
---
V1:
	- avoid double shift as suggested by Linus
	- compute 'max_sectors' once as suggested by Keith

 block/blk-merge.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 1699df5..888a7fe 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -70,6 +70,18 @@ static struct bio *blk_bio_write_same_split(struct request_queue *q,
 	return bio_split(bio, q->limits.max_write_same_sectors, GFP_NOIO, bs);
 }
 
+static inline unsigned get_max_io_size(struct request_queue *q,
+				       struct bio *bio)
+{
+	unsigned sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector);
+	unsigned mask = queue_logical_block_size(q) - 1;
+
+	/* aligned to logical block size */
+	sectors &= ~(mask >> 9);
+
+	return sectors;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -81,6 +93,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	unsigned front_seg_size = bio->bi_seg_front_size;
 	bool do_split = true;
 	struct bio *new = NULL;
+	const unsigned max_sectors = get_max_io_size(q, bio);
 
 	bio_for_each_segment(bv, bio, iter) {
 		/*
@@ -90,20 +103,19 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset))
 			goto split;
 
-		if (sectors + (bv.bv_len >> 9) >
-				blk_max_size_offset(q, bio->bi_iter.bi_sector)) {
+		if (sectors + (bv.bv_len >> 9) > max_sectors) {
 			/*
 			 * Consider this a new segment if we're splitting in
 			 * the middle of this vector.
 			 */
 			if (nsegs < queue_max_segments(q) &&
-			    sectors < blk_max_size_offset(q,
-						bio->bi_iter.bi_sector)) {
+			    sectors < max_sectors) {
 				nsegs++;
-				sectors = blk_max_size_offset(q,
-						bio->bi_iter.bi_sector);
+				sectors = max_sectors;
 			}
-			goto split;
+			if (sectors)
+				goto split;
+			/* Make this single bvec as the 1st segment */
 		}
 
 		if (bvprvp && blk_queue_cluster(q)) {
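
For illustration only (not part of the patch): a minimal userspace sketch of
the rounding done in get_max_io_size(), assuming a 4096-byte logical block
size; the sector limit of 2047 below is a made-up value, not a real queue
limit.

#include <stdio.h>

int main(void)
{
	unsigned lbs = 4096;		/* assumed logical block size in bytes */
	unsigned mask = lbs - 1;	/* 4095 */
	unsigned sectors = 2047;	/* hypothetical limit in 512-byte sectors */

	/*
	 * Same rounding as get_max_io_size(): clear the low bits so the
	 * limit is a whole number of logical blocks (a multiple of 8
	 * sectors for a 4k logical block size).
	 */
	sectors &= ~(mask >> 9);

	printf("%u\n", sectors);	/* prints 2040, i.e. 255 x 4k blocks */
	return 0;
}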