From patchwork Mon Nov 4 23:36:21 2013
X-Patchwork-Submitter: Kent Overstreet
X-Patchwork-Id: 3138351
From: Kent Overstreet
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-btrfs@vger.kernel.org
Cc: axboe@kernel.dk, hch@infradead.org, Kent Overstreet, Jiri Kosina,
	Asai Thambi S P
Subject: [PATCH 3/9] block: Move bouncing to generic_make_request()
Date: Mon, 4 Nov 2013 15:36:21 -0800
Message-Id: <1383608187-27368-4-git-send-email-kmo@daterainc.com>
X-Mailer: git-send-email 1.8.4.rc3
In-Reply-To: <1383608187-27368-1-git-send-email-kmo@daterainc.com>
References: <1383608187-27368-1-git-send-email-kmo@daterainc.com>
X-Mailing-List: linux-btrfs@vger.kernel.org

The next patch is going to make generic_make_request() handle
arbitrarily sized bios by splitting them if necessary. It makes more
sense to call blk_queue_bounce() first: partly so that bouncing
operates on the larger, unsplit bios, but mainly so that the bio
splitting code and __blk_recalc_rq_segments() no longer have to take
bouncing into account - by the time they run, it has already been
done.
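To make the new ordering concrete, the per-bio dispatch path after this
patch looks roughly like the sketch below. This is a simplified
illustration, not verbatim kernel code (queue checks, error handling and
the on-stack bio_list recursion protection are omitted); the actual
change is in the diff that follows.

	/*
	 * Simplified sketch of generic_make_request()'s dispatch loop
	 * after this patch:
	 */
	do {
		struct request_queue *q = bdev_get_queue(bio->bi_bdev);

		/* Bounce highmem pages once, here in the core... */
		blk_queue_bounce(q, &bio);

		/* ...so every make_request_fn already sees a bounced bio. */
		q->make_request_fn(q, bio);

		bio = bio_list_pop(current->bio_list);
	} while (bio);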
Signed-off-by: Kent Overstreet
Cc: Jens Axboe
Cc: Jiri Kosina
Cc: Asai Thambi S P
---
 block/blk-core.c                  | 14 +++++++-------
 block/blk-merge.c                 | 13 ++++---------
 drivers/block/mtip32xx/mtip32xx.c |  2 --
 drivers/block/pktcdvd.c           |  2 --
 4 files changed, 11 insertions(+), 20 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d9cab97..3c7467e 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1466,13 +1466,6 @@ void blk_queue_bio(struct request_queue *q, struct bio *bio)
 	struct request *req;
 	unsigned int request_count = 0;
 
-	/*
-	 * low level driver can indicate that it wants pages above a
-	 * certain limit bounced to low memory (ie for highmem, or even
-	 * ISA dma in theory)
-	 */
-	blk_queue_bounce(q, &bio);
-
 	if (bio_integrity_enabled(bio) && bio_integrity_prep(bio)) {
 		bio_endio(bio, -EIO);
 		return;
@@ -1822,6 +1815,13 @@ void generic_make_request(struct bio *bio)
 	do {
 		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 
+		/*
+		 * low level driver can indicate that it wants pages above a
+		 * certain limit bounced to low memory (ie for highmem, or even
+		 * ISA dma in theory)
+		 */
+		blk_queue_bounce(q, &bio);
+
 		q->make_request_fn(q, bio);
 
 		bio = bio_list_pop(current->bio_list);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 953b8df..9680ec73 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -13,7 +13,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 					     struct bio *bio)
 {
 	struct bio_vec bv, bvprv = { NULL };
-	int cluster, high, highprv = 1;
+	int cluster, prev = 0;
 	unsigned int seg_size, nr_phys_segs;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
@@ -27,13 +27,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
 		bio_for_each_segment(bv, bio, iter) {
-			/*
-			 * the trick here is making sure that a high page is
-			 * never considered part of another segment, since that
-			 * might change with the bounce page.
-			 */
-			high = page_to_pfn(bv.bv_page) > queue_bounce_pfn(q);
-			if (!high && !highprv && cluster) {
+			if (prev && cluster) {
 				if (seg_size + bv.bv_len
 				    > queue_max_segment_size(q))
 					goto new_segment;
@@ -44,6 +38,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 
 				seg_size += bv.bv_len;
 				bvprv = bv;
+				prev = 1;
 				continue;
 			}
 new_segment:
@@ -53,8 +48,8 @@ new_segment:
 
 			nr_phys_segs++;
 			bvprv = bv;
+			prev = 1;
 			seg_size = bv.bv_len;
-			highprv = high;
 		}
 		bbio = bio;
 	}
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 52b2f2a..d4c669b 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -4016,8 +4016,6 @@ static void mtip_make_request(struct request_queue *queue, struct bio *bio)
 
 	sg = mtip_hw_get_scatterlist(dd, &tag, unaligned);
 	if (likely(sg != NULL)) {
-		blk_queue_bounce(queue, &bio);
-
 		if (unlikely((bio)->bi_vcnt > MTIP_MAX_SG)) {
 			dev_warn(&dd->pdev->dev,
 				"Maximum number of SGL entries exceeded\n");
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 1bf1f22..7991cc8 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2486,8 +2486,6 @@ static void pkt_make_request(struct request_queue *q, struct bio *bio)
 		goto end_io;
 	}
 
-	blk_queue_bounce(q, &bio);
-
 	do {
 		sector_t zone = get_zone(bio->bi_iter.bi_sector, pd);
 		sector_t last_zone = get_zone(bio_end_sector(bio) - 1, pd);
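On the driver side, the effect is that bio-based drivers can simply drop
their own bounce calls, as mtip32xx and pktcdvd do above. A minimal,
hypothetical make_request_fn after this change might look like the
following sketch ("mydrv", struct mydrv_device and mydrv_queue_bio() are
made-up illustrative names, not real kernel code):

	/*
	 * Hypothetical bio-based driver: no blk_queue_bounce() call is
	 * needed, because generic_make_request() has already bounced the
	 * bio by the time this function runs.
	 */
	static void mydrv_make_request(struct request_queue *q, struct bio *bio)
	{
		struct mydrv_device *dev = q->queuedata;	/* illustrative type */

		if (!mydrv_queue_bio(dev, bio))			/* illustrative helper */
			bio_endio(bio, -EIO);
	}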