From patchwork Thu Jul 16 09:53:05 2015
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 6805911
From: Jan Kara
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, LKML, Andrew Morton, Andreas Dilger,
 Jens Axboe, Ted Tso, Jan Kara
Subject: [PATCH 3/4] block: Remove forced page bouncing under IO
Date: Thu, 16 Jul 2015 11:53:05 +0200
Message-Id: <1437040386-31148-4-git-send-email-jack@suse.com>
In-Reply-To: <1437040386-31148-1-git-send-email-jack@suse.com>
References: <1437040386-31148-1-git-send-email-jack@suse.com>

From: Jan Kara

The JBD layer wrote back data buffers without setting the PageWriteback
bit, so the standard mechanism for guaranteeing stable pages under IO
did not work for it and the block layer had to bounce such writes
instead. Since JBD is gone now and there is no other user of this
functionality, just remove the forced bouncing.
Acked-by: Jens Axboe
Signed-off-by: Jan Kara
---
 block/bounce.c            | 31 ++++---------------------------
 include/linux/blk_types.h |  5 ++---
 2 files changed, 6 insertions(+), 30 deletions(-)

diff --git a/block/bounce.c b/block/bounce.c
index b17311227c12..31cad13a0c9d 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -176,26 +176,8 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
 	__bounce_end_io_read(bio, isa_page_pool, err);
 }
 
-#ifdef CONFIG_NEED_BOUNCE_POOL
-static int must_snapshot_stable_pages(struct request_queue *q, struct bio *bio)
-{
-	if (bio_data_dir(bio) != WRITE)
-		return 0;
-
-	if (!bdi_cap_stable_pages_required(&q->backing_dev_info))
-		return 0;
-
-	return test_bit(BIO_SNAP_STABLE, &bio->bi_flags);
-}
-#else
-static int must_snapshot_stable_pages(struct request_queue *q, struct bio *bio)
-{
-	return 0;
-}
-#endif /* CONFIG_NEED_BOUNCE_POOL */
-
 static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
-			       mempool_t *pool, int force)
+			       mempool_t *pool)
 {
 	struct bio *bio;
 	int rw = bio_data_dir(*bio_orig);
@@ -203,8 +185,6 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 	struct bvec_iter iter;
 	unsigned i;
 
-	if (force)
-		goto bounce;
 	bio_for_each_segment(from, *bio_orig, iter)
 		if (page_to_pfn(from.bv_page) > queue_bounce_pfn(q))
 			goto bounce;
@@ -216,7 +196,7 @@ bounce:
 	bio_for_each_segment_all(to, bio, i) {
 		struct page *page = to->bv_page;
 
-		if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force)
+		if (page_to_pfn(page) <= queue_bounce_pfn(q))
 			continue;
 
 		to->bv_page = mempool_alloc(pool, q->bounce_gfp);
@@ -254,7 +234,6 @@ bounce:
 
 void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
 {
-	int must_bounce;
 	mempool_t *pool;
 
 	/*
@@ -263,15 +242,13 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
 	if (!bio_has_data(*bio_orig))
 		return;
 
-	must_bounce = must_snapshot_stable_pages(q, *bio_orig);
-
 	/*
 	 * for non-isa bounce case, just check if the bounce pfn is equal
 	 * to or bigger than the highest pfn in the system -- in that case,
 	 * don't waste time iterating over bio segments
 	 */
 	if (!(q->bounce_gfp & GFP_DMA)) {
-		if (queue_bounce_pfn(q) >= blk_max_pfn && !must_bounce)
+		if (queue_bounce_pfn(q) >= blk_max_pfn)
 			return;
 		pool = page_pool;
 	} else {
@@ -282,7 +259,7 @@
 	/*
 	 * slow path
 	 */
-	__blk_queue_bounce(q, bio_orig, pool, must_bounce);
+	__blk_queue_bounce(q, bio_orig, pool);
 }
 
 EXPORT_SYMBOL(blk_queue_bounce);

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 7303b3405520..89fd49184b48 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -118,9 +118,8 @@ struct bio {
 #define BIO_USER_MAPPED	4	/* contains user pages */
 #define BIO_NULL_MAPPED	5	/* contains invalid user pages */
 #define BIO_QUIET	6	/* Make BIO Quiet */
-#define BIO_SNAP_STABLE	7	/* bio data must be snapshotted during write */
-#define BIO_CHAIN	8	/* chained bio, ->bi_remaining in effect */
-#define BIO_REFFED	9	/* bio has elevated ->bi_cnt */
+#define BIO_CHAIN	7	/* chained bio, ->bi_remaining in effect */
+#define BIO_REFFED	8	/* bio has elevated ->bi_cnt */
 
 /*
  * Flags starting here get preserved by bio_reset() - this includes
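
For context, a minimal sketch (not part of the patch) of the standard
stable-pages protocol that makes the forced BIO_SNAP_STABLE bouncing
redundant. The wrappers wb_submit_page() and fs_modify_page() are
hypothetical; set_page_writeback(), end_page_writeback(),
wait_for_stable_page(), lock_page() and set_page_dirty() are the
existing mm helpers:

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Writeback side: flag the page as under IO before the bio is submitted. */
static void wb_submit_page(struct page *page)
{
	set_page_writeback(page);	/* sets PG_writeback */
	/*
	 * ... build and submit a bio for the page; the bio's endio
	 * handler calls end_page_writeback() once the IO completes ...
	 */
}

/* Modification side: block redirtying while the device needs stable pages. */
static void fs_modify_page(struct page *page)
{
	lock_page(page);
	/*
	 * No-op unless bdi_cap_stable_pages_required() is set for the
	 * backing device; otherwise waits for PG_writeback to clear so
	 * a page under IO is not modified while the device reads it.
	 */
	wait_for_stable_page(page);
	set_page_dirty(page);
	unlock_page(page);
}

Because JBD submitted its data buffers for IO without setting
PageWriteback, wait_for_stable_page() never blocked for those writes,
which is why the forced bounce path existed in the first place.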