From patchwork Mon Oct 15 20:08:52 2012
X-Patchwork-Submitter: Kent Overstreet
X-Patchwork-Id: 1595781
From: Kent Overstreet
To: linux-bcache@vger.kernel.org, linux-kernel@vger.kernel.org, dm-devel@redhat.com
Date: Mon, 15 Oct 2012 13:08:52 -0700
Message-Id: <1350331769-14856-19-git-send-email-koverstreet@google.com>
In-Reply-To: <1350331769-14856-1-git-send-email-koverstreet@google.com>
References: <1350331769-14856-1-git-send-email-koverstreet@google.com>
Cc: tj@kernel.org, axboe@kernel.dk, Kent Overstreet, vgoyal@redhat.com
Subject: [dm-devel] [PATCH v4 18/24] bounce: Refactor __blk_queue_bounce to not use bi_io_vec

A bunch of what __blk_queue_bounce() was doing was problematic for the
immutable bvec work; this cleans that up and the code is quite a bit
smaller, too.

The __bio_for_each_segment() in copy_to_high_bio_irq() was changed
because that one's looping over the original bio, not the bounce bio -
a later patch renames __bio_for_each_segment() ->
bio_for_each_segment_all(), and documents that
bio_for_each_segment_all() is only for code that owns the bio.

Signed-off-by: Kent Overstreet
CC: Jens Axboe
---
 mm/bounce.c | 73 ++++++++++++++++---------------------------------------------
 1 file changed, 19 insertions(+), 54 deletions(-)

diff --git a/mm/bounce.c b/mm/bounce.c
index 0420867..3068300 100644
--- a/mm/bounce.c
+++ b/mm/bounce.c
@@ -101,7 +101,7 @@ static void copy_to_high_bio_irq(struct bio *to, struct bio *from)
         struct bio_vec *tovec, *fromvec;
         int i;
 
-        __bio_for_each_segment(tovec, to, i, 0) {
+        bio_for_each_segment(tovec, to, i) {
                 fromvec = from->bi_io_vec + i;
 
                 /*
@@ -181,78 +181,43 @@ static void bounce_end_io_read_isa(struct bio *bio, int err)
 static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
                                mempool_t *pool)
 {
-        struct page *page;
-        struct bio *bio = NULL;
-        int i, rw = bio_data_dir(*bio_orig);
+        struct bio *bio;
+        int rw = bio_data_dir(*bio_orig);
         struct bio_vec *to, *from;
+        unsigned i;
 
-        bio_for_each_segment(from, *bio_orig, i) {
-                page = from->bv_page;
+        bio_for_each_segment(from, *bio_orig, i)
+                if (page_to_pfn(from->bv_page) > queue_bounce_pfn(q))
+                        goto bounce;
 
-                /*
-                 * is destination page below bounce pfn?
-                 */
-                if (page_to_pfn(page) <= queue_bounce_pfn(q))
-                        continue;
-
-                /*
-                 * irk, bounce it
-                 */
-                if (!bio) {
-                        unsigned int cnt = (*bio_orig)->bi_vcnt;
+        return;
+bounce:
+        bio = bio_clone_bioset(*bio_orig, GFP_NOIO, fs_bio_set);
 
-                        bio = bio_alloc(GFP_NOIO, cnt);
-                        memset(bio->bi_io_vec, 0, cnt * sizeof(struct bio_vec));
-                }
-
+        bio_for_each_segment(to, bio, i) {
+                struct page *page = to->bv_page;
 
-                to = bio->bi_io_vec + i;
+                if (page_to_pfn(page) <= queue_bounce_pfn(q))
+                        continue;
 
-                to->bv_page = mempool_alloc(pool, q->bounce_gfp);
-                to->bv_len = from->bv_len;
-                to->bv_offset = from->bv_offset;
                 inc_zone_page_state(to->bv_page, NR_BOUNCE);
+                to->bv_page = mempool_alloc(pool, q->bounce_gfp);
 
                 if (rw == WRITE) {
                         char *vto, *vfrom;
 
-                        flush_dcache_page(from->bv_page);
+                        flush_dcache_page(page);
+
                         vto = page_address(to->bv_page) + to->bv_offset;
-                        vfrom = kmap(from->bv_page) + from->bv_offset;
+                        vfrom = kmap_atomic(page) + to->bv_offset;
                         memcpy(vto, vfrom, to->bv_len);
-                        kunmap(from->bv_page);
+                        kunmap_atomic(vfrom);
                 }
         }
 
-        /*
-         * no pages bounced
-         */
-        if (!bio)
-                return;
-
         trace_block_bio_bounce(q, *bio_orig);
 
-        /*
-         * at least one page was bounced, fill in possible non-highmem
-         * pages
-         */
-        __bio_for_each_segment(from, *bio_orig, i, 0) {
-                to = bio_iovec_idx(bio, i);
-                if (!to->bv_page) {
-                        to->bv_page = from->bv_page;
-                        to->bv_len = from->bv_len;
-                        to->bv_offset = from->bv_offset;
-                }
-        }
-
-        bio->bi_bdev = (*bio_orig)->bi_bdev;
         bio->bi_flags |= (1 << BIO_BOUNCED);
-        bio->bi_sector = (*bio_orig)->bi_sector;
-        bio->bi_rw = (*bio_orig)->bi_rw;
-
-        bio->bi_vcnt = (*bio_orig)->bi_vcnt;
-        bio->bi_idx = (*bio_orig)->bi_idx;
-        bio->bi_size = (*bio_orig)->bi_size;
 
         if (pool == page_pool) {
                 bio->bi_end_io = bounce_end_io_write;
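
For readers following the series, here is a minimal, self-contained userspace sketch
of the scan-then-clone-then-replace control flow that the refactored
__blk_queue_bounce() above follows. This is not kernel code: struct segment,
page_needs_bounce() and alloc_bounce_page() are hypothetical stand-ins for a
bio_vec, the queue_bounce_pfn() test, and mempool_alloc().

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a bio_vec: a buffer plus a flag playing the
 * role of "page_to_pfn() above queue_bounce_pfn()". */
struct segment {
        char *buf;
        size_t len;
        bool high;
};

static bool page_needs_bounce(const struct segment *s)
{
        return s->high;
}

/* Stand-in for mempool_alloc() of a bounce page. */
static char *alloc_bounce_page(size_t len)
{
        return malloc(len);
}

/*
 * Same shape as the refactored __blk_queue_bounce(): scan the original
 * segments and return early if nothing needs bouncing; otherwise clone
 * the whole segment table (as bio_clone_bioset() clones the bio) and
 * replace only the offending entries, copying the data for a write.
 */
static struct segment *bounce_segments(struct segment *orig, int n, bool is_write)
{
        struct segment *clone;
        int i;

        for (i = 0; i < n; i++)
                if (page_needs_bounce(&orig[i]))
                        goto bounce;

        return NULL;    /* nothing above the bounce limit, use orig as-is */

bounce:
        clone = malloc(n * sizeof(*clone));
        memcpy(clone, orig, n * sizeof(*clone));

        for (i = 0; i < n; i++) {
                if (!page_needs_bounce(&clone[i]))
                        continue;

                clone[i].buf = alloc_bounce_page(clone[i].len);
                clone[i].high = false;

                if (is_write)   /* data must reach the bounce buffer */
                        memcpy(clone[i].buf, orig[i].buf, clone[i].len);
        }

        return clone;
}

int main(void)
{
        char low[] = "low", high[] = "HIGH";
        struct segment segs[] = {
                { low,  sizeof(low),  false },
                { high, sizeof(high), true  },
        };
        struct segment *b = bounce_segments(segs, 2, true);

        printf("segment 1 bounced: %s\n", b ? b[1].buf : "(none)");
        return 0;
}

The point of the restructuring, as the diff shows, is that because the clone starts
out as a full copy of the original, both the incrementally built bounce bio and the
old "fill in possible non-highmem pages" pass disappear.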