From patchwork Wed Aug 7 21:54:23 2013
X-Patchwork-Submitter: Kent Overstreet
X-Patchwork-Id: 2840639
X-Patchwork-Delegate: snitzer@redhat.com
From: Kent Overstreet
To: axboe@kernel.dk
Cc: Kent Overstreet, linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	dm-devel@redhat.com, linux-fsdevel@vger.kernel.org
Date: Wed, 7 Aug 2013 14:54:23 -0700
Message-Id: <1375912471-5106-15-git-send-email-kmo@daterainc.com>
In-Reply-To: <1375912471-5106-1-git-send-email-kmo@daterainc.com>
References: <1375912471-5106-1-git-send-email-kmo@daterainc.com>
List-Id: device-mapper development
Subject: [dm-devel] [PATCH 14/22] block: Kill bio_iovec_idx(), __bio_iovec()

bio_iovec_idx() and __bio_iovec() don't have any valid uses anymore -
previous users have been converted to bio_iter_iovec() or other methods.

__BVEC_END() has to go too: the bvec array can't be indexed directly to find
the last biovec, because we might only be using the first portion of it.
Instead, we have to iterate over the bvec array with bio_for_each_segment(),
which checks against the current value of bi_iter.bi_size.
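To illustrate, here is a minimal sketch of the pattern that replaces
__BVEC_END().  The helper name and wrapper are purely illustrative (this
patch does not add them); the loop body is the same one
blk_phys_contig_segment() uses below:

#include <linux/bio.h>

/*
 * Illustrative only: find the last biovec that @bio actually covers by
 * walking it with the iterator.  The segment whose length equals the
 * remaining bi_iter.bi_size is the final one - bi_io_vec[bi_vcnt - 1]
 * may lie beyond what this bio currently covers, e.g. after a split.
 */
static struct bio_vec bio_last_bvec(struct bio *bio)
{
	struct bio_vec bv = { NULL };	/* meaningful only if bio_has_data(bio) */
	struct bvec_iter iter;

	bio_for_each_segment(bv, bio, iter)
		if (bv.bv_len == iter.bi_size)
			break;

	return bv;
}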
Signed-off-by: Kent Overstreet
Cc: Jens Axboe
---
 block/blk-merge.c   | 13 +++++++++++--
 include/linux/bio.h | 26 ++++++++------------------
 2 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8940562..b53ddac 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -88,6 +88,9 @@ EXPORT_SYMBOL(blk_recount_segments);
 static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 				   struct bio *nxt)
 {
+	struct bio_vec end_bv, nxt_bv;
+	struct bvec_iter iter;
+
 	if (!blk_queue_cluster(q))
 		return 0;
 
@@ -98,14 +101,20 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
 	if (!bio_has_data(bio))
 		return 1;
 
-	if (!BIOVEC_PHYS_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)))
+	bio_for_each_segment(end_bv, bio, iter)
+		if (end_bv.bv_len == iter.bi_size)
+			break;
+
+	nxt_bv = bio_iovec(nxt);
+
+	if (!BIOVEC_PHYS_MERGEABLE(&end_bv, &nxt_bv))
 		return 0;
 
 	/*
 	 * bio and nxt are contiguous in memory; check if the queue allows
 	 * these two to be merged into one
 	 */
-	if (BIO_SEG_BOUNDARY(q, bio, nxt))
+	if (BIOVEC_SEG_BOUNDARY(q, &end_bv, &nxt_bv))
 		return 1;
 
 	return 0;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 9736683..486a997 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -61,9 +61,6 @@
  * various member access, note that bio_data should of course not be used
  * on highmem page vectors
  */
-#define bio_iovec_idx(bio, idx)	(&((bio)->bi_io_vec[(idx)]))
-#define __bio_iovec(bio)	bio_iovec_idx((bio), (bio)->bi_iter.bi_idx)
-
 #define __bvec_iter_bvec(bvec, iter)	(&(bvec)[(iter).bi_idx])
 
 #define bvec_iter_page(bvec, iter)				\
@@ -143,19 +140,16 @@ static inline void *bio_data(struct bio *bio)
  * permanent PIO fall back, user is probably better off disabling highmem
  * I/O completely on that queue (see ide-dma for example)
  */
-#define __bio_kmap_atomic(bio, idx)				\
-	(kmap_atomic(bio_iovec_idx((bio), (idx))->bv_page) +	\
-	 bio_iovec_idx((bio), (idx))->bv_offset)
+#define __bio_kmap_atomic(bio, iter)				\
+	(kmap_atomic(bio_iter_iovec((bio), (iter)).bv_page) +	\
+	 bio_iter_iovec((bio), (iter)).bv_offset)
 
-#define __bio_kunmap_atomic(addr) kunmap_atomic(addr)
+#define __bio_kunmap_atomic(addr)	kunmap_atomic(addr)
 
 /*
  * merge helpers etc
  */
 
-#define __BVEC_END(bio)		bio_iovec_idx((bio), (bio)->bi_vcnt - 1)
-#define __BVEC_START(bio)	bio_iovec_idx((bio), (bio)->bi_iter.bi_idx)
-
 /* Default implementation of BIOVEC_PHYS_MERGEABLE */
 #define __BIOVEC_PHYS_MERGEABLE(vec1, vec2)	\
 	((bvec_to_phys((vec1)) + (vec1)->bv_len) == bvec_to_phys((vec2)))
@@ -172,8 +166,6 @@ static inline void *bio_data(struct bio *bio)
 	(((addr1) | (mask)) == (((addr2) - 1) | (mask)))
 #define BIOVEC_SEG_BOUNDARY(q, b1, b2) \
 	__BIO_SEG_BOUNDARY(bvec_to_phys((b1)), bvec_to_phys((b2)) + (b2)->bv_len, queue_segment_boundary((q)))
-#define BIO_SEG_BOUNDARY(q, b1, b2) \
-	BIOVEC_SEG_BOUNDARY((q), __BVEC_END((b1)), __BVEC_START((b2)))
 
 #define bio_io_error(bio) bio_endio((bio), -EIO)
 
@@ -182,9 +174,7 @@ static inline void *bio_data(struct bio *bio)
  * before it got to the driver and the driver won't own all of it
  */
 #define bio_for_each_segment_all(bvl, bio, i)				\
-	for (i = 0;							\
-	     bvl = bio_iovec_idx((bio), (i)), i < (bio)->bi_vcnt;	\
-	     i++)
+	for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++)
 
 static inline void bvec_iter_advance(struct bio_vec *bv, struct bvec_iter *iter,
 				     unsigned bytes)
@@ -449,15 +439,15 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 }
 #endif
-static inline char *__bio_kmap_irq(struct bio *bio, unsigned short idx,
+static inline char *__bio_kmap_irq(struct bio *bio, struct bvec_iter iter,
 				   unsigned long *flags)
 {
-	return bvec_kmap_irq(bio_iovec_idx(bio, idx), flags);
+	return bvec_kmap_irq(&bio_iter_iovec(bio, iter), flags);
 }
 
 #define __bio_kunmap_irq(buf, flags)	bvec_kunmap_irq(buf, flags)
 
 #define bio_kmap_irq(bio, flags) \
-	__bio_kmap_irq((bio), (bio)->bi_iter.bi_idx, (flags))
+	__bio_kmap_irq((bio), (bio)->bi_iter, (flags))
 #define bio_kunmap_irq(buf,flags) __bio_kunmap_irq(buf, flags)
 
 static inline bool bio_is_rw(struct bio *bio)
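
As a usage note, a hypothetical direct caller of __bio_kmap_irq() now passes
the bio's bvec_iter instead of a segment index, mirroring what the
bio_kmap_irq() wrapper does above (given a struct bio *bio that has data):

	unsigned long flags;
	char *buf;

	/* map the segment the bio's iterator currently points at */
	buf = __bio_kmap_irq(bio, bio->bi_iter, &flags);
	/* ... use buf ... */
	__bio_kunmap_irq(buf, &flags);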