From patchwork Mon Sep 24 22:34:47 2012
X-Patchwork-Submitter: Kent Overstreet
X-Patchwork-Id: 1500971
From: Kent Overstreet
To: linux-bcache@vger.kernel.org, linux-kernel@vger.kernel.org,
 dm-devel@redhat.com
Cc: axboe@kernel.dk, Nick Piggin, "Ed L. Cashin", Jiri Kosina,
 Kent Overstreet, Jim Paris, Geoff Levand, Steven Rostedt,
 Alasdair Kergon, tj@kernel.org, vgoyal@redhat.com
Date: Mon, 24 Sep 2012 15:34:47 -0700
Message-Id: <1348526106-17074-8-git-send-email-koverstreet@google.com>
In-Reply-To: <1348526106-17074-1-git-send-email-koverstreet@google.com>
References: <1348526106-17074-1-git-send-email-koverstreet@google.com>
Subject: [dm-devel] [PATCH v3 07/26] block: Use bio_sectors() more consistently

A bunch of places in the code weren't using bio_sectors() where they could
have been; converting them now shrinks the later patch that moves
bi_sector/bi_size/bi_idx into a struct bvec_iter.
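For reference: bio_sectors() is already in include/linux/bio.h and is just a
shift of ->bi_size, so every conversion below is behaviour preserving. Here
is a minimal userspace sketch of the equivalence (struct bio trimmed down to
the two fields involved, with unsigned long long standing in for sector_t):

  #include <assert.h>
  #include <stdio.h>

  /* Trimmed-down struct bio: only the two fields this patch touches. */
  struct bio {
          unsigned long long bi_sector;   /* sector_t: starting sector */
          unsigned int       bi_size;     /* remaining I/O size, in bytes */
  };

  /* The helper being rolled out, as defined in include/linux/bio.h. */
  #define bio_sectors(bio) ((bio)->bi_size >> 9)

  int main(void)
  {
          struct bio bio = { .bi_sector = 2048, .bi_size = 4096 };

          /* A length in sectors: the substitution made throughout. */
          assert(bio_sectors(&bio) == (bio.bi_size >> 9));

          /* A position, by contrast, derives from bi_sector; there is no
           * bio_sectors() spelling for a byte offset. */
          printf("start byte %llu, length %u sectors\n",
                 (unsigned long long)(bio.bi_sector << 9),
                 bio_sectors(&bio));
          return 0;
  }

Note the distinction the sketch draws: bio_sectors() is a transfer length,
never a starting offset. That is why the byte offset in ps3vram_do_bio()
below keeps its bi_sector << 9 form (only widened to loff_t) rather than
being converted.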
Signed-off-by: Kent Overstreet
CC: Jens Axboe
CC: "Ed L. Cashin"
CC: Nick Piggin
CC: Jiri Kosina
CC: Jim Paris
CC: Geoff Levand
CC: Alasdair Kergon
CC: dm-devel@redhat.com
CC: Neil Brown
CC: Steven Rostedt
---
 drivers/block/aoe/aoecmd.c   |  2 +-
 drivers/block/brd.c          |  3 +--
 drivers/block/pktcdvd.c      |  2 +-
 drivers/block/ps3vram.c      |  2 +-
 drivers/md/dm-raid1.c        |  2 +-
 drivers/md/raid0.c           |  6 +++---
 drivers/md/raid1.c           | 17 ++++++++---------
 drivers/md/raid10.c          | 24 +++++++++++-------------
 drivers/md/raid5.c           |  8 ++++----
 include/trace/events/block.h | 10 +++++-----
 10 files changed, 36 insertions(+), 40 deletions(-)

diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index de0435e..2b52ebc 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -720,7 +720,7 @@ gettgt(struct aoedev *d, char *addr)
 static inline void
 diskstats(struct gendisk *disk, struct bio *bio, ulong duration, sector_t sector)
 {
-        unsigned long n_sect = bio->bi_size >> 9;
+        unsigned long n_sect = bio_sectors(bio);
         const int rw = bio_data_dir(bio);
         struct hd_struct *part;
         int cpu;
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 531ceb3..d5c4978 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -334,8 +334,7 @@ static void brd_make_request(struct request_queue *q, struct bio *bio)
         int err = -EIO;

         sector = bio->bi_sector;
-        if (sector + (bio->bi_size >> SECTOR_SHIFT) >
-            get_capacity(bdev->bd_disk))
+        if (sector + bio_sectors(bio) > get_capacity(bdev->bd_disk))
                 goto out;

         if (unlikely(bio->bi_rw & REQ_DISCARD)) {
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 26938e8..2c27744 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2433,7 +2433,7 @@ static void pkt_make_request(struct request_queue *q, struct bio *bio)
                 cloned_bio->bi_bdev = pd->bdev;
                 cloned_bio->bi_private = psd;
                 cloned_bio->bi_end_io = pkt_end_io_read_cloned;
-                pd->stats.secs_r += bio->bi_size >> 9;
+                pd->stats.secs_r += bio_sectors(bio);
                 pkt_queue_bio(pd, cloned_bio);
                 return;
         }
diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c
index f58cdcf..1ff38e8 100644
--- a/drivers/block/ps3vram.c
+++ b/drivers/block/ps3vram.c
@@ -553,7 +553,7 @@ static struct bio *ps3vram_do_bio(struct ps3_system_bus_device *dev,
         struct ps3vram_priv *priv = ps3_system_bus_get_drvdata(dev);
         int write = bio_data_dir(bio) == WRITE;
         const char *op = write ? "write" : "read";
-        loff_t offset = bio->bi_sector << 9;
+        loff_t offset = (loff_t)bio->bi_sector << 9;
         int error = 0;
         struct bio_vec *bvec;
         unsigned int i;
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index bc5ddba8..3dac2de 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -457,7 +457,7 @@ static void map_region(struct dm_io_region *io, struct mirror *m,
 {
         io->bdev = m->dev->bdev;
         io->sector = map_sector(m, bio);
-        io->count = bio->bi_size >> 9;
+        io->count = bio_sectors(bio);
 }

 static void hold_bio(struct mirror_set *ms, struct bio *bio)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index a9e4fa9..89016cb 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -485,11 +485,11 @@ static inline int is_io_in_chunk_boundary(struct mddev *mddev,
 {
         if (likely(is_power_of_2(chunk_sects))) {
                 return chunk_sects >= ((bio->bi_sector & (chunk_sects-1))
-                                        + (bio->bi_size >> 9));
+                                        + bio_sectors(bio));
         } else{
                 sector_t sector = bio->bi_sector;
                 return chunk_sects >= (sector_div(sector, chunk_sects)
-                                        + (bio->bi_size >> 9));
+                                        + bio_sectors(bio));
         }
 }

@@ -543,7 +543,7 @@ bad_map:
         printk("md/raid0:%s: make_request bug: can't convert block across chunks"
                " or bigger than %dk %llu %d\n",
                mdname(mddev), chunk_sects / 2,
-               (unsigned long long)bio->bi_sector, bio->bi_size >> 10);
+               (unsigned long long)bio->bi_sector, bio_sectors(bio) / 2);

         bio_io_error(bio);
         return;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 32eab5b..deff0cd 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -267,7 +267,7 @@ static void raid_end_bio_io(struct r1bio *r1_bio)
                          (bio_data_dir(bio) == WRITE) ? "write" : "read",
                          (unsigned long long) bio->bi_sector,
                          (unsigned long long) bio->bi_sector +
-                         (bio->bi_size >> 9) - 1);
+                         bio_sectors(bio) - 1);

                 call_bio_endio(r1_bio);
         }
@@ -458,7 +458,7 @@ static void raid1_end_write_request(struct bio *bio, int error)
                                          " %llu-%llu\n",
                                          (unsigned long long) mbio->bi_sector,
                                          (unsigned long long) mbio->bi_sector +
-                                         (mbio->bi_size >> 9) - 1);
+                                         bio_sectors(mbio) - 1);
                                 call_bio_endio(r1_bio);
                         }
                 }
@@ -1041,7 +1041,7 @@ static void make_request(struct mddev *mddev, struct bio * bio)
         r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);

         r1_bio->master_bio = bio;
-        r1_bio->sectors = bio->bi_size >> 9;
+        r1_bio->sectors = bio_sectors(bio);
         r1_bio->state = 0;
         r1_bio->mddev = mddev;
         r1_bio->sector = bio->bi_sector;
@@ -1119,7 +1119,7 @@ read_again:
                         r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);

                         r1_bio->master_bio = bio;
-                        r1_bio->sectors = (bio->bi_size >> 9) - sectors_handled;
+                        r1_bio->sectors = bio_sectors(bio) - sectors_handled;
                         r1_bio->state = 0;
                         r1_bio->mddev = mddev;
                         r1_bio->sector = bio->bi_sector + sectors_handled;
@@ -1320,14 +1320,14 @@ read_again:
         /* Mustn't call r1_bio_write_done before this next test,
          * as it could result in the bio being freed.
          */
-        if (sectors_handled < (bio->bi_size >> 9)) {
+        if (sectors_handled < bio_sectors(bio)) {
                 r1_bio_write_done(r1_bio);
                 /* We need another r1_bio.  It has already been counted
                  * in bio->bi_phys_segments
                  */
                 r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
                 r1_bio->master_bio = bio;
-                r1_bio->sectors = (bio->bi_size >> 9) - sectors_handled;
+                r1_bio->sectors = bio_sectors(bio) - sectors_handled;
                 r1_bio->state = 0;
                 r1_bio->mddev = mddev;
                 r1_bio->sector = bio->bi_sector + sectors_handled;
@@ -1936,7 +1936,7 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
                 wbio->bi_rw = WRITE;
                 wbio->bi_end_io = end_sync_write;
                 atomic_inc(&r1_bio->remaining);
-                md_sync_acct(conf->mirrors[i].rdev->bdev, wbio->bi_size >> 9);
+                md_sync_acct(conf->mirrors[i].rdev->bdev, bio_sectors(wbio));

                 generic_make_request(wbio);
         }
@@ -2272,8 +2272,7 @@ read_more:
                         r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);

                         r1_bio->master_bio = mbio;
-                        r1_bio->sectors = (mbio->bi_size >> 9)
-                                          - sectors_handled;
+                        r1_bio->sectors = bio_sectors(mbio) - sectors_handled;
                         r1_bio->state = 0;
                         set_bit(R1BIO_ReadError, &r1_bio->state);
                         r1_bio->mddev = mddev;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 1c2eb38..9715aaf 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1075,7 +1075,7 @@ static void make_request(struct mddev *mddev, struct bio * bio)
         /* If this request crosses a chunk boundary, we need to
          * split it.  This will only happen for 1 PAGE (or less) requests.
          */
-        if (unlikely((bio->bi_sector & chunk_mask) + (bio->bi_size >> 9)
+        if (unlikely((bio->bi_sector & chunk_mask) + bio_sectors(bio)
                      > chunk_sects
                      && (conf->geo.near_copies < conf->geo.raid_disks
                          || conf->prev.near_copies < conf->prev.raid_disks))) {
@@ -1115,7 +1115,7 @@ static void make_request(struct mddev *mddev, struct bio * bio)
 bad_map:
         printk("md/raid10:%s: make_request bug: can't convert block across chunks"
                " or bigger than %dk %llu %d\n", mdname(mddev), chunk_sects/2,
-               (unsigned long long)bio->bi_sector, bio->bi_size >> 10);
+               (unsigned long long)bio->bi_sector, bio_sectors(bio) / 2);

         bio_io_error(bio);
         return;
@@ -1130,7 +1130,7 @@ static void make_request(struct mddev *mddev, struct bio * bio)
          */
         wait_barrier(conf);

-        sectors = bio->bi_size >> 9;
+        sectors = bio_sectors(bio);
         while (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
             bio->bi_sector < conf->reshape_progress &&
             bio->bi_sector + sectors > conf->reshape_progress) {
@@ -1232,8 +1232,7 @@ read_again:
                         r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);

                         r10_bio->master_bio = bio;
-                        r10_bio->sectors = ((bio->bi_size >> 9)
-                                            - sectors_handled);
+                        r10_bio->sectors = bio_sectors(bio) - sectors_handled;
                         r10_bio->state = 0;
                         r10_bio->mddev = mddev;
                         r10_bio->sector = bio->bi_sector + sectors_handled;
@@ -1455,7 +1454,7 @@ retry_write:
          * after checking if we need to go around again.
          */

-        if (sectors_handled < (bio->bi_size >> 9)) {
+        if (sectors_handled < bio_sectors(bio)) {
                 one_write_done(r10_bio);
                 /* We need another r10_bio.  It has already been counted
                  * in bio->bi_phys_segments.
@@ -1463,7 +1462,7 @@ retry_write:
                 r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);

                 r10_bio->master_bio = bio;
-                r10_bio->sectors = (bio->bi_size >> 9) - sectors_handled;
+                r10_bio->sectors = bio_sectors(bio) - sectors_handled;

                 r10_bio->mddev = mddev;
                 r10_bio->sector = bio->bi_sector + sectors_handled;
@@ -1984,7 +1983,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
                 d = r10_bio->devs[i].devnum;
                 atomic_inc(&conf->mirrors[d].rdev->nr_pending);
                 atomic_inc(&r10_bio->remaining);
-                md_sync_acct(conf->mirrors[d].rdev->bdev, tbio->bi_size >> 9);
+                md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(tbio));

                 tbio->bi_sector += conf->mirrors[d].rdev->data_offset;
                 tbio->bi_bdev = conf->mirrors[d].rdev->bdev;
@@ -2009,7 +2008,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
                 d = r10_bio->devs[i].devnum;
                 atomic_inc(&r10_bio->remaining);
                 md_sync_acct(conf->mirrors[d].replacement->bdev,
-                             tbio->bi_size >> 9);
+                             bio_sectors(tbio));
                 generic_make_request(tbio);
         }

@@ -2135,13 +2134,13 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
         wbio2 = r10_bio->devs[1].repl_bio;
         if (wbio->bi_end_io) {
                 atomic_inc(&conf->mirrors[d].rdev->nr_pending);
-                md_sync_acct(conf->mirrors[d].rdev->bdev, wbio->bi_size >> 9);
+                md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio));
                 generic_make_request(wbio);
         }
         if (wbio2 && wbio2->bi_end_io) {
                 atomic_inc(&conf->mirrors[d].replacement->nr_pending);
                 md_sync_acct(conf->mirrors[d].replacement->bdev,
-                             wbio2->bi_size >> 9);
+                             bio_sectors(wbio2));
                 generic_make_request(wbio2);
         }
 }
@@ -2571,8 +2570,7 @@ read_more:
         r10_bio = mempool_alloc(conf->r10bio_pool, GFP_NOIO);

         r10_bio->master_bio = mbio;
-        r10_bio->sectors = (mbio->bi_size >> 9)
-                           - sectors_handled;
+        r10_bio->sectors = bio_sectors(mbio) - sectors_handled;
         r10_bio->state = 0;
         set_bit(R10BIO_ReadError,
                 &r10_bio->state);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 45a632c..cbfc424 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -88,7 +88,7 @@ static inline struct hlist_head *stripe_hash(struct r5conf *conf, sector_t sect)
  */
 static inline struct bio *r5_next_bio(struct bio *bio, sector_t sector)
 {
-        int sectors = bio->bi_size >> 9;
+        int sectors = bio_sectors(bio);
         if (bio->bi_sector + sectors < sector + STRIPE_SECTORS)
                 return bio->bi_next;
         else
@@ -3771,7 +3771,7 @@ static int in_chunk_boundary(struct mddev *mddev, struct bio *bio)
 {
         sector_t sector = bio->bi_sector + get_start_sect(bio->bi_bdev);
         unsigned int chunk_sectors = mddev->chunk_sectors;
-        unsigned int bio_sectors = bio->bi_size >> 9;
+        unsigned int bio_sectors = bio_sectors(bio);

         if (mddev->new_chunk_sectors < mddev->chunk_sectors)
                 chunk_sectors = mddev->new_chunk_sectors;
@@ -3861,7 +3861,7 @@ static int bio_fits_rdev(struct bio *bi)
 {
         struct request_queue *q = bdev_get_queue(bi->bi_bdev);

-        if ((bi->bi_size>>9) > queue_max_sectors(q))
+        if (bio_sectors(bi) > queue_max_sectors(q))
                 return 0;
         blk_recount_segments(q, bi);
         if (bi->bi_phys_segments > queue_max_segments(q))
@@ -3931,7 +3931,7 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
                 align_bi->bi_flags &= ~(1 << BIO_SEG_VALID);

                 if (!bio_fits_rdev(align_bi) ||
-                    is_badblock(rdev, align_bi->bi_sector, align_bi->bi_size>>9,
+                    is_badblock(rdev, align_bi->bi_sector, bio_sectors(align_bi),
                             &first_bad, &bad_sectors)) {
                         /* too big in some way, or has a known bad block */
                         bio_put(align_bi);
diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 05c5e61..3c210ac 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -193,7 +193,7 @@ TRACE_EVENT(block_bio_bounce,
                 __entry->dev            = bio->bi_bdev ?
                                           bio->bi_bdev->bd_dev : 0;
                 __entry->sector         = bio->bi_sector;
-                __entry->nr_sector      = bio->bi_size >> 9;
+                __entry->nr_sector      = bio_sectors(bio);
                 blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size);
                 memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
         ),
@@ -230,7 +230,7 @@ TRACE_EVENT(block_bio_complete,
         TP_fast_assign(
                 __entry->dev            = bio->bi_bdev->bd_dev;
                 __entry->sector         = bio->bi_sector;
-                __entry->nr_sector      = bio->bi_size >> 9;
+                __entry->nr_sector      = bio_sectors(bio);
                 __entry->error          = error;
                 blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size);
         ),
@@ -258,7 +258,7 @@ DECLARE_EVENT_CLASS(block_bio,
         TP_fast_assign(
                 __entry->dev            = bio->bi_bdev->bd_dev;
                 __entry->sector         = bio->bi_sector;
-                __entry->nr_sector      = bio->bi_size >> 9;
+                __entry->nr_sector      = bio_sectors(bio);
                 blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size);
                 memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
         ),
@@ -330,7 +330,7 @@ DECLARE_EVENT_CLASS(block_get_rq,
         TP_fast_assign(
                 __entry->dev            = bio ? bio->bi_bdev->bd_dev : 0;
                 __entry->sector         = bio ? bio->bi_sector : 0;
-                __entry->nr_sector      = bio ? bio->bi_size >> 9 : 0;
+                __entry->nr_sector      = bio ? bio_sectors(bio) : 0;
                 blk_fill_rwbs(__entry->rwbs,
                               bio ? bio->bi_rw : 0, __entry->nr_sector);
                 memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
@@ -506,7 +506,7 @@ TRACE_EVENT(block_bio_remap,
         TP_fast_assign(
                 __entry->dev            = bio->bi_bdev->bd_dev;
                 __entry->sector         = bio->bi_sector;
-                __entry->nr_sector      = bio->bi_size >> 9;
+                __entry->nr_sector      = bio_sectors(bio);
                 __entry->old_dev        = dev;
                 __entry->old_sector     = from;
                 blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_size);