From: Mike Snitzer <snitzer@redhat.com>
To: dm-devel@redhat.com
Cc: Morgan.Mears@netapp.com, ejt@redhat.com
Date: Thu, 6 Mar 2014 19:47:27 -0500
Message-Id: <1394153248-32217-2-git-send-email-snitzer@redhat.com>
In-Reply-To: <1394153248-32217-1-git-send-email-snitzer@redhat.com>
References: <1394153248-32217-1-git-send-email-snitzer@redhat.com>
Subject: [dm-devel] [PATCH for-3.15 2/3] dm era: support non power-of-2 blocksize

The dm-era target is expected to be paired with the dm-cache target,
which has support for a non power-of-2 data blocksize.  So dm-era
should also support an origin data device blocksize that is non
power-of-2.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-era-target.c | 42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c
index f772351..d7bd7b3 100644
--- a/drivers/md/dm-era-target.c
+++ b/drivers/md/dm-era-target.c
@@ -1080,9 +1080,9 @@ struct era {
 	struct dm_dev *metadata_dev;
 	struct dm_dev *origin_dev;
 
-	uint32_t block_size;
 	dm_block_t nr_blocks;
-	unsigned sectors_per_block_shift;
+	uint32_t sectors_per_block;
+	int sectors_per_block_shift;
 
 	struct era_metadata *md;
 	struct workqueue_struct *wq;
@@ -1113,9 +1113,21 @@ struct rpc {
 /*----------------------------------------------------------------
  * Remapping.
  *---------------------------------------------------------------*/
+static bool block_size_is_power_of_two(struct era *era)
+{
+	return era->sectors_per_block_shift >= 0;
+}
+
 static dm_block_t get_block(struct era *era, struct bio *bio)
 {
-	return bio->bi_iter.bi_sector >> era->sectors_per_block_shift;
+	dm_block_t block_nr = bio->bi_iter.bi_sector;
+
+	if (!block_size_is_power_of_two(era))
+		(void) sector_div(block_nr, era->sectors_per_block);
+	else
+		block_nr >>= era->sectors_per_block_shift;
+
+	return block_nr;
 }
 
 static void remap_to_origin(struct era *era, struct bio *bio)
@@ -1336,7 +1348,7 @@ static void era_destroy(struct era *era)
 
 static dm_block_t calc_nr_blocks(struct era *era)
 {
-	return dm_sector_div_up(era->ti->len, era->block_size);
+	return dm_sector_div_up(era->ti->len, era->sectors_per_block);
 }
 
 /*
@@ -1376,22 +1388,26 @@ static int era_ctr(struct dm_target *ti, unsigned argc, char **argv)
 		return -EINVAL;
 	}
 
-	r = sscanf(argv[2], "%u%c", &era->block_size, &dummy);
+	r = sscanf(argv[2], "%u%c", &era->sectors_per_block, &dummy);
 	if (r != 1) {
 		ti->error = "Error parsing block size";
 		era_destroy(era);
 		return -EINVAL;
 	}
 
-	era->sectors_per_block_shift = __ffs(era->block_size);
 
-	r = dm_set_target_max_io_len(ti, era->block_size);
+	r = dm_set_target_max_io_len(ti, era->sectors_per_block);
 	if (r) {
 		ti->error = "could not set max io len";
 		era_destroy(era);
 		return -EINVAL;
 	}
 
-	md = metadata_open(era->metadata_dev->bdev, era->block_size, true);
+	if (era->sectors_per_block & (era->sectors_per_block - 1))
+		era->sectors_per_block_shift = -1;
+	else
+		era->sectors_per_block_shift = __ffs(era->sectors_per_block);
+
+	md = metadata_open(era->metadata_dev->bdev, era->sectors_per_block, true);
 	if (IS_ERR(md)) {
 		ti->error = "Error reading metadata";
 		era_destroy(era);
@@ -1549,7 +1565,7 @@ static void era_status(struct dm_target *ti, status_type_t type,
 		format_dev_t(buf, era->metadata_dev->bdev->bd_dev);
 		DMEMIT("%s ", buf);
 		format_dev_t(buf, era->origin_dev->bdev->bd_dev);
-		DMEMIT("%s %u", buf, era->block_size);
+		DMEMIT("%s %u", buf, era->sectors_per_block);
 		break;
 	}
@@ -1614,12 +1630,12 @@ static void era_io_hints(struct dm_target *ti, struct queue_limits *limits)
 
 	/*
 	 * If the system-determined stacked limits are compatible with the
-	 * cache's blocksize (io_opt is a factor) do not override them.
+	 * era device's blocksize (io_opt is a factor) do not override them.
 	 */
-	if (io_opt_sectors < era->block_size ||
-	    do_div(io_opt_sectors, era->block_size)) {
+	if (io_opt_sectors < era->sectors_per_block ||
+	    do_div(io_opt_sectors, era->sectors_per_block)) {
 		blk_limits_io_min(limits, 0);
-		blk_limits_io_opt(limits, era->block_size << SECTOR_SHIFT);
+		blk_limits_io_opt(limits, era->sectors_per_block << SECTOR_SHIFT);
 	}
 }
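For reference, the remapping arithmetic this patch introduces can be sketched as standalone userspace C. This is an illustrative model only, not kernel code: the kernel's sector_div() is replaced with the plain division operator, __ffs() with the compiler builtin __builtin_ctz(), and the struct is a cut-down stand-in for struct era.

```c
#include <stdbool.h>
#include <stdint.h>

/* Cut-down model of the fields the patch adds to struct era. */
struct era_model {
	uint32_t sectors_per_block;   /* origin blocksize, in 512-byte sectors */
	int sectors_per_block_shift;  /* log2(blocksize), or -1 if not a power of 2 */
};

/* Mirrors the constructor logic: x is a power of 2 iff (x & (x - 1)) == 0. */
static void era_model_init(struct era_model *e, uint32_t sectors_per_block)
{
	e->sectors_per_block = sectors_per_block;
	if (sectors_per_block & (sectors_per_block - 1))
		e->sectors_per_block_shift = -1;  /* force the division path */
	else
		e->sectors_per_block_shift = __builtin_ctz(sectors_per_block);
}

static bool block_size_is_power_of_two(const struct era_model *e)
{
	return e->sectors_per_block_shift >= 0;
}

/* get_block(): shift when the blocksize is a power of 2, divide otherwise. */
static uint64_t get_block(const struct era_model *e, uint64_t bi_sector)
{
	if (!block_size_is_power_of_two(e))
		return bi_sector / e->sectors_per_block;  /* sector_div() in the kernel */
	return bi_sector >> e->sectors_per_block_shift;
}
```

Both paths compute the same block number; the shift path is kept because it avoids a 64-bit division in the I/O hot path on 32-bit CPUs, which is why the patch caches the shift at constructor time rather than always dividing.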