From patchwork Wed Apr 27 09:08:59 2011
X-Patchwork-Submitter: Joe Thornber
X-Patchwork-Id: 736251
From: Joe Thornber
To: Christoph Hellwig
Cc: dm-devel@redhat.com, ejt@redhat.com
Subject: Re: [dm-devel] dm-thinp bug
Date: Wed, 27 Apr 2011 10:08:59 +0100
Message-ID: <1303895339.4679.51.camel@ubuntu>
In-Reply-To: <20110426184742.GA13880@infradead.org>
References: <20110426184742.GA13880@infradead.org>

On Tue, 2011-04-26 at 14:47 -0400, Christoph Hellwig wrote:
> The virtio bug on says that it gets more segments than it allows to
> higher layers.

I think this is simply because I omitted the iterate_devices callback
in the thinp target.  I'll try and find time to test this patch later
this week.

Alternatively you could just switch to multisnap which already has
this in.

- Joe

---
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

diff --git a/drivers/md/dm-thin-prov.c b/drivers/md/dm-thin-prov.c
index 4d382c8..054b4f9 100644
--- a/drivers/md/dm-thin-prov.c
+++ b/drivers/md/dm-thin-prov.c
@@ -643,6 +643,14 @@ thinp_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	blk_limits_io_opt(limits, data_dev_block_size(tc));
 }
 
+static int thinp_iterate_devices(struct dm_target *ti,
+				 iterate_devices_callout_fn fn,
+				 void *data)
+{
+	struct thinp_c *tc = ti->private;
+	return fn(ti, tc->data_dev, 0, tc->data_size << tc->block_shift, data);
+}
+
 /* Thinp pool control target interface. */
 static struct target_type thinp_target = {
 	.name = "thin-prov",
@@ -658,6 +666,7 @@ static struct target_type thinp_target = {
 	.status = thinp_status,
 	.merge = thinp_bvec_merge,
 	.io_hints = thinp_io_hints,
+	.iterate_devices = thinp_iterate_devices,
 };
 
 static int __init dm_thinp_init(void)
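
For context, here is a simplified sketch (not part of the patch) of the
kind of callout dm core passes to .iterate_devices when it builds a
table's queue_limits.  It is loosely modelled on dm_set_device_limits()
in drivers/md/dm-table.c of this era; the function name
example_stack_dev_limits and the trimmed body are illustrative, not the
actual dm core code:

#include <linux/device-mapper.h>
#include <linux/blkdev.h>

/*
 * Illustrative sketch: dm core hands a callout of this shape to
 * ti->type->iterate_devices() for every target in a table, so each
 * underlying device's queue_limits (max_segments, max_sectors, ...)
 * get folded into the table's limits.  A target that omits
 * .iterate_devices is never walked here, which is how the stacked
 * device can end up advertising more segments than the virtio disk
 * underneath accepts.
 */
static int example_stack_dev_limits(struct dm_target *ti, struct dm_dev *dev,
				    sector_t start, sector_t len, void *data)
{
	struct queue_limits *limits = data;

	/*
	 * Fold the underlying block device's limits into the table's;
	 * the real dm core merely warns if the result is misaligned.
	 */
	if (bdev_stack_limits(limits, dev->bdev, start) < 0)
		DMWARN("device limits are misaligned");

	return 0;
}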