Message ID | 1405720783-7904-1-git-send-email-snitzer@redhat.com (mailing list archive)
---|---
State | Accepted, archived |
Delegated to | Mike Snitzer
On Fri, Jul 18 2014 at 5:59pm -0400,
Mike Snitzer <snitzer@redhat.com> wrote:
> Before, if the block layer's limit stacking didn't establish an
...
I revised this header when staging for 3.17 in for-next, see:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=a99c223c2a88aeb404afaa1bfce91eb0dd36b80b
Before, if the block layer's limit stacking didn't establish an
optimal_io_size that was compatible with the dm-thin-pool's data block
size, we'd set optimal_io_size to the data block size and
minimum_io_size to 0 (which the block layer adjusts to be
physical_block_size).

Update pool_io_hints() to set both minimum_io_size and optimal_io_size
to the pool's data block size.

This fixes an issue reported where mkfs.xfs would create more XFS
Allocation Groups on thinp volumes than on a normal linear LV of
comparable size, see:
https://bugzilla.redhat.com/show_bug.cgi?id=1003227

Reported-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 drivers/md/dm-thin.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index fc9c848..c92e1b9 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -3112,7 +3112,7 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	 */
 	if (io_opt_sectors < pool->sectors_per_block ||
 	    do_div(io_opt_sectors, pool->sectors_per_block)) {
-		blk_limits_io_min(limits, 0);
+		blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
 		blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
 	}
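As the header notes, passing 0 to blk_limits_io_min() leaves it to the block layer to round the hint up to physical_block_size. For reference, a paraphrased sketch of that clamping logic from block/blk-settings.c as it stood in this era (a sketch, not a verbatim copy of the kernel source):

/*
 * Sketch of blk_limits_io_min(): an io_min of 0 is rounded up first to
 * the logical block size and then to the physical block size, which is
 * why the old pool_io_hints() code ended up advertising
 * minimum_io_size == physical_block_size.
 */
void blk_limits_io_min(struct queue_limits *limits, unsigned int min)
{
	limits->io_min = min;

	/* io_min can't be smaller than the logical block size */
	if (limits->io_min < limits->logical_block_size)
		limits->io_min = limits->logical_block_size;

	/* ... nor smaller than the physical block size */
	if (limits->io_min < limits->physical_block_size)
		limits->io_min = limits->physical_block_size;
}

With the patch applied, minimum_io_size and optimal_io_size are both reported as the pool's data block size, so mkfs.xfs no longer infers a striped geometry from mismatched hints and creates the same number of Allocation Groups it would on a comparable linear LV.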