
dm-thin: Export proper discard_granularity

Message ID 1402489929-16466-1-git-send-email-lczerner@redhat.com (mailing list archive)
State Accepted, archived
Delegated to: Mike Snitzer

Commit Message

Lukas Czerner June 11, 2014, 12:32 p.m. UTC
Currently, if the underlying device is discard capable and
discard_passdown is enabled, the discard_granularity is inherited
from that device.

This poses a problem when the device discard_granularity is smaller
than the thin volume chunk size: discard requests are then not
guaranteed to be chunk-size aligned, and unaligned requests are
ignored by dm-thin.

Fix this by setting the thin volume discard granularity to the bigger
of the two values, i.e. max(device discard_granularity, thin volume
chunk size). Strictly speaking, taking the maximum is not necessary
today, because the thin volume chunk size is always >= the device
discard_granularity. However, that only holds because dm-thin cannot
yet handle discard requests bigger than the chunk size, which is
hopefully going to change soon, so using max() keeps the code future
proof.

RHBZ: 1106856

Reported-by: Zdenek Kabelac <zkabelac@fedoraproject.org>
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
---
 drivers/md/dm-thin.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
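
To make the failure mode concrete, here is a stand-alone user-space sketch
(not dm-thin code; the 64 KiB chunk and the request sizes are example
values) of why a discard that does not cover a whole pool chunk unmaps
nothing, while chunk-sized, chunk-aligned discards always take effect:

/* Sketch of the alignment problem described above (example values only). */
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SIZE   512ULL
#define CHUNK_SECTORS 128ULL                        /* example 64 KiB thin chunk */
#define CHUNK_BYTES   (CHUNK_SECTORS * SECTOR_SIZE)

/* Number of whole chunks covered by a discard of [start, start + len). */
static uint64_t whole_chunks_covered(uint64_t start, uint64_t len)
{
	uint64_t end   = start + len;
	uint64_t first = (start + CHUNK_BYTES - 1) / CHUNK_BYTES; /* round start up */
	uint64_t last  = end / CHUNK_BYTES;                       /* round end down */

	return last > first ? last - first : 0;
}

int main(void)
{
	/* With a 4 KiB discard_granularity the caller may send discards that
	 * straddle, but never fill, a 64 KiB chunk: nothing gets unmapped. */
	printf("4 KiB discard at 60 KiB covers %llu whole chunk(s)\n",
	       (unsigned long long)whole_chunks_covered(60 * 1024, 4 * 1024));

	/* With discard_granularity raised to the chunk size, requests arrive
	 * in chunk-sized, chunk-aligned units and always cover whole chunks. */
	printf("64 KiB discard at 64 KiB covers %llu whole chunk(s)\n",
	       (unsigned long long)whole_chunks_covered(64 * 1024, 64 * 1024));
	return 0;
}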

Comments

Mike Snitzer June 11, 2014, 9:31 p.m. UTC | #1
On Wed, Jun 11 2014 at  8:32am -0400,
Lukas Czerner <lczerner@redhat.com> wrote:

> Currently, if the underlying device is discard capable and
> discard_passdown is enabled, the discard_granularity is inherited
> from that device.
> 
> This poses a problem when the device discard_granularity is smaller
> than the thin volume chunk size: discard requests are then not
> guaranteed to be chunk-size aligned, and unaligned requests are
> ignored by dm-thin.
> 
> Fix this by setting the thin volume discard granularity to the bigger
> of the two values, i.e. max(device discard_granularity, thin volume
> chunk size). Strictly speaking, taking the maximum is not necessary
> today, because the thin volume chunk size is always >= the device
> discard_granularity. However, that only holds because dm-thin cannot
> yet handle discard requests bigger than the chunk size, which is
> hopefully going to change soon, so using max() keeps the code future
> proof.
> 
> RHBZ: 1106856
> 
> Reported-by: Zdenek Kabelac <zkabelac@fedoraproject.org>
> Signed-off-by: Lukas Czerner <lczerner@redhat.com>

Hi Lukas,

I missed this submission on dm-devel until now, but I had already
picked this patch up earlier from the BZ; see the patch I staged in
linux-next here:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=09869de57ed2728ae3c619803932a86cb0e2c4f8

Joe Thornber June 12, 2014, 8:54 a.m. UTC | #2
ack

On Wed, Jun 11, 2014 at 02:32:09PM +0200, Lukas Czerner wrote:
> Currently, if the underlying device is discard capable and
> discard_passdown is enabled, the discard_granularity is inherited
> from that device.
> 
> This poses a problem when the device discard_granularity is smaller
> than the thin volume chunk size: discard requests are then not
> guaranteed to be chunk-size aligned, and unaligned requests are
> ignored by dm-thin.
> 
> Fix this by setting the thin volume discard granularity to the bigger
> of the two values, i.e. max(device discard_granularity, thin volume
> chunk size). Strictly speaking, taking the maximum is not necessary
> today, because the thin volume chunk size is always >= the device
> discard_granularity. However, that only holds because dm-thin cannot
> yet handle discard requests bigger than the chunk size, which is
> hopefully going to change soon, so using max() keeps the code future
> proof.
> 
> RHBZ: 1106856
> 
> Reported-by: Zdenek Kabelac <zkabelac@fedoraproject.org>
> Signed-off-by: Lukas Czerner <lczerner@redhat.com>
> ---
>  drivers/md/dm-thin.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 242ac2e..fdd7089 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -3068,7 +3068,9 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
>  	 */
>  	if (pt->adjusted_pf.discard_passdown) {
>  		data_limits = &bdev_get_queue(pt->data_dev->bdev)->limits;
> -		limits->discard_granularity = data_limits->discard_granularity;
> +		limits->discard_granularity =
> +				max(data_limits->discard_granularity,
> +				    pool->sectors_per_block << SECTOR_SHIFT);
>  	} else
>  		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
>  }
> -- 
> 1.8.3.1
> 


Patch

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 242ac2e..fdd7089 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -3068,7 +3068,9 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	 */
 	if (pt->adjusted_pf.discard_passdown) {
 		data_limits = &bdev_get_queue(pt->data_dev->bdev)->limits;
-		limits->discard_granularity = data_limits->discard_granularity;
+		limits->discard_granularity =
+				max(data_limits->discard_granularity,
+				    pool->sectors_per_block << SECTOR_SHIFT);
 	} else
 		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
 }
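
For reference, the effect on the exported value can be modeled outside the
kernel. The stand-alone sketch below uses example numbers (a 4 KiB device
discard_granularity and a 128-sector, i.e. 64 KiB, pool chunk) and the
kernel's SECTOR_SHIFT of 9 to show what limits->discard_granularity becomes
before and after this change in the discard_passdown case:

/* Stand-alone model of the calculation added in set_discard_limits();
 * the device and pool values below are examples, not from the patch. */
#include <stdio.h>
#include <stdint.h>

#define SECTOR_SHIFT 9   /* 512-byte sectors, as in the kernel */

static uint32_t max_u32(uint32_t a, uint32_t b)
{
	return a > b ? a : b;
}

int main(void)
{
	uint32_t device_discard_granularity = 4096; /* e.g. a 4 KiB-granularity SSD */
	uint32_t pool_sectors_per_block     = 128;  /* 128 sectors = 64 KiB chunk   */

	/* Before the patch, passdown exported the device value unchanged. */
	uint32_t before = device_discard_granularity;

	/* After the patch, the exported granularity is never smaller than
	 * the pool chunk size (sectors_per_block << SECTOR_SHIFT bytes). */
	uint32_t after = max_u32(device_discard_granularity,
				 pool_sectors_per_block << SECTOR_SHIFT);

	printf("discard_granularity: before=%u, after=%u\n", before, after);
	return 0;
}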