rbd: set io_min, io_opt and discard_granularity to alloc_size

Message ID 20190318191031.11066-1-idryomov@gmail.com (mailing list archive)
State New, archived
Series rbd: set io_min, io_opt and discard_granularity to alloc_size

Commit Message

Ilya Dryomov March 18, 2019, 7:10 p.m. UTC
Now that we have alloc_size that controls our discard behavior, it
doesn't make sense to have these set to object (set) size.  alloc_size
defaults to 64k, but because discard_granularity is likely 4M, only
ranges that are equal to or bigger than 4M can be considered during
fstrim.  A smaller io_min is also more likely to be met, resulting in
fewer deferred writes on bluestore OSDs.

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
---
 drivers/block/rbd.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
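
As context for the fstrim argument above: fstrim(8) boils down to the FITRIM ioctl, and the filesystem (ext4, for example) raises the caller's minimum extent length to the queue's discard_granularity before it starts looking for free extents to discard, so with the old 4M granularity any smaller free range was skipped.  The following is a minimal user-space sketch of that interface, not part of the patch; the mountpoint argument and the bare-bones error handling are illustrative only.

/*
 * Minimal sketch of what fstrim(8) does: FITRIM with minlen = 0,
 * which the filesystem raises to the block queue's
 * discard_granularity before looking for free extents.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* FITRIM, struct fstrim_range */

int main(int argc, char **argv)
{
	struct fstrim_range range;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&range, 0, sizeof(range));
	range.len = UINT64_MAX;
	range.minlen = 0;	/* effectively becomes discard_granularity */

	if (ioctl(fd, FITRIM, &range) < 0) {
		perror("FITRIM");
		return 1;
	}

	/* on return, range.len holds the number of bytes trimmed */
	printf("trimmed %llu bytes\n", (unsigned long long)range.len);
	close(fd);
	return 0;
}

With the pre-patch limits a free extent had to be at least 4M (one object set) to be trimmed; after this patch the threshold drops to alloc_size, 64k by default.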

Comments

Jason Dillaman March 18, 2019, 7:15 p.m. UTC | #1
On Mon, Mar 18, 2019 at 3:10 PM Ilya Dryomov <idryomov@gmail.com> wrote:
>
> Now that we have alloc_size that controls our discard behavior, it
> doesn't make sense to have these set to object (set) size.  alloc_size
> defaults to 64k, but because discard_granularity is likely 4M, only
> ranges that are equal to or bigger than 4M can be considered during
> fstrim.  A smaller io_min is also more likely to be met, resulting in
> fewer deferred writes on bluestore OSDs.
>
> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
> ---
>  drivers/block/rbd.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index a32d5b54d59e..e037f1ab2fde 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -834,7 +834,7 @@ static int parse_rbd_opts_token(char *c, void *private)
>                 pctx->opts->queue_depth = intval;
>                 break;
>         case Opt_alloc_size:
> -               if (intval < 1) {
> +               if (intval < SECTOR_SIZE) {
>                         pr_err("alloc_size out of range\n");
>                         return -EINVAL;
>                 }
> @@ -4204,12 +4204,12 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
>         q->limits.max_sectors = queue_max_hw_sectors(q);
>         blk_queue_max_segments(q, USHRT_MAX);
>         blk_queue_max_segment_size(q, UINT_MAX);
> -       blk_queue_io_min(q, objset_bytes);
> -       blk_queue_io_opt(q, objset_bytes);
> +       blk_queue_io_min(q, rbd_dev->opts->alloc_size);
> +       blk_queue_io_opt(q, rbd_dev->opts->alloc_size);
>
>         if (rbd_dev->opts->trim) {
>                 blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
> -               q->limits.discard_granularity = objset_bytes;
> +               q->limits.discard_granularity = rbd_dev->opts->alloc_size;
>                 blk_queue_max_discard_sectors(q, objset_bytes >> SECTOR_SHIFT);
>                 blk_queue_max_write_zeroes_sectors(q, objset_bytes >> SECTOR_SHIFT);
>         }
> --
> 2.19.2
>

Reviewed-by: Jason Dillaman <dillaman@redhat.com>

Patch

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index a32d5b54d59e..e037f1ab2fde 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -834,7 +834,7 @@ static int parse_rbd_opts_token(char *c, void *private)
 		pctx->opts->queue_depth = intval;
 		break;
 	case Opt_alloc_size:
-		if (intval < 1) {
+		if (intval < SECTOR_SIZE) {
 			pr_err("alloc_size out of range\n");
 			return -EINVAL;
 		}
@@ -4204,12 +4204,12 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	q->limits.max_sectors = queue_max_hw_sectors(q);
 	blk_queue_max_segments(q, USHRT_MAX);
 	blk_queue_max_segment_size(q, UINT_MAX);
-	blk_queue_io_min(q, objset_bytes);
-	blk_queue_io_opt(q, objset_bytes);
+	blk_queue_io_min(q, rbd_dev->opts->alloc_size);
+	blk_queue_io_opt(q, rbd_dev->opts->alloc_size);
 
 	if (rbd_dev->opts->trim) {
 		blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
-		q->limits.discard_granularity = objset_bytes;
+		q->limits.discard_granularity = rbd_dev->opts->alloc_size;
 		blk_queue_max_discard_sectors(q, objset_bytes >> SECTOR_SHIFT);
 		blk_queue_max_write_zeroes_sectors(q, objset_bytes >> SECTOR_SHIFT);
 	}
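
For completeness, the limits this patch changes are visible from user space through the usual sysfs queue attributes.  A small sketch follows; the "rbd0" device name is just an example and not something the patch introduces.

/*
 * Read the queue limits touched by this patch from
 * /sys/block/<dev>/queue/.  With the patch applied and the default
 * alloc_size, minimum_io_size, optimal_io_size and
 * discard_granularity should all report 65536 instead of the
 * object set size.
 */
#include <stdio.h>

static void show_limit(const char *dev, const char *attr)
{
	char path[128];
	char buf[64];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%-22s %s", attr, buf);	/* value already ends in '\n' */
	fclose(f);
}

int main(void)
{
	const char *dev = "rbd0";	/* example device name */

	show_limit(dev, "minimum_io_size");
	show_limit(dev, "optimal_io_size");
	show_limit(dev, "discard_granularity");
	return 0;
}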