
[02/15] block: refactor disk_update_readahead

Message ID 20240122173645.1686078-3-hch@lst.de (mailing list archive)
State New, archived
Series [01/15] block: move max_{open,active}_zones to struct queue_limits

Commit Message

Christoph Hellwig Jan. 22, 2024, 5:36 p.m. UTC
Factor out a blk_apply_bdi_limits helper that can be used with
an explicit queue_limits argument, which will be useful later.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-settings.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)
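
Taking an explicit queue_limits argument, rather than digging the limits
out of a request_queue, means a caller can apply a set of limits that has
not yet been committed to the queue. A minimal sketch of such a caller
follows; the function names here are assumptions about where the series
is heading, not part of this patch:

/*
 * Hypothetical later caller (illustration only): validate a new set of
 * limits, commit it to the queue, and propagate it to the bdi in one
 * place via the new helper.
 */
int queue_limits_commit_update(struct request_queue *q,
		struct queue_limits *lim)
{
	int error;

	error = blk_validate_limits(lim);	/* assumed validation step */
	if (error)
		return error;

	q->limits = *lim;
	if (q->disk)
		blk_apply_bdi_limits(q->disk->bdi, lim);
	return 0;
}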

Comments

Damien Le Moal Jan. 23, 2024, 4:41 a.m. UTC | #1
On 1/23/24 02:36, Christoph Hellwig wrote:
> Factor out a blk_apply_bdi_limits helper that can be used with
> an explicit queue_limits argument, which will be useful later.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-settings.c | 21 ++++++++++++---------
>  1 file changed, 12 insertions(+), 9 deletions(-)
> 
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 06ea91e51b8b2e..e872b0e168525e 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -85,6 +85,17 @@ void blk_set_stacking_limits(struct queue_limits *lim)
>  }
>  EXPORT_SYMBOL(blk_set_stacking_limits);
>  
> +static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
> +		struct queue_limits *lim)
> +{
> +	/*
> +	 * For read-ahead of large files to be effective, we need to read ahead
> +	 * at least twice the optimal I/O size.
> +	 */
> +	bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);

Nit: while at it, you could replace that division by PAGE_SIZE with a right
shift by PAGE_SHIFT.

Other than that, looks good to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

> +	bdi->io_pages = lim->max_sectors >> (PAGE_SHIFT - 9);
> +}
> +
>  /**
>   * blk_queue_bounce_limit - set bounce buffer limit for queue
>   * @q: the request queue for the device
> @@ -393,15 +404,7 @@ EXPORT_SYMBOL(blk_queue_alignment_offset);
>  
>  void disk_update_readahead(struct gendisk *disk)
>  {
> -	struct request_queue *q = disk->queue;
> -
> -	/*
> -	 * For read-ahead of large files to be effective, we need to read ahead
> -	 * at least twice the optimal I/O size.
> -	 */
> -	disk->bdi->ra_pages =
> -		max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
> -	disk->bdi->io_pages = queue_max_sectors(q) >> (PAGE_SHIFT - 9);
> +	blk_apply_bdi_limits(disk->bdi, &disk->queue->limits);
>  }
>  EXPORT_SYMBOL_GPL(disk_update_readahead);
>
Christoph Hellwig Jan. 23, 2024, 8:40 a.m. UTC | #2
On Tue, Jan 23, 2024 at 01:41:05PM +0900, Damien Le Moal wrote:
> > +{
> > +	/*
> > +	 * For read-ahead of large files to be effective, we need to read ahead
> > +	 * at least twice the optimal I/O size.
> > +	 */
> > +	bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
> 
> Nit: while at it, you could replace that division by PAGE_SIZE with a right
> shift by PAGE_SHIFT.

I don't really see the point: this is anything but a fast path, and
compilers are perfectly capable of optimizing a simple division by a
constant.
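
For what it's worth, the two spellings generate identical code for an
unsigned operand: division by the power-of-two constant PAGE_SIZE is
compiled down to exactly the right shift suggested above. A standalone
sketch (the constants stand in for a 4 KiB-page configuration; this is
an illustration, not kernel code):

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* gcc and clang emit a single right shift for both at -O1 and above */
unsigned long pages_by_div(unsigned long bytes)
{
	return bytes / PAGE_SIZE;	/* optimized to bytes >> 12 */
}

unsigned long pages_by_shift(unsigned long bytes)
{
	return bytes >> PAGE_SHIFT;
}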
Hannes Reinecke Jan. 24, 2024, 6:01 a.m. UTC | #3
On 1/22/24 18:36, Christoph Hellwig wrote:
> Factor out a blk_apply_bdi_limits helper that can be used with
> an explicit queue_limits argument, which will be useful later.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-settings.c | 21 ++++++++++++---------
>   1 file changed, 12 insertions(+), 9 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes

Patch

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 06ea91e51b8b2e..e872b0e168525e 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -85,6 +85,17 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
+static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
+		struct queue_limits *lim)
+{
+	/*
+	 * For read-ahead of large files to be effective, we need to read ahead
+	 * at least twice the optimal I/O size.
+	 */
+	bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
+	bdi->io_pages = lim->max_sectors >> (PAGE_SHIFT - 9);
+}
+
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
  * @q: the request queue for the device
@@ -393,15 +404,7 @@ EXPORT_SYMBOL(blk_queue_alignment_offset);
 
 void disk_update_readahead(struct gendisk *disk)
 {
-	struct request_queue *q = disk->queue;
-
-	/*
-	 * For read-ahead of large files to be effective, we need to read ahead
-	 * at least twice the optimal I/O size.
-	 */
-	disk->bdi->ra_pages =
-		max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
-	disk->bdi->io_pages = queue_max_sectors(q) >> (PAGE_SHIFT - 9);
+	blk_apply_bdi_limits(disk->bdi, &disk->queue->limits);
 }
 EXPORT_SYMBOL_GPL(disk_update_readahead);
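
To make the two formulas concrete, assume 4 KiB pages (PAGE_SHIFT = 12,
so VM_READAHEAD_PAGES = 128 KiB / 4 KiB = 32 pages) and a device that
reports io_opt = 1 MiB and max_sectors = 2560; the device numbers are
made up for illustration:

	ra_pages = max(io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES)
	         = max(2 * 1048576 / 4096, 32)
	         = max(512, 32) = 512 pages	/* 2 MiB of read-ahead */

	io_pages = max_sectors >> (PAGE_SHIFT - 9)
	         = 2560 >> 3 = 320 pages	/* 1280 KiB; the shift converts
						   512-byte sectors to pages */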