Message ID | 20240128165813.3213508-5-hch@lst.de
---|---
State | New, archived
Series | [01/14] block: move max_{open,active}_zones to struct queue_limits
On 28/01/2024 16:58, Christoph Hellwig wrote:
> Convert queue_max_sectors_store to use queue_limits_commit_update to
> check and update the max_sectors limit and freeze the queue before
> doing so to ensure we don't have requests in flight while changing
> the limits.
>
> Note that this removes the previously held queue_lock that doesn't
> protect against any other reader or writer.

I don't really get why we specifically locked that code segment in
queue_max_sectors_store() previously. Was it to ensure that max_sectors
and q->disk->bdi->io_pages are always updated atomically?

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Feel free to add:

Reviewed-by: John Garry <john.g.garry@oracle.com>
On Tue, Jan 30, 2024 at 12:14:32PM +0000, John Garry wrote:
> On 28/01/2024 16:58, Christoph Hellwig wrote:
>> Convert queue_max_sectors_store to use queue_limits_commit_update to
>> check and update the max_sectors limit and freeze the queue before
>> doing so to ensure we don't have requests in flight while changing
>> the limits.
>
> I don't really get why we specifically locked that code segment in
> queue_max_sectors_store() previously. Was it to ensure that max_sectors
> and q->disk->bdi->io_pages are always updated atomically?

It's been there basically forever. Back in the day before blk-mq and
lock splitting it might actually have protected something.
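For reference, the update pattern the series converts callers to looks
roughly like the sketch below. The wrapper function is hypothetical;
queue_limits_start_update(), queue_limits_commit_update(), and the
freeze/unfreeze bracketing are the interfaces this series adds, and
serialization of concurrent updaters now happens inside that API rather
than via queue_lock.

/*
 * Minimal sketch of the queue_limits update pattern this series moves
 * callers to: take a snapshot of the limits, modify the copy, then
 * validate and publish it atomically.  The wrapper function here is
 * hypothetical; the calls it makes are the API added by this series.
 */
static int example_set_user_max_sectors(struct request_queue *q,
					unsigned int new_sectors)
{
	struct queue_limits lim;
	int err;

	/* no requests may be in flight while the limits change */
	blk_mq_freeze_queue(q);
	lim = queue_limits_start_update(q);
	lim.max_user_sectors = new_sectors;
	/* re-validates the whole set of limits before publishing it */
	err = queue_limits_commit_update(q, &lim);
	blk_mq_unfreeze_queue(q);
	return err;
}

Because the snapshot is validated and published as a whole, readers never
observe a half-updated set of limits, which is what the atomicity question
above was getting at.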
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 6b2429cad81af1..26607f9825cb05 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -226,35 +226,22 @@ static ssize_t queue_zone_append_max_show(struct request_queue *q, char *page)
 static ssize_t
 queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
 {
-	unsigned long var;
-	unsigned int max_sectors_kb,
-		max_hw_sectors_kb = queue_max_hw_sectors(q) >> 1,
-			page_kb = 1 << (PAGE_SHIFT - 10);
-	ssize_t ret = queue_var_store(&var, page, count);
+	unsigned long max_sectors_kb;
+	struct queue_limits lim;
+	ssize_t ret;
+	int err;
 
+	ret = queue_var_store(&max_sectors_kb, page, count);
 	if (ret < 0)
 		return ret;
 
-	max_sectors_kb = (unsigned int)var;
-	max_hw_sectors_kb = min_not_zero(max_hw_sectors_kb,
-					 q->limits.max_dev_sectors >> 1);
-	if (max_sectors_kb == 0) {
-		q->limits.max_user_sectors = 0;
-		max_sectors_kb = min(max_hw_sectors_kb,
-				     BLK_DEF_MAX_SECTORS_CAP >> 1);
-	} else {
-		if (max_sectors_kb > max_hw_sectors_kb ||
-		    max_sectors_kb < page_kb)
-			return -EINVAL;
-		q->limits.max_user_sectors = max_sectors_kb << 1;
-	}
-
-	spin_lock_irq(&q->queue_lock);
-	q->limits.max_sectors = max_sectors_kb << 1;
-	if (q->disk)
-		q->disk->bdi->io_pages = max_sectors_kb >> (PAGE_SHIFT - 10);
-	spin_unlock_irq(&q->queue_lock);
-
+	blk_mq_freeze_queue(q);
+	lim = queue_limits_start_update(q);
+	lim.max_user_sectors = max_sectors_kb << 1;
+	err = queue_limits_commit_update(q, &lim);
+	blk_mq_unfreeze_queue(q);
+	if (err)
+		return err;
 	return ret;
 }
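One detail worth noting about the removed lines: the q->disk->bdi->io_pages
update that the old code performed under queue_lock is not dropped, it moves
into the common commit path added earlier in the series. A partial sketch of
that idea, assuming the blk_apply_bdi_limits() helper from earlier in the
series (the exact upstream body may differ):

/*
 * Partial sketch, assuming the blk_apply_bdi_limits() helper added
 * earlier in this series: the bdi->io_pages update that
 * queue_max_sectors_store() used to do by hand now happens whenever
 * new limits are committed, keeping it coherent with max_sectors.
 */
static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
				 struct queue_limits *lim)
{
	/* keep the I/O sizing hint in sync with the committed max_sectors */
	bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;
}

Centralizing this means every path that commits new limits, not just the
sysfs store, keeps io_pages consistent, rather than relying on each caller
to remember the side update.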