Message ID | 20210709081005.421340-3-ming.lei@redhat.com (mailing list archive)
---|---
State | Changes Requested
Series | blk-mq: cleanup map queues & fix blk_mq_alloc_request_hctx
On Fri, Jul 09, 2021 at 04:09:57PM +0800, Ming Lei wrote:
> +/**
> + * blk_mq_dev_map_queues - provide generic queue mapping
> + * @qmap: CPU to hardware queue map.
> + * @dev_off: Offset to use for the device
> + * @get_queue_affinity: Callback to retrieve queue affinity
> + * @dev_data: Device data passed to get_queue_affinity()
> + * @fallback: If true, fallback to default blk-mq mapping in case of
> + *            any failure

The docs have a different order compared to the function definition
(dev_data).

> + *
> + * Generic function to setup each queue mapping in @qmap. It will query
> + * each queue's affinity via @get_queue_affinity and built queue mapping
> + * that maps a queue to the CPUs in the queue affinity.
> + *
> + * Driver has to set correct @dev_data, so that the driver callback
> + * of @get_queue_affinity can work correctly.
> + */
> +int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
> +		int dev_off, get_queue_affinty_fn *get_queue_affinity,
> +		bool fallback)
> +	/*
> +	 * fallback to default mapping if driver doesn't provide
> +	 * get_queue_affinity callback
> +	 */
> +	if (!get_queue_affinity) {
> +		fallback = true;
> +		goto fallback;
> +	}
> +
> +	for (queue = 0; queue < qmap->nr_queues; queue++) {
> +		mask = get_queue_affinity(dev_data, dev_off, queue);
> +		if (!mask)
> +			goto fallback;
> +
> +		for_each_cpu(cpu, mask)
> +			qmap->mq_map[cpu] = qmap->queue_offset + queue;
> +	}
> +
> +	return 0;
> +
> +fallback:
> +	if (!fallback) {
> +		WARN_ON_ONCE(qmap->nr_queues > 1);
> +		blk_mq_clear_mq_map(qmap);
> +		return 0;
> +	}
> +	return blk_mq_map_queues(qmap);

Please remove the NULL get_affinity case and let the callers handle the
fallback.  Also I think it makes sense to leave the !mask fallback case
to the callers as well to simplify the calling conventions.
diff --git a/block/blk-mq-map.c b/block/blk-mq-map.c
index 3db84d3197f1..e3ba2ef1e9e2 100644
--- a/block/blk-mq-map.c
+++ b/block/blk-mq-map.c
@@ -94,3 +94,56 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
 
 	return NUMA_NO_NODE;
 }
+
+/**
+ * blk_mq_dev_map_queues - provide generic queue mapping
+ * @qmap: CPU to hardware queue map.
+ * @dev_off: Offset to use for the device
+ * @get_queue_affinity: Callback to retrieve queue affinity
+ * @dev_data: Device data passed to get_queue_affinity()
+ * @fallback: If true, fallback to default blk-mq mapping in case of
+ *            any failure
+ *
+ * Generic function to setup each queue mapping in @qmap. It will query
+ * each queue's affinity via @get_queue_affinity and built queue mapping
+ * that maps a queue to the CPUs in the queue affinity.
+ *
+ * Driver has to set correct @dev_data, so that the driver callback
+ * of @get_queue_affinity can work correctly.
+ */
+int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
+		int dev_off, get_queue_affinty_fn *get_queue_affinity,
+		bool fallback)
+{
+	const struct cpumask *mask;
+	unsigned int queue, cpu;
+
+	/*
+	 * fallback to default mapping if driver doesn't provide
+	 * get_queue_affinity callback
+	 */
+	if (!get_queue_affinity) {
+		fallback = true;
+		goto fallback;
+	}
+
+	for (queue = 0; queue < qmap->nr_queues; queue++) {
+		mask = get_queue_affinity(dev_data, dev_off, queue);
+		if (!mask)
+			goto fallback;
+
+		for_each_cpu(cpu, mask)
+			qmap->mq_map[cpu] = qmap->queue_offset + queue;
+	}
+
+	return 0;
+
+fallback:
+	if (!fallback) {
+		WARN_ON_ONCE(qmap->nr_queues > 1);
+		blk_mq_clear_mq_map(qmap);
+		return 0;
+	}
+	return blk_mq_map_queues(qmap);
+}
+EXPORT_SYMBOL_GPL(blk_mq_dev_map_queues);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index fd2de2b422ed..b6090d691594 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -553,7 +553,12 @@ void blk_mq_freeze_queue_wait(struct request_queue *q);
 int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
 				     unsigned long timeout);
 
+typedef const struct cpumask * (get_queue_affinty_fn)(void *dev_data,
+		int dev_off, int queue_idx);
 int blk_mq_map_queues(struct blk_mq_queue_map *qmap);
+int blk_mq_dev_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
+		int dev_off, get_queue_affinty_fn *get_queue_affinity,
+		bool fallback);
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
 void blk_mq_quiesce_queue_nowait(struct request_queue *q);
Introduce blk_mq_dev_map_queues so that we can remove all kinds of
map_queues implementations (pci, virtio, rdma, ...) out of the block
layer.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-map.c     | 53 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |  5 ++++
 2 files changed, 58 insertions(+)