Message ID | 20200611064452.12353-3-hch@lst.de (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | [01/12] blk-mq: merge blk-softirq.c into blk-mq.c | expand |
On Thu, Jun 11, 2020 at 08:44:42AM +0200, Christoph Hellwig wrote:
> /*
>  * Setup and invoke a run of 'trigger_softirq' on the given cpu.
>  */
> @@ -681,19 +689,8 @@ static void __blk_complete_request(struct request *req)
>  	 * avoids IPI sending from current CPU to the first CPU of a group.
>  	 */
>  	if (ccpu == cpu || shared) {
> -		struct list_head *list;
>  do_local:
> -		list = this_cpu_ptr(&blk_cpu_done);
> -		list_add_tail(&req->ipi_list, list);
> -
> -		/*
> -		 * if the list only contains our just added request,
> -		 * signal a raise of the softirq. If there are already
> -		 * entries there, someone already raised the irq but it
> -		 * hasn't run yet.
> -		 */
> -		if (list->next == &req->ipi_list)
> -			raise_softirq_irqoff(BLOCK_SOFTIRQ);
> +		blk_mq_trigger_softirq(req);
>  	} else if (raise_blk_irq(ccpu, req))
>  		goto do_local;

Couldn't this be folded into the previous condition, e.g.

	if (ccpu == cpu || shared || raise_blk_irq(ccpu, req))

?
On Thu, Jun 18, 2020 at 04:34:04PM +0200, Daniel Wagner wrote:
> Couldn't this be folded into the previous condition, e.g.
>
> 	if (ccpu == cpu || shared || raise_blk_irq(ccpu, req))

To answer my own question: patch #3 does address this :)
On Thu, Jun 11, 2020 at 08:44:42AM +0200, Christoph Hellwig wrote:
> Add a helper to deduplicate the logic that raises the block softirq.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Daniel Wagner <dwagner@suse.de>
diff --git a/block/blk-mq.c b/block/blk-mq.c
index bbdc6c97bd42db..aa1917949f0ded 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -598,19 +598,27 @@ static __latent_entropy void blk_done_softirq(struct softirq_action *h)
 	}
 }
 
-#ifdef CONFIG_SMP
-static void trigger_softirq(void *data)
+static void blk_mq_trigger_softirq(struct request *rq)
 {
-	struct request *rq = data;
-	struct list_head *list;
+	struct list_head *list = this_cpu_ptr(&blk_cpu_done);
 
-	list = this_cpu_ptr(&blk_cpu_done);
 	list_add_tail(&rq->ipi_list, list);
 
+	/*
+	 * If the list only contains our just added request, signal a raise of
+	 * the softirq. If there are already entries there, someone already
+	 * raised the irq but it hasn't run yet.
+	 */
 	if (list->next == &rq->ipi_list)
 		raise_softirq_irqoff(BLOCK_SOFTIRQ);
 }
 
+#ifdef CONFIG_SMP
+static void trigger_softirq(void *data)
+{
+	blk_mq_trigger_softirq(data);
+}
+
 /*
  * Setup and invoke a run of 'trigger_softirq' on the given cpu.
  */
@@ -681,19 +689,8 @@ static void __blk_complete_request(struct request *req)
 	 * avoids IPI sending from current CPU to the first CPU of a group.
 	 */
 	if (ccpu == cpu || shared) {
-		struct list_head *list;
 do_local:
-		list = this_cpu_ptr(&blk_cpu_done);
-		list_add_tail(&req->ipi_list, list);
-
-		/*
-		 * if the list only contains our just added request,
-		 * signal a raise of the softirq. If there are already
-		 * entries there, someone already raised the irq but it
-		 * hasn't run yet.
-		 */
-		if (list->next == &req->ipi_list)
-			raise_softirq_irqoff(BLOCK_SOFTIRQ);
+		blk_mq_trigger_softirq(req);
 	} else if (raise_blk_irq(ccpu, req))
 		goto do_local;
Add a helper to deduplicate the logic that raises the block softirq.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)