[2/3] blk-mq: Always complete remote completion requests in softirq

Message ID 20210123201027.3262800-3-bigeasy@linutronix.de (mailing list archive)
State New, archived
Series blk-mq: Don't complete in IRQ, use llist_head

Commit Message

Sebastian Andrzej Siewior Jan. 23, 2021, 8:10 p.m. UTC
Controllers with multiple queues have their IRQ handlers pinned to a
CPU. The core shouldn't need to complete the request on a remote CPU.

Remove this case and always raise the softirq to complete the request.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 block/blk-mq.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

Comments

Hannes Reinecke Jan. 25, 2021, 7:10 a.m. UTC | #1
On 1/23/21 9:10 PM, Sebastian Andrzej Siewior wrote:
> Controllers with multiple queues have their IRQ handlers pinned to a
> CPU. The core shouldn't need to complete the request on a remote CPU.
> 
> Remove this case and always raise the softirq to complete the request.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>   block/blk-mq.c | 14 +-------------
>   1 file changed, 1 insertion(+), 13 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index f285a9123a8b0..90348ae518461 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -628,19 +628,7 @@ static void __blk_mq_complete_request_remote(void *data)
>   {
>   	struct request *rq = data;
>   
> -	/*
> -	 * For most of single queue controllers, there is only one irq vector
> -	 * for handling I/O completion, and the only irq's affinity is set
> -	 * to all possible CPUs.  On most of ARCHs, this affinity means the irq
> -	 * is handled on one specific CPU.
> -	 *
> -	 * So complete I/O requests in softirq context in case of single queue
> -	 * devices to avoid degrading I/O performance due to irqsoff latency.
> -	 */
> -	if (rq->q->nr_hw_queues == 1)
> -		blk_mq_trigger_softirq(rq);
> -	else
> -		rq->q->mq_ops->complete(rq);
> +	blk_mq_trigger_softirq(rq);
>   }
>   
>   static inline bool blk_mq_complete_need_ipi(struct request *rq)
> 
I don't get this.
This code is about _avoiding_ having to raise a softirq if the driver 
exports more than one hardware queue.
So where exactly does the remote CPU case come in here?

Cheers,

Hannes
Christoph Hellwig Jan. 25, 2021, 8:22 a.m. UTC | #2
On Sat, Jan 23, 2021 at 09:10:26PM +0100, Sebastian Andrzej Siewior wrote:
> Controllers with multiple queues have their IRQ handlers pinned to a
> CPU. The core shouldn't need to complete the request on a remote CPU.
> 
> Remove this case and always raise the softirq to complete the request.

What about changing blk_mq_trigger_softirq to take a void * argument
and thus removing __blk_mq_complete_request_remote entirely?
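
Roughly, the suggestion would amount to something like this (a
hypothetical sketch, not posted code, based on the
blk_mq_trigger_softirq() in the tree this series applies to):

static void blk_mq_trigger_softirq(void *data)
{
	/* void * signature matches the smp_call_function callback type */
	struct request *rq = data;
	struct list_head *list;
	unsigned long flags;

	local_irq_save(flags);
	list = this_cpu_ptr(&blk_cpu_done);
	list_add_tail(&rq->ipi_list, list);

	/* raise the softirq only if we are the first entry on the list */
	if (list->next == &rq->ipi_list)
		raise_softirq_irqoff(BLOCK_SOFTIRQ);
	local_irq_restore(flags);
}

The IPI setup could then pass the function directly, e.g.
INIT_CSD(&rq->csd, blk_mq_trigger_softirq, rq), and
__blk_mq_complete_request_remote() would no longer be needed.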
Christoph Hellwig Jan. 25, 2021, 8:25 a.m. UTC | #3
On Mon, Jan 25, 2021 at 08:10:16AM +0100, Hannes Reinecke wrote:
> I don't get this.
> This code is about _avoiding_ having to raise a softirq if the driver
> exports more than one hardware queue.
> So where exactly does the remote CPU case come in here?

__blk_mq_complete_request_remote is only called for the case where we
do not complete locally.  The case that "degrades" here is where
the device supports multiple queues, but fewer than the number of CPUs,
and we bounce the completion to another CPU.
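
For reference, the caller in question looks roughly like this in the
tree this series applies to (a condensed sketch, not a verbatim copy):

bool blk_mq_complete_request_remote(struct request *rq)
{
	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);

	/* polled requests are completed directly by the polling caller */
	if (rq->cmd_flags & REQ_HIPRI)
		return false;

	if (blk_mq_complete_need_ipi(rq)) {
		/*
		 * Completion CPU is in a different cache domain than the
		 * submission CPU: bounce the completion over via IPI.
		 * __blk_mq_complete_request_remote() then runs in hardirq
		 * context on rq->mq_ctx->cpu.
		 */
		INIT_CSD(&rq->csd, __blk_mq_complete_request_remote, rq);
		smp_call_function_single_async(rq->mq_ctx->cpu, &rq->csd);
	} else {
		/* local completion: only single queue devices take the softirq */
		if (rq->q->nr_hw_queues > 1)
			return false;
		blk_mq_trigger_softirq(rq);
	}

	return true;
}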
Sebastian Andrzej Siewior Jan. 25, 2021, 8:30 a.m. UTC | #4
On 2021-01-25 08:25:42 [+0000], Christoph Hellwig wrote:
> On Mon, Jan 25, 2021 at 08:10:16AM +0100, Hannes Reinecke wrote:
> > I don't get this.
> > This code is about _avoiding_ having to raise a softirq if the driver
> > exports more than one hardware queue.
> > So where exactly does the remote CPU case come in here?
> 
> __blk_mq_complete_request_remote is only called for the case where we
> do not complete locally.  The case that "degrades" here is where
> the device supports multiple queues, but fewer than the number of CPUs,
> and we bounce the completion to another CPU.

Does it really "degrade", or does it just use the softirq more often? The
usual case is to run the softirqs in irq_exit(), which is right after the IPI.

Sebastian
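
In other words, when the IPI handler raises BLOCK_SOFTIRQ, the softirq
normally runs on the same CPU on the way out of that very interrupt (a
condensed sketch of irq_exit() from kernel/softirq.c, not verbatim):

void irq_exit(void)
{
	preempt_count_sub(HARDIRQ_OFFSET);
	/*
	 * Softirqs raised from hardirq context - such as BLOCK_SOFTIRQ
	 * raised by the completion IPI - are executed here, before the
	 * interrupted context resumes.
	 */
	if (!in_interrupt() && local_softirq_pending())
		invoke_softirq();	/* -> __do_softirq() -> blk_done_softirq() */
}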
Christoph Hellwig Jan. 25, 2021, 8:32 a.m. UTC | #5
On Mon, Jan 25, 2021 at 09:30:29AM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-01-25 08:25:42 [+0000], Christoph Hellwig wrote:
> > On Mon, Jan 25, 2021 at 08:10:16AM +0100, Hannes Reinecke wrote:
> > > I don't get this.
> > > This code is about _avoiding_ having to raise a softirq if the driver
> > > exports more than one hardware queue.
> > > So where exactly does the remote CPU case come in here?
> > 
> > __blk_mq_complete_request_remote is only called for the case where we
> > do not complete locally.  The case that "degrades" here is where
> > the device supports multiple queues, but fewer than the number of CPUs,
> > and we bounce the completion to another CPU.
> 
> Does it really "degrade", or does it just use the softirq more often? The
> usual case is to run the softirqs in irq_exit(), which is right after the IPI.

Well, I put it in quotes because I'm not sure what the exact effect
is.  But we do delay these completions to the softirq now instead of
hardirq context, which at least in theory increases latency.  OTOH it
might even have positive effects on the rest of the system.
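
For reference, "the softirq" here is BLOCK_SOFTIRQ, whose handler
drains the per-CPU completion list and invokes the driver's
->complete() callback (a condensed sketch of blk_done_softirq() as it
stands before patch 3/3 of this series converts the list to an
llist_head):

static void blk_done_softirq(struct softirq_action *h)
{
	struct list_head *cpu_list, local_list;

	/* detach the per-CPU list with interrupts disabled */
	local_irq_disable();
	cpu_list = this_cpu_ptr(&blk_cpu_done);
	list_replace_init(cpu_list, &local_list);
	local_irq_enable();

	while (!list_empty(&local_list)) {
		struct request *rq;

		rq = list_entry(local_list.next, struct request, ipi_list);
		list_del_init(&rq->ipi_list);
		/* the driver completion now runs in softirq context */
		rq->q->mq_ops->complete(rq);
	}
}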
Christoph Hellwig Jan. 25, 2021, 8:49 a.m. UTC | #6
On Mon, Jan 25, 2021 at 08:23:03AM +0000, Christoph Hellwig wrote:
> On Sat, Jan 23, 2021 at 09:10:26PM +0100, Sebastian Andrzej Siewior wrote:
> > Controllers with multiple queues have their IRQ handlers pinned to a
> > CPU. The core shouldn't need to complete the request on a remote CPU.
> > 
> > Remove this case and always raise the softirq to complete the request.
> 
> What about changing blk_mq_trigger_softirq to take a void * argument
> and thus removing __blk_mq_complete_request_remote entirely?

I'll take this back - that change is in the way of what you do in patch
3.  So this looks good as-is:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Sebastian Andrzej Siewior Jan. 25, 2021, 9:29 a.m. UTC | #7
On 2021-01-25 08:32:48 [+0000], Christoph Hellwig wrote:
> Well, I put it in quotes because I'm not sure what the exact effect
> is.  But we do delay these completions to the softirq now instead of
> hardirq context, which at least in theory increases latency.  OTOH it
> might even have positive effects on the rest of the system.

The last part is/was my motivation ;)

Sebastian
Daniel Wagner Jan. 27, 2021, 11:22 a.m. UTC | #8
On Sat, Jan 23, 2021 at 09:10:26PM +0100, Sebastian Andrzej Siewior wrote:
> Controllers with multiple queues have their IRQ handlers pinned to a
> CPU. The core shouldn't need to complete the request on a remote CPU.
> 
> Remove this case and always raise the softirq to complete the request.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Reviewed-by: Daniel Wagner <dwagner@suse.de>

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f285a9123a8b0..90348ae518461 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -628,19 +628,7 @@ static void __blk_mq_complete_request_remote(void *data)
 {
 	struct request *rq = data;
 
-	/*
-	 * For most of single queue controllers, there is only one irq vector
-	 * for handling I/O completion, and the only irq's affinity is set
-	 * to all possible CPUs.  On most of ARCHs, this affinity means the irq
-	 * is handled on one specific CPU.
-	 *
-	 * So complete I/O requests in softirq context in case of single queue
-	 * devices to avoid degrading I/O performance due to irqsoff latency.
-	 */
-	if (rq->q->nr_hw_queues == 1)
-		blk_mq_trigger_softirq(rq);
-	else
-		rq->q->mq_ops->complete(rq);
+	blk_mq_trigger_softirq(rq);
 }
 
 static inline bool blk_mq_complete_need_ipi(struct request *rq)