
[7/7] blk-mq: ensure mq_ops ->poll() is entered at least once

Message ID: 20181117214354.822-8-axboe@kernel.dk (mailing list archive)
State: New, archived
Series: [1/7] block: avoid ordered task state change for polled IO

Commit Message

Jens Axboe Nov. 17, 2018, 9:43 p.m. UTC
Right now we bail immediately if need_resched() is true, but
we need to run the loop at least once in case we have entries
waiting. So just invert the need_resched() check, putting it at
the bottom of the loop.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-mq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
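
The change is the classic while -> do-while inversion: moving the exit
check to the bottom of the loop guarantees the body runs at least once.
A minimal standalone sketch of the difference, assuming hypothetical
helpers poll_once() and should_stop() in place of the driver ->poll()
call and need_resched() (not the kernel code itself):

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-ins: poll_once() for the driver ->poll()
	 * call, should_stop() for need_resched(). */
	static int completions = 1;	/* pretend one completion is pending */

	static int poll_once(void)
	{
		return completions-- > 0;	/* >0: found work, 0: nothing */
	}

	static bool should_stop(void)
	{
		return true;	/* resched already requested on entry */
	}

	/* Old shape: top-checked loop can exit before polling even once,
	 * so a completion that is already pending is never reaped here. */
	static int poll_while(bool spin)
	{
		while (!should_stop()) {
			int ret = poll_once();

			if (ret > 0)
				return ret;
			if (ret < 0 || !spin)
				break;
		}
		return 0;
	}

	/* New shape: do-while guarantees at least one polling pass. */
	static int poll_do_while(bool spin)
	{
		do {
			int ret = poll_once();

			if (ret > 0)
				return ret;
			if (ret < 0 || !spin)
				break;
		} while (!should_stop());
		return 0;
	}

	int main(void)
	{
		printf("while: %d\n", poll_while(true));	/* 0: missed it */
		completions = 1;
		printf("do-while: %d\n", poll_do_while(true));	/* 1: reaped */
		return 0;
	}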

Comments

Christoph Hellwig Nov. 19, 2018, 8:05 a.m. UTC | #1
On Sat, Nov 17, 2018 at 02:43:54PM -0700, Jens Axboe wrote:
> Right now we bail immediately if need_resched() is true, but
> we need to run the loop at least once in case we have entries
> waiting. So just invert the need_resched() check, putting it at
> the bottom of the loop.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0a2847d9248b..4769c975b8c8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3336,7 +3336,7 @@ static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx, bool spin)
 	hctx->poll_considered++;
 
 	state = current->state;
-	while (!need_resched()) {
+	do {
 		int ret;
 
 		hctx->poll_invoked++;
@@ -3356,7 +3356,7 @@ static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx, bool spin)
 		if (ret < 0 || !spin)
 			break;
 		cpu_relax();
-	}
+	} while (!need_resched());
 
 	__set_current_state(TASK_RUNNING);
 	return 0;
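
For reference, the polling loop as it reads with the patch applied,
reconstructed from the two hunks above (the body between the hunks is
elided):

	state = current->state;
	do {
		int ret;

		hctx->poll_invoked++;
		...
		if (ret < 0 || !spin)
			break;
		cpu_relax();
	} while (!need_resched());

	__set_current_state(TASK_RUNNING);
	return 0;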