
[2/2] io_uring: fix the judging condition in io_sequence_defer

Message ID 20190713035826.2987-2-liuzhengyuan@kylinos.cn (mailing list archive)
State New, archived
Series [1/2] io_uring: make req from defer and link list not touch async list

Commit Message

Zhengyuan Liu July 13, 2019, 3:58 a.m. UTC
sq->cached_sq_head and cq->cached_cq_tail are both unsigned int. If
cached_sq_head overflows before cached_cq_tail, we may miss a barrier
req. As cached_cq_tail always follows cached_sq_head, a not-equal (NQ)
comparison should be enough.

Signed-off-by: Zhengyuan Liu <liuzhengyuan@kylinos.cn>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
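
A minimal userspace sketch of the wraparound scenario described above; the
concrete values and variable names are chosen for illustration only and are
not taken from the kernel code:

#include <stdio.h>

int main(void)
{
	/*
	 * Pretend the ring counters started near UINT_MAX: cached_sq_head
	 * has already wrapped past 0, cached_cq_tail has not yet caught up.
	 */
	unsigned int req_sequence   = 2;           /* captured from a wrapped cached_sq_head */
	unsigned int cached_cq_tail = 0xfffffffau; /* completions still pending */
	unsigned int dropped        = 0;

	/*
	 * Old check: 2 > 0xfffffffa is false, so the DRAIN request is not
	 * deferred and the barrier is missed.
	 */
	printf("'>'  defers: %d\n", req_sequence > cached_cq_tail + dropped);

	/*
	 * New check: 2 != 0xfffffffa is true, so the request keeps being
	 * deferred until the tail really catches up (modulo 2^32).
	 */
	printf("'!=' defers: %d\n", req_sequence != cached_cq_tail + dropped);

	return 0;
}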

Comments

Jens Axboe July 13, 2019, 9:43 p.m. UTC | #1
On 7/12/19 9:58 PM, Zhengyuan Liu wrote:
> sq->cached_sq_head and cq->cached_cq_tail are both unsigned int. If
> cached_sq_head overflows before cached_cq_tail, we may miss a barrier
> req. As cached_cq_tail always follows cached_sq_head, a not-equal (NQ)
> comparison should be enough.

This (and the previous patch) looks fine to me, just wondering if
you had a test case showing this issue?
Jens Axboe July 16, 2019, 2:10 p.m. UTC | #2
On 7/14/19 8:34 PM, Zhengyuan Liu wrote:
> 
> On 7/14/19 5:43 AM, Jens Axboe wrote:
>> On 7/12/19 9:58 PM, Zhengyuan Liu wrote:
>>> sq->cached_sq_head and cq->cached_cq_tail are both unsigned int. If
>>> cached_sq_head overflows before cached_cq_tail, we may miss a barrier
>>> req. As cached_cq_tail always follows cached_sq_head, a not-equal (NQ)
>>> comparison should be enough.
>> This (and the previous patch) looks fine to me, just wondering if
>> you had a test case showing this issue?
>>
> Hi.
> 
> Actually I don't have a real test case, but I emulated one and used it
> to show the issue.
> 
> Let's preset those counters close to the overflow point at the
> beginning:
> 
>         @@ -2979,6 +2987,10 @@ static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
>             cq_ring->ring_entries = p->cq_entries;
>             ctx->cq_mask = cq_ring->ring_mask;
>             ctx->cq_entries = cq_ring->ring_entries;
> +
> +          ctx->cached_sq_head = ctx->sq_ring->r.head = ctx->sq_ring->r.tail = 0xfffffff8;
> +          ctx->cached_cq_tail = ctx->cq_ring->r.head = ctx->cq_ring->r.tail = 0xfffffff8;
> +
>             return 0;
>      }
> 
> And queue several sqes, followed by an io_uring_enter, like this:
> 
>       write1 write2 ... write8  barrier write9 write10
> 
> Then we can miss the barrier almost every time.

No problem, just curious if you had a test case, since then I'd add
it to the arsenal.

I've applied this one, thank you.
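
For reference, a sketch of the emulated reproducer described in the reply
above, written against liburing; the file name, buffer setup, and ring size
are assumptions, and it only demonstrates the problem on a kernel carrying
the debugging hunk that presets the ring counters to 0xfffffff8:

/* repro sketch: write1..write8, a drained barrier write, then write9 and write10 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	struct io_uring_sqe *sqe;
	char buf[4096];
	int fd, i;

	memset(buf, 0xaa, sizeof(buf));
	fd = open("testfile", O_WRONLY | O_CREAT, 0644);
	io_uring_queue_init(16, &ring, 0);

	/* write1 .. write8 */
	for (i = 0; i < 8; i++) {
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_write(sqe, fd, buf, sizeof(buf), i * sizeof(buf));
	}

	/* barrier: must not be issued before the eight writes above complete */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, sizeof(buf), 8 * sizeof(buf));
	sqe->flags |= IOSQE_IO_DRAIN;

	/* write9 and write10 */
	for (i = 9; i <= 10; i++) {
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_write(sqe, fd, buf, sizeof(buf), i * sizeof(buf));
	}

	io_uring_submit(&ring);

	/* reap all eleven completions */
	for (i = 0; i < 11; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}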

Patch

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 3e48fd7cd08f..55294ef82102 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -429,7 +429,7 @@  static inline bool io_sequence_defer(struct io_ring_ctx *ctx,
 	if ((req->flags & (REQ_F_IO_DRAIN|REQ_F_IO_DRAINED)) != REQ_F_IO_DRAIN)
 		return false;
 
-	return req->sequence > ctx->cached_cq_tail + ctx->sq_ring->dropped;
+	return req->sequence != ctx->cached_cq_tail + ctx->sq_ring->dropped;
 }
 
 static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)