
[V4,4/8] io_uring: support SQE group

Message ID 20240706031000.310430-5-ming.lei@redhat.com (mailing list archive)
State New, archived
Series io_uring: support sqe group and provide group kbuf

Commit Message

Ming Lei July 6, 2024, 3:09 a.m. UTC
SQE group is defined as a chain of SQEs starting with the first SQE that
has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
doesn't have it set; it is similar to a chain of linked SQEs.

Unlike linked SQEs, where each SQE is issued only after the previous one
completes, all SQEs in one group are submitted in parallel, so there is no
dependency among SQEs in one group.

The 1st SQE is the group leader, and the other SQEs are group members. The
whole group shares a single IOSQE_IO_LINK and IOSQE_IO_DRAIN taken from the
group leader; the two flags are ignored for group members.

When the group is in a link chain, the group isn't submitted until the
previous SQE or group completes, and the following SQE or group can't
start until this group completes. Failure of any group member fails the
group leader, so the link chain can be terminated.

When IOSQE_IO_DRAIN is set on the group leader, all requests in this group
and all previously submitted requests are drained. Given that IOSQE_IO_DRAIN
can be set for the group leader only, we respect IO_DRAIN by always
completing the group leader last in the group.

Working together with IOSQE_IO_LINK, SQE group provides a flexible way to
support N:M dependencies, such as:

- group A is chained with group B together
- group A has N SQEs
- group B has M SQEs

then M SQEs in group B depend on N SQEs in group A.

N:M dependencies can support some interesting use cases in an efficient way:

1) read from multiple files, then write the read data into a single file

2) read from a single file, then write the read data into multiple files

3) write the same data into multiple files, then read it back from those
files and compare to verify that the correct data was written

Also, IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we can still
extend sqe->flags with a uring context flag, such as using __pad3 for
non-uring_cmd OPs and part of uring_cmd_flags for the uring_cmd OP.

One simple sqe group based copy example[1] shows that:
1) buffered copy:
- perf is improved by 5%

2) direct IO mode
- perf is improved by 27%

3) sqe group copy, which keeps QD unchanged and just re-organizes IOs in
the following ways:

- each group has 4 READ IOs, linked to one single WRITE IO that writes
  the data read by the sqe group to the destination file

- the first 12 groups have (4 + 1) IOs, and the last group has (3 + 1)
  IOs

- test code:
	https://github.com/ming1/liburing/commits/sqe_group_v2/

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/io_uring_types.h |  18 ++
 include/uapi/linux/io_uring.h  |   4 +
 io_uring/io_uring.c            | 304 ++++++++++++++++++++++++++++++---
 io_uring/io_uring.h            |  16 ++
 io_uring/timeout.c             |   2 +
 5 files changed, 324 insertions(+), 20 deletions(-)

Comments

Pavel Begunkov July 22, 2024, 3:33 p.m. UTC | #1
On 7/6/24 04:09, Ming Lei wrote:
> SQE group is defined as one chain of SQEs starting with the first SQE that
> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> doesn't have it set, and it is similar with chain of linked SQEs.
> 
> Not like linked SQEs, each sqe is issued after the previous one is completed.
> All SQEs in one group are submitted in parallel, so there isn't any dependency
> among SQEs in one group.
> 
> The 1st SQE is group leader, and the other SQEs are group member. The whole
> group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
> the two flags are ignored for group members.
> 
> When the group is in one link chain, this group isn't submitted until the
> previous SQE or group is completed. And the following SQE or group can't
> be started if this group isn't completed. Failure from any group member will
> fail the group leader, then the link chain can be terminated.
> 
> When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
> previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
> group leader only, we respect IO_DRAIN by always completing group leader as
> the last one in the group.
> 
> Working together with IOSQE_IO_LINK, SQE group provides flexible way to
> support N:M dependency, such as:
> 
> - group A is chained with group B together
> - group A has N SQEs
> - group B has M SQEs
> 
> then M SQEs in group B depend on N SQEs in group A.
> 
> N:M dependency can support some interesting use cases in efficient way:
> 
> 1) read from multiple files, then write the read data into single file
> 
> 2) read from single file, and write the read data into multiple files
> 
> 3) write same data into multiple files, and read data from multiple files and
> compare if correct data is written
> 
> Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
> extend sqe->flags with one uring context flag, such as use __pad3 for
> non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
> 
> One simple sqe group based copy example[1] shows that:

Sorry to say, but it's a flawed, misleading example. We just can't
coalesce 4 writes into 1 and call that a win for this feature. It'd
be a different thing if you compared a perfectly optimised version
without groups vs. with groups. Note, I'm not asking you to do that;
the use case here is zerocopy ublk.


> 1) buffered copy:
> - perf is improved by 5%
> 
> 2) direct IO mode
> - perf is improved by 27%
> 
> 3) sqe group copy, which keeps QD not changed, just re-organize IOs in the
> following ways:
> 
> - each group have 4 READ IOs, linked by one single write IO for writing
>    the read data in sqe group to destination file
> 
> - the 1st 12 groups have (4 + 1) IOs, and the last group have (3 + 1)
>    IOs
> 
> - test code:
> 	https://github.com/ming1/liburing/commits/sqe_group_v2/
> 
> Suggested-by: Kevin Wolf <kwolf@redhat.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 7597344a6440..b5415f0774e5 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
...
>   
> @@ -421,6 +422,10 @@ static inline void io_req_track_inflight(struct io_kiocb *req)
>   	if (!(req->flags & REQ_F_INFLIGHT)) {
>   		req->flags |= REQ_F_INFLIGHT;
>   		atomic_inc(&req->task->io_uring->inflight_tracked);
> +
> +		/* make members' REQ_F_INFLIGHT discoverable via leader's */
> +		if (req_is_group_member(req))
> +			io_req_track_inflight(req->grp_leader);

Requests in a group can run in parallel with the leader (i.e.
io_issue_sqe()), right? In which case it'd race setting req->flags. We'd
need to think about how to make this sane.

>   	}
>   }
...
> +void io_queue_group_members(struct io_kiocb *req, bool async)
> +{
> +	struct io_kiocb *member = req->grp_link;
> +
> +	if (!member)
> +		return;
> +
> +	req->grp_link = NULL;
> +	while (member) {
> +		struct io_kiocb *next = member->grp_link;
> +
> +		member->grp_leader = req;
> +		if (async)
> +			member->flags |= REQ_F_FORCE_ASYNC;

It doesn't need REQ_F_FORCE_ASYNC. That forces
io-wq -> task_work -> io-wq for no reason when the user
didn't ask for that.

> +
> +		if (unlikely(member->flags & REQ_F_FAIL)) {
> +			io_req_task_queue_fail(member, member->cqe.res);
> +		} else if (member->flags & REQ_F_FORCE_ASYNC) {
> +			io_req_task_queue(member);
> +		} else {
> +			io_queue_sqe(member);
> +		}
> +		member = next;
> +	}
> +}
> +
> +static inline bool __io_complete_group_req(struct io_kiocb *req,
> +			     struct io_kiocb *lead)
> +{
> +	WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
> +
> +	if (WARN_ON_ONCE(lead->grp_refs <= 0))
> +		return false;
> +
> +	/*
> +	 * Set linked leader as failed if any member is failed, so
> +	 * the remained link chain can be terminated
> +	 */
> +	if (unlikely((req->flags & REQ_F_FAIL) &&
> +		     ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
> +		req_set_fail(lead);
> +	return !--lead->grp_refs;
> +}
> +
> +/* Complete group request and collect completed leader for freeing */
> +static void io_complete_group_req(struct io_kiocb *req,
> +		struct io_wq_work_list *grp_list)
> +{
> +	struct io_kiocb *lead = get_group_leader(req);
> +
> +	if (__io_complete_group_req(req, lead)) {
> +		req->flags &= ~REQ_F_SQE_GROUP;
> +		lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
> +
> +		if (!(lead->flags & REQ_F_CQE_SKIP))
> +			io_req_commit_cqe(lead->ctx, lead);
> +
> +		if (req != lead) {
> +			/*
> +			 * Add leader to free list if it isn't there
> +			 * otherwise clearing group flag for freeing it
> +			 * in current batch
> +			 */
> +			if (!(lead->flags & REQ_F_SQE_GROUP))
> +				wq_list_add_tail(&lead->comp_list, grp_list);
> +			else
> +				lead->flags &= ~REQ_F_SQE_GROUP;
> +		}
> +	} else if (req != lead) {
> +		req->flags &= ~REQ_F_SQE_GROUP;
> +	} else {
> +		/*
> +		 * Leader's group flag clearing is delayed until it is
> +		 * removed from free list
> +		 */
> +	}
> +}
> +
>   static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
>   {
>   	struct io_ring_ctx *ctx = req->ctx;
> @@ -897,7 +1013,7 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
>   	}
>   
>   	io_cq_lock(ctx);
> -	if (!(req->flags & REQ_F_CQE_SKIP)) {
> +	if (!(req->flags & REQ_F_CQE_SKIP) && !req_is_group_leader(req)) {
>   		if (!io_fill_cqe_req(ctx, req))
>   			io_req_cqe_overflow(req);
>   	}
> @@ -974,16 +1090,22 @@ __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
>   	return true;
>   }

...
>   static void __io_req_find_next_prep(struct io_kiocb *req)
>   {
>   	struct io_ring_ctx *ctx = req->ctx;
> @@ -1388,6 +1510,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
>   						    comp_list);
>   
>   		if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
> +			/*
> +			 * Group leader may be removed twice, don't free it
> +			 * if group flag isn't cleared, when some members
> +			 * aren't completed yet
> +			 */
> +			if (req->flags & REQ_F_SQE_GROUP) {
> +				node = req->comp_list.next;
> +				req->flags &= ~REQ_F_SQE_GROUP;
> +				continue;
> +			}
> +
>   			if (req->flags & REQ_F_REFCOUNT) {
>   				node = req->comp_list.next;
>   				if (!req_ref_put_and_test(req))
> @@ -1420,6 +1553,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>   	__must_hold(&ctx->uring_lock)
>   {
>   	struct io_submit_state *state = &ctx->submit_state;
> +	struct io_wq_work_list grp_list = {NULL};
>   	struct io_wq_work_node *node;
>   
>   	__io_cq_lock(ctx);
> @@ -1427,11 +1561,22 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>   		struct io_kiocb *req = container_of(node, struct io_kiocb,
>   					    comp_list);
>   
> -		if (!(req->flags & REQ_F_CQE_SKIP))
> +		/*
> +		 * For group leader, cqe has to be committed after all
> +		 * members are committed, when the group leader flag is
> +		 * cleared
> +		 */
> +		if (!(req->flags & REQ_F_CQE_SKIP) &&
> +				likely(!req_is_group_leader(req)))
>   			io_req_commit_cqe(ctx, req);
> +		if (req->flags & REQ_F_SQE_GROUP)
> +			io_complete_group_req(req, &grp_list);


if (unlikely(flags & (SKIP_CQE|GROUP))) {
	<sqe group code>
	if (/* needs to skip CQE posting */)
		continue;
	<more sqe group code>
}

io_req_commit_cqe();


Please. And what's the point of reversing the CQE order and posting
the "leader" completion last? It breaks the natural order in which IO
completes: first the "leader" completes what it needs to do, including
its IO, and then the "members" follow doing their stuff. Besides, you
could even post a CQE for the "leader" when its IO is done and let the
user possibly continue executing. The user can count when the entire
group completes, if that's necessary to know.

>   	}
>   	__io_cq_unlock_post(ctx);
>   
> +	if (!wq_list_empty(&grp_list))
> +		__wq_list_splice(&grp_list, state->compl_reqs.first);
> +
>   	if (!wq_list_empty(&state->compl_reqs)) {
>   		io_free_batch_list(ctx, state->compl_reqs.first);
>   		INIT_WQ_LIST(&state->compl_reqs);
...
>   
> @@ -1754,9 +1903,18 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
>   	struct io_kiocb *nxt = NULL;
>   
>   	if (req_ref_put_and_test(req)) {
> -		if (req->flags & IO_REQ_LINK_FLAGS)
> -			nxt = io_req_find_next(req);
> -		io_free_req(req);
> +		/*
> +		 * CQEs have been posted in io_req_complete_post() except
> +		 * for group leader, and we can't advance the link for
> +		 * group leader until its CQE is posted.
> +		 */
> +		if (!req_is_group_leader(req)) {
> +			if (req->flags & IO_REQ_LINK_FLAGS)
> +				nxt = io_req_find_next(req);
> +			io_free_req(req);
> +		} else {
> +			__io_free_req(req, false);

Something fishy is going on here. io-wq only holds a ref so that the
request is not killed, but the request is owned by someone else, and
the owner is responsible for CQE posting and the logical flow of the
request.

Now the owner puts the request down but for some reason doesn't finish
with it, like posting a CQE, and that's delayed until io-wq drops its
ref?

I'd assume a refcounting hierarchy: first grp_refs go down, and when
they hit zero the request does whatever it needs, posting a CQE at
that point or before, and then puts the request reference down.

> +		}
>   	}
>   	return nxt ? &nxt->work : NULL;
>   }
> @@ -1821,6 +1979,8 @@ void io_wq_submit_work(struct io_wq_work *work)
>   		}
>   	}
>   
> +	if (need_queue_group_members(req->flags))
> +		io_queue_group_members(req, true);
>   	do {
>   		ret = io_issue_sqe(req, issue_flags);
>   		if (ret != -EAGAIN)
> @@ -1932,9 +2092,17 @@ static inline void io_queue_sqe(struct io_kiocb *req)
>   	/*
>   	 * We async punt it if the file wasn't marked NOWAIT, or if the file
>   	 * doesn't support non-blocking read/write attempts
> +	 *
> +	 * Request is always freed after returning from io_queue_sqe(), so
> +	 * it is fine to check its flags after it is issued
> +	 *
> +	 * For group leader, members holds leader references, so it is safe
> +	 * to touch the leader after leader is issued
>   	 */
>   	if (unlikely(ret))
>   		io_queue_async(req, ret);
> +	else if (need_queue_group_members(req->flags))
> +		io_queue_group_members(req, false);

It absolutely cannot be here. There is no relation between this place
in the code and the lifetime of the request. It could have been failed
or completed, and it can also be flying around in a completely arbitrary
context being executed. We're not introducing weird special lifetime
rules for group links; it complicates the code, and there is no way it
can be sanely supported.
For example, it's not forbidden for issue_sqe callbacks to queue requests
to io-wq and return 0 (IOU_ISSUE_SKIP_COMPLETE, which would be turned
into 0), and then we have two racing io_queue_group_members() calls.

You mentioned before there are 2 cases, w/ and w/o deps. That gives me
a very _very_ bad feeling that you're taking two separate features
and trying to hammer them together into one. And the current
dependencies thing looks terrible as a user API, when you have
requests run in parallel, unless there is a special request ahead which
provides buffers but doesn't advertise that in a good way. I don't
know what to do about it. Either we need to find a better API or
just implement one. Maybe always delaying until the leader executes,
like with dependencies, is good enough, and then you set a nop
as the leader to emulate the dependency-less mode.

Taking uapi aside, the two-mode dichotomy tells the story of how the
io_queue_group_members() placement should look. If there are
dependencies, the leader should be executed and completed, and the
members queued at a good place denoting completion, like
io_free_batch_list. In case of no deps it needs to queue everyone
when you'd queue up a linked request (or issue the head).


>   }
>   
>   static void io_queue_sqe_fallback(struct io_kiocb *req)
> @@ -2101,6 +2269,56 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
>   	return def->prep(req, sqe);
>   }
>   
> +static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
> +				     struct io_kiocb *req)
> +{
> +	/*
> +	 * Group chain is similar with link chain: starts with 1st sqe with
> +	 * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
> +	 */
> +	if (group->head) {
> +		struct io_kiocb *lead = group->head;
> +
> +		/* members can't be in link chain, can't be drained */
> +		req->flags &= ~(IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN);

needs to fail the request instead when seeing these flags

> +		lead->grp_refs += 1;
> +		group->last->grp_link = req;
> +		group->last = req;
> +
> +		if (req->flags & REQ_F_SQE_GROUP)
> +			return NULL;
> +
> +		req->grp_link = NULL;
> +		req->flags |= REQ_F_SQE_GROUP;
> +		group->head = NULL;
> +		return lead;
> +	} else if (req->flags & REQ_F_SQE_GROUP) {
> +		group->head = req;
> +		group->last = req;
> +		req->grp_refs = 1;
> +		req->flags |= REQ_F_SQE_GROUP_LEADER;
> +		return NULL;
> +	} else {
> +		return req;
> +	}
> +}
> +
...
> diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
> index e1ce908f0679..8cc347959f7e 100644
> --- a/io_uring/io_uring.h
> +++ b/io_uring/io_uring.h
> @@ -68,6 +68,8 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
>   void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
>   bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
>   void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
> +void io_queue_group_members(struct io_kiocb *req, bool async);
> +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes);
>   
>   struct file *io_file_get_normal(struct io_kiocb *req, int fd);
>   struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
> @@ -339,6 +341,16 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
>   	lockdep_assert_held(&ctx->uring_lock);
>   }
>   
> +static inline bool req_is_group_leader(struct io_kiocb *req)
> +{
> +	return req->flags & REQ_F_SQE_GROUP_LEADER;
> +}
> +
> +static inline bool req_is_group_member(struct io_kiocb *req)
> +{
> +	return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
> +}
> +
>   /*
>    * Don't complete immediately but use deferred completion infrastructure.
>    * Protected by ->uring_lock and can only be used either with
> @@ -352,6 +364,10 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
>   	lockdep_assert_held(&req->ctx->uring_lock);
>   
>   	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
> +
> +	/* members may not be issued when leader is completed */
> +	if (unlikely(req_is_group_leader(req) && req->grp_link))
> +		io_queue_group_members(req, false);
>   }

Here should be no processing, it's not a place where it can be
Ming Lei July 23, 2024, 2:34 p.m. UTC | #2
Hi Pavel,

Thanks for the review!

On Mon, Jul 22, 2024 at 04:33:05PM +0100, Pavel Begunkov wrote:
> On 7/6/24 04:09, Ming Lei wrote:
> > SQE group is defined as one chain of SQEs starting with the first SQE that
> > has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> > doesn't have it set, and it is similar with chain of linked SQEs.
> > 
> > Not like linked SQEs, each sqe is issued after the previous one is completed.
> > All SQEs in one group are submitted in parallel, so there isn't any dependency
> > among SQEs in one group.
> > 
> > The 1st SQE is group leader, and the other SQEs are group member. The whole
> > group share single IOSQE_IO_LINK and IOSQE_IO_DRAIN from group leader, and
> > the two flags are ignored for group members.
> > 
> > When the group is in one link chain, this group isn't submitted until the
> > previous SQE or group is completed. And the following SQE or group can't
> > be started if this group isn't completed. Failure from any group member will
> > fail the group leader, then the link chain can be terminated.
> > 
> > When IOSQE_IO_DRAIN is set for group leader, all requests in this group and
> > previous requests submitted are drained. Given IOSQE_IO_DRAIN can be set for
> > group leader only, we respect IO_DRAIN by always completing group leader as
> > the last one in the group.
> > 
> > Working together with IOSQE_IO_LINK, SQE group provides flexible way to
> > support N:M dependency, such as:
> > 
> > - group A is chained with group B together
> > - group A has N SQEs
> > - group B has M SQEs
> > 
> > then M SQEs in group B depend on N SQEs in group A.
> > 
> > N:M dependency can support some interesting use cases in efficient way:
> > 
> > 1) read from multiple files, then write the read data into single file
> > 
> > 2) read from single file, and write the read data into multiple files
> > 
> > 3) write same data into multiple files, and read data from multiple files and
> > compare if correct data is written
> > 
> > Also IOSQE_SQE_GROUP takes the last bit in sqe->flags, but we still can
> > extend sqe->flags with one uring context flag, such as use __pad3 for
> > non-uring_cmd OPs and part of uring_cmd_flags for uring_cmd OP.
> > 
> > One simple sqe group based copy example[1] shows that:
> 
> Sorry to say, but it's a flawed misleading example. We just can't
> coalesce 4 writes into 1 and say it's a win of this feature. It'd
> be a different thing if you compare perfectly optimised version
> without vs with groups. Note, I'm not asking you to do that, the
> use case here is zerocopy ublk.

OK, groups provide one easy way to do that.

> 
> 
> > 1) buffered copy:
> > - perf is improved by 5%
> > 
> > 2) direct IO mode
> > - perf is improved by 27%
> > 
> > 3) sqe group copy, which keeps QD not changed, just re-organize IOs in the
> > following ways:
> > 
> > - each group have 4 READ IOs, linked by one single write IO for writing
> >    the read data in sqe group to destination file
> > 
> > - the 1st 12 groups have (4 + 1) IOs, and the last group have (3 + 1)
> >    IOs
> > 
> > - test code:
> > 	https://github.com/ming1/liburing/commits/sqe_group_v2/
> > 
> > Suggested-by: Kevin Wolf <kwolf@redhat.com>
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index 7597344a6440..b5415f0774e5 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> ...
> > @@ -421,6 +422,10 @@ static inline void io_req_track_inflight(struct io_kiocb *req)
> >   	if (!(req->flags & REQ_F_INFLIGHT)) {
> >   		req->flags |= REQ_F_INFLIGHT;
> >   		atomic_inc(&req->task->io_uring->inflight_tracked);
> > +
> > +		/* make members' REQ_F_INFLIGHT discoverable via leader's */
> > +		if (req_is_group_member(req))
> > +			io_req_track_inflight(req->grp_leader);
> 
> Requests in a group can be run in parallel with the leader (i.e.
> io_issue_sqe()), right? In which case it'd race setting req->flags. We'd
> need to think how make it sane.

Yeah, an easier alternative could be to always mark the leader as INFLIGHT.

> 
> >   	}
> >   }
> ...
> > +void io_queue_group_members(struct io_kiocb *req, bool async)
> > +{
> > +	struct io_kiocb *member = req->grp_link;
> > +
> > +	if (!member)
> > +		return;
> > +
> > +	req->grp_link = NULL;
> > +	while (member) {
> > +		struct io_kiocb *next = member->grp_link;
> > +
> > +		member->grp_leader = req;
> > +		if (async)
> > +			member->flags |= REQ_F_FORCE_ASYNC;
> 
> It doesn't need REQ_F_FORCE_ASYNC. That forces
> io-wq -> task_work -> io-wq for no reason when the user
> didn't ask for that.

OK.

> 
> > +
> > +		if (unlikely(member->flags & REQ_F_FAIL)) {
> > +			io_req_task_queue_fail(member, member->cqe.res);
> > +		} else if (member->flags & REQ_F_FORCE_ASYNC) {
> > +			io_req_task_queue(member);
> > +		} else {
> > +			io_queue_sqe(member);
> > +		}
> > +		member = next;
> > +	}
> > +}
> > +
> > +static inline bool __io_complete_group_req(struct io_kiocb *req,
> > +			     struct io_kiocb *lead)
> > +{
> > +	WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
> > +
> > +	if (WARN_ON_ONCE(lead->grp_refs <= 0))
> > +		return false;
> > +
> > +	/*
> > +	 * Set linked leader as failed if any member is failed, so
> > +	 * the remained link chain can be terminated
> > +	 */
> > +	if (unlikely((req->flags & REQ_F_FAIL) &&
> > +		     ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
> > +		req_set_fail(lead);
> > +	return !--lead->grp_refs;
> > +}
> > +
> > +/* Complete group request and collect completed leader for freeing */
> > +static void io_complete_group_req(struct io_kiocb *req,
> > +		struct io_wq_work_list *grp_list)
> > +{
> > +	struct io_kiocb *lead = get_group_leader(req);
> > +
> > +	if (__io_complete_group_req(req, lead)) {
> > +		req->flags &= ~REQ_F_SQE_GROUP;
> > +		lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
> > +
> > +		if (!(lead->flags & REQ_F_CQE_SKIP))
> > +			io_req_commit_cqe(lead->ctx, lead);
> > +
> > +		if (req != lead) {
> > +			/*
> > +			 * Add leader to free list if it isn't there
> > +			 * otherwise clearing group flag for freeing it
> > +			 * in current batch
> > +			 */
> > +			if (!(lead->flags & REQ_F_SQE_GROUP))
> > +				wq_list_add_tail(&lead->comp_list, grp_list);
> > +			else
> > +				lead->flags &= ~REQ_F_SQE_GROUP;
> > +		}
> > +	} else if (req != lead) {
> > +		req->flags &= ~REQ_F_SQE_GROUP;
> > +	} else {
> > +		/*
> > +		 * Leader's group flag clearing is delayed until it is
> > +		 * removed from free list
> > +		 */
> > +	}
> > +}
> > +
> >   static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> >   {
> >   	struct io_ring_ctx *ctx = req->ctx;
> > @@ -897,7 +1013,7 @@ static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
> >   	}
> >   	io_cq_lock(ctx);
> > -	if (!(req->flags & REQ_F_CQE_SKIP)) {
> > +	if (!(req->flags & REQ_F_CQE_SKIP) && !req_is_group_leader(req)) {
> >   		if (!io_fill_cqe_req(ctx, req))
> >   			io_req_cqe_overflow(req);
> >   	}
> > @@ -974,16 +1090,22 @@ __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
> >   	return true;
> >   }
> 
> ...
> >   static void __io_req_find_next_prep(struct io_kiocb *req)
> >   {
> >   	struct io_ring_ctx *ctx = req->ctx;
> > @@ -1388,6 +1510,17 @@ static void io_free_batch_list(struct io_ring_ctx *ctx,
> >   						    comp_list);
> >   		if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
> > +			/*
> > +			 * Group leader may be removed twice, don't free it
> > +			 * if group flag isn't cleared, when some members
> > +			 * aren't completed yet
> > +			 */
> > +			if (req->flags & REQ_F_SQE_GROUP) {
> > +				node = req->comp_list.next;
> > +				req->flags &= ~REQ_F_SQE_GROUP;
> > +				continue;
> > +			}
> > +
> >   			if (req->flags & REQ_F_REFCOUNT) {
> >   				node = req->comp_list.next;
> >   				if (!req_ref_put_and_test(req))
> > @@ -1420,6 +1553,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> >   	__must_hold(&ctx->uring_lock)
> >   {
> >   	struct io_submit_state *state = &ctx->submit_state;
> > +	struct io_wq_work_list grp_list = {NULL};
> >   	struct io_wq_work_node *node;
> >   	__io_cq_lock(ctx);
> > @@ -1427,11 +1561,22 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> >   		struct io_kiocb *req = container_of(node, struct io_kiocb,
> >   					    comp_list);
> > -		if (!(req->flags & REQ_F_CQE_SKIP))
> > +		/*
> > +		 * For group leader, cqe has to be committed after all
> > +		 * members are committed, when the group leader flag is
> > +		 * cleared
> > +		 */
> > +		if (!(req->flags & REQ_F_CQE_SKIP) &&
> > +				likely(!req_is_group_leader(req)))
> >   			io_req_commit_cqe(ctx, req);
> > +		if (req->flags & REQ_F_SQE_GROUP)
> > +			io_complete_group_req(req, &grp_list);
> 
> 
> if (unlikely(flags & (SKIP_CQE|GROUP))) {
> 	<sqe group code>
> 	if (/* needs to skip CQE posting */)
> 		continue;

io_complete_group_req() needs to be called in the CQE_SKIP case too,
because the current request may belong to a group.

> 	<more sqe group code>
> }
> 
> io_req_commit_cqe();
> 
> 
> Please. And, what's the point of reversing the CQE order and
> posting the "leader" completion last? It breaks the natural
> order of how IO complete, that is first the "leader" completes
> what it has need to do including IO, and then "members" follow
> doing their stuff. And besides, you can even post a CQE for the
> "leader" when its IO is done and let the user possibly continue
> executing. And the user can count when the entire group complete,
> if that's necessary to know.

There are several reasons for posting the leader's completion last:

1) only the leader is visible in the link chain, so IO drain has to wait
for the whole group by draining the leader

2) when members depend on the leader, the leader holds group-wide
resources, so it has to be completed after all members are done

> 
> >   	}
> >   	__io_cq_unlock_post(ctx);
> > +	if (!wq_list_empty(&grp_list))
> > +		__wq_list_splice(&grp_list, state->compl_reqs.first);
> > +
> >   	if (!wq_list_empty(&state->compl_reqs)) {
> >   		io_free_batch_list(ctx, state->compl_reqs.first);
> >   		INIT_WQ_LIST(&state->compl_reqs);
> ...
> > @@ -1754,9 +1903,18 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> >   	struct io_kiocb *nxt = NULL;
> >   	if (req_ref_put_and_test(req)) {
> > -		if (req->flags & IO_REQ_LINK_FLAGS)
> > -			nxt = io_req_find_next(req);
> > -		io_free_req(req);
> > +		/*
> > +		 * CQEs have been posted in io_req_complete_post() except
> > +		 * for group leader, and we can't advance the link for
> > +		 * group leader until its CQE is posted.
> > +		 */
> > +		if (!req_is_group_leader(req)) {
> > +			if (req->flags & IO_REQ_LINK_FLAGS)
> > +				nxt = io_req_find_next(req);
> > +			io_free_req(req);
> > +		} else {
> > +			__io_free_req(req, false);
> 
> Something fishy is going on here. io-wq only holds a ref that the
> request is not killed, but it's owned by someone else. And the
> owner is responsible for CQE posting and logical flow of the
> request.

io_req_complete_post() is always called in io-wq for CQE posting before
io-wq drops its ref.

The ref held by io-wq prevents the owner from calling io_free_req(), so
the owner actually can't post the CQE.

> 
> Now, the owner put the request down but for some reason didn't
> finish with the request like posting a CQE, but it's delayed to
> iowq dropping the ref?
> 
> I assume the refcounting hierarchy, first grp_refs go down,
> and when it hits zero it does whatever it needs, posting a
> CQE at that point of prior, and then puts the request reference
> down.

Yes, that is why the patch doesn't mark CQE_SKIP for the leader in
io_wq_free_work(); meanwhile leader->link has to be issued after the
leader's CQE is posted in the io-wq case.

But grp_refs is dropped after the io-wq request reference drops to
zero, so both the io-wq and non-io-wq code paths can be unified
wrt. dealing with grp_refs, and it needn't be updated in an extra
(io-wq) context.

> 
> > +		}
> >   	}
> >   	return nxt ? &nxt->work : NULL;
> >   }
> > @@ -1821,6 +1979,8 @@ void io_wq_submit_work(struct io_wq_work *work)
> >   		}
> >   	}
> > +	if (need_queue_group_members(req->flags))
> > +		io_queue_group_members(req, true);
> >   	do {
> >   		ret = io_issue_sqe(req, issue_flags);
> >   		if (ret != -EAGAIN)
> > @@ -1932,9 +2092,17 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> >   	/*
> >   	 * We async punt it if the file wasn't marked NOWAIT, or if the file
> >   	 * doesn't support non-blocking read/write attempts
> > +	 *
> > +	 * Request is always freed after returning from io_queue_sqe(), so
> > +	 * it is fine to check its flags after it is issued
> > +	 *
> > +	 * For group leader, members holds leader references, so it is safe
> > +	 * to touch the leader after leader is issued
> >   	 */
> >   	if (unlikely(ret))
> >   		io_queue_async(req, ret);
> > +	else if (need_queue_group_members(req->flags))
> > +		io_queue_group_members(req, false);
> 
> It absolutely cannot be here. There is no relation between this place
> in code and lifetime of the request. It could've been failed or
> completed, it can also be flying around in a completely arbitrary
> context being executed. We're not introducing weird special lifetime
> rules for group links. It complicates the code, and no way it can be
> sanely supported.
> For example, it's not forbidden for issue_sqe callbacks to queue requests
> to io-wq and return 0 (IOU_ISSUE_SKIP_COMPLETE which would be turned
> into 0), and then we have two racing io_queue_group_members() calls.

It can be handled by adding io_queue_sqe_group(), in which:

- req->grp_link is moved to a local variable, and every
  member's grp_leader is made to point to req

- io_queue_sqe() is called for the leader

- then io_queue_group_members() is called for all members, making sure
not to touch the leader in io_queue_group_members()

What do you think of this way?

> 
> You mentioned before there are 2 cases, w/ and w/o deps. That gives me
> a very _very_ bad feeling that you're taking two separate features,
> and trying to hammer them together into one. And the current

Yes, because the w/o deps mode reuses most of the code of the w/ deps mode.

> dependencies thing looks terrible as a user api, when you have
> requests run in parallel, unless there is a special request ahead which
> provides buffers but doesn't advertise that in a good way. I don't
> know what to do about it. Either we need to find a better api or
> just implement one. Maybe, always delaying until leader executes
> like with dependencies is good enough, and then you set a nop
> as a leader to emulate the dependency-less mode.

Another way is to start with the w/ deps mode only, which is exactly
what ublk zero copy needs; the w/o deps mode is just for making
the feature more generic, as suggested by Kevin.

> 
> Taking uapi aside, the two mode dichotomy tells the story how
> io_queue_group_members() placing should look like. If there
> are dependencies, the leader should be executed, completed,
> and queue members in a good place denoting completion, like
> io_free_batch_list. In case of no deps it need to queue everyone
> when you'd queue up a linked request (or issue the head).
> 
> 
> >   }
> >   static void io_queue_sqe_fallback(struct io_kiocb *req)
> > @@ -2101,6 +2269,56 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
> >   	return def->prep(req, sqe);
> >   }
> > +static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
> > +				     struct io_kiocb *req)
> > +{
> > +	/*
> > +	 * Group chain is similar with link chain: starts with 1st sqe with
> > +	 * REQ_F_SQE_GROUP, and ends with the 1st sqe without REQ_F_SQE_GROUP
> > +	 */
> > +	if (group->head) {
> > +		struct io_kiocb *lead = group->head;
> > +
> > +		/* members can't be in link chain, can't be drained */
> > +		req->flags &= ~(IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN);
> 
> needs to fail instead seeing these flags

OK.

> 
> > +		lead->grp_refs += 1;
> > +		group->last->grp_link = req;
> > +		group->last = req;
> > +
> > +		if (req->flags & REQ_F_SQE_GROUP)
> > +			return NULL;
> > +
> > +		req->grp_link = NULL;
> > +		req->flags |= REQ_F_SQE_GROUP;
> > +		group->head = NULL;
> > +		return lead;
> > +	} else if (req->flags & REQ_F_SQE_GROUP) {
> > +		group->head = req;
> > +		group->last = req;
> > +		req->grp_refs = 1;
> > +		req->flags |= REQ_F_SQE_GROUP_LEADER;
> > +		return NULL;
> > +	} else {
> > +		return req;
> > +	}
> > +}
> > +
> ...
> > diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
> > index e1ce908f0679..8cc347959f7e 100644
> > --- a/io_uring/io_uring.h
> > +++ b/io_uring/io_uring.h
> > @@ -68,6 +68,8 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
> >   void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
> >   bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
> >   void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
> > +void io_queue_group_members(struct io_kiocb *req, bool async);
> > +void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes);
> >   struct file *io_file_get_normal(struct io_kiocb *req, int fd);
> >   struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
> > @@ -339,6 +341,16 @@ static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
> >   	lockdep_assert_held(&ctx->uring_lock);
> >   }
> > +static inline bool req_is_group_leader(struct io_kiocb *req)
> > +{
> > +	return req->flags & REQ_F_SQE_GROUP_LEADER;
> > +}
> > +
> > +static inline bool req_is_group_member(struct io_kiocb *req)
> > +{
> > +	return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
> > +}
> > +
> >   /*
> >    * Don't complete immediately but use deferred completion infrastructure.
> >    * Protected by ->uring_lock and can only be used either with
> > @@ -352,6 +364,10 @@ static inline void io_req_complete_defer(struct io_kiocb *req)
> >   	lockdep_assert_held(&req->ctx->uring_lock);
> >   	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
> > +
> > +	/* members may not be issued when leader is completed */
> > +	if (unlikely(req_is_group_leader(req) && req->grp_link))
> > +		io_queue_group_members(req, false);
> >   }
> 
> Here should be no processing, it's not a place where it can be

OK.


Thanks,
Ming
Pavel Begunkov July 24, 2024, 1:41 p.m. UTC | #3
On 7/23/24 15:34, Ming Lei wrote:
> Hi Pavel,
> 
> Thanks for the review!
> 
> On Mon, Jul 22, 2024 at 04:33:05PM +0100, Pavel Begunkov wrote:
>> On 7/6/24 04:09, Ming Lei wrote:
>>> SQE group is defined as one chain of SQEs starting with the first SQE that
>>> has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
>>> doesn't have it set, and it is similar with chain of linked SQEs.
>>>
...
>>> ---
>>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>>> index 7597344a6440..b5415f0774e5 100644
>>> --- a/io_uring/io_uring.c
>>> +++ b/io_uring/io_uring.c
>> ...
>>> @@ -421,6 +422,10 @@ static inline void io_req_track_inflight(struct io_kiocb *req)
>>>    	if (!(req->flags & REQ_F_INFLIGHT)) {
>>>    		req->flags |= REQ_F_INFLIGHT;
>>>    		atomic_inc(&req->task->io_uring->inflight_tracked);
>>> +
>>> +		/* make members' REQ_F_INFLIGHT discoverable via leader's */
>>> +		if (req_is_group_member(req))
>>> +			io_req_track_inflight(req->grp_leader);
>>
>> Requests in a group can be run in parallel with the leader (i.e.
>> io_issue_sqe()), right? In which case it'd race setting req->flags. We'd
>> need to think how make it sane.
> 
> Yeah, another easier way could be to always mark leader as INFLIGHT.

I've been thinking a bit more about it; there should be an easier way
out since we now have lazy file assignment. I sent a patch closing
a gap, and I need to double check whether that's enough, but let's
forget about additional REQ_F_INFLIGHT handling here.

...
>>> @@ -1420,6 +1553,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>>>    	__must_hold(&ctx->uring_lock)
>>>    {
>>>    	struct io_submit_state *state = &ctx->submit_state;
>>> +	struct io_wq_work_list grp_list = {NULL};
>>>    	struct io_wq_work_node *node;
>>>    	__io_cq_lock(ctx);
>>> @@ -1427,11 +1561,22 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
>>>    		struct io_kiocb *req = container_of(node, struct io_kiocb,
>>>    					    comp_list);
>>> -		if (!(req->flags & REQ_F_CQE_SKIP))
>>> +		/*
>>> +		 * For group leader, cqe has to be committed after all
>>> +		 * members are committed, when the group leader flag is
>>> +		 * cleared
>>> +		 */
>>> +		if (!(req->flags & REQ_F_CQE_SKIP) &&
>>> +				likely(!req_is_group_leader(req)))
>>>    			io_req_commit_cqe(ctx, req);
>>> +		if (req->flags & REQ_F_SQE_GROUP)
>>> +			io_complete_group_req(req, &grp_list);
>>
>>
>> if (unlikely(flags & (SKIP_CQE|GROUP))) {
>> 	<sqe group code>
>> 	if (/* needs to skip CQE posting */)
>> 		continue;
> 
> io_complete_group_req() needs to be called too in case of CQE_SKIP
> because the current request may belong to group.

What's the problem? You can even do

if (flags & GROUP) {
	// do all group specific stuff
	// handle CQE_SKIP if needed
} else if (flags & CQE_SKIP) {
	continue;
}

And call io_complete_group_req() and other group stuff
at any place there.

>> 	<more sqe group code>
>> }
>>
>> io_req_commit_cqe();
>>
>>
>> Please. And, what's the point of reversing the CQE order and
>> posting the "leader" completion last? It breaks the natural
>> order of how IO complete, that is first the "leader" completes
>> what it has need to do including IO, and then "members" follow
>> doing their stuff. And besides, you can even post a CQE for the
>> "leader" when its IO is done and let the user possibly continue
>> executing. And the user can count when the entire group complete,
>> if that's necessary to know.
> 
> There are several reasons for posting leader completion last:
> 
> 1) only the leader is visible in link chain, IO drain has to wait
> the whole group by draining the leader

Let's forget about IO drain. It's a feature I'd love to see killed
(if only we could); it's a slow path, and for the same reasons I'll
discourage anyone from using it.

For correctness we can just copy the link trick, i.e. mark the next
request outside of the current group/link as drained like below or
just fail the group.

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7ed1e009aaec..aa0b93765406 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1975,7 +1975,7 @@ static void io_init_req_drain(struct io_kiocb *req)
  	struct io_kiocb *head = ctx->submit_state.link.head;
  
  	ctx->drain_active = true;
-	if (head) {
+	if (head || ctx->submit_state.group.head) {
  		/*
  		 * If we need to drain a request in the middle of a link, drain
  		 * the head request and the next request/link after the current



> 2) when members depend on leader, leader holds group-wide resource,
> so it has to be completed after all members are done

I'm talking about posting a CQE but not destroying the request
(and associated resources).

>>>    	}
>>>    	__io_cq_unlock_post(ctx);
>>> +	if (!wq_list_empty(&grp_list))
>>> +		__wq_list_splice(&grp_list, state->compl_reqs.first);
>>> +
>>>    	if (!wq_list_empty(&state->compl_reqs)) {
>>>    		io_free_batch_list(ctx, state->compl_reqs.first);
>>>    		INIT_WQ_LIST(&state->compl_reqs);
>> ...
>>> @@ -1754,9 +1903,18 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
>>>    	struct io_kiocb *nxt = NULL;
>>>    	if (req_ref_put_and_test(req)) {
>>> -		if (req->flags & IO_REQ_LINK_FLAGS)
>>> -			nxt = io_req_find_next(req);
>>> -		io_free_req(req);
>>> +		/*
>>> +		 * CQEs have been posted in io_req_complete_post() except
>>> +		 * for group leader, and we can't advance the link for
>>> +		 * group leader until its CQE is posted.
>>> +		 */
>>> +		if (!req_is_group_leader(req)) {
>>> +			if (req->flags & IO_REQ_LINK_FLAGS)
>>> +				nxt = io_req_find_next(req);
>>> +			io_free_req(req);
>>> +		} else {
>>> +			__io_free_req(req, false);
>>
>> Something fishy is going on here. io-wq only holds a ref that the
>> request is not killed, but it's owned by someone else. And the
>> owner is responsible for CQE posting and logical flow of the
>> request.
> 
> io_req_complete_post() is always called in io-wq for CQE posting
> before io-wq drops ref.
> 
> The ref held by io-wq prevents the owner from calling io_free_req(),
> so the owner actually can't run CQE post.
> 
>>
>> Now, the owner put the request down but for some reason didn't
>> finish with the request like posting a CQE, but it's delayed to
>> iowq dropping the ref?
>>
>> I assume the refcounting hierarchy, first grp_refs go down,
>> and when it hits zero it does whatever it needs, posting a
>> CQE at that point of prior, and then puts the request reference
>> down.
> 
> Yes, that is why the patch doesn't mark CQE_SKIP for leader in
> io_wq_free_work(), meantime leader->link has to be issued after
> leader's CQE is posted in case of io-wq.

The point is that io_wq_free_work() doesn't need to know
anything about groups and can just continue setting the
skip-CQE flag as before if the rest is done differently.

> But grp_refs is dropped after io-wq request reference drops to
> zero, then both io-wq and nor-io-wq code path can be unified
> wrt. dealing with grp_refs, meantime it needn't to be updated
> in extra(io-wq) context.

Let's try to describe how it can work. First, I'm only describing
the dep mode for simplicity. And for the argument's sake we can say
that all CQEs are posted via io_submit_flush_completions.

io_req_complete_post() {
	if (flags & GROUP) {
		req->io_task_work.func = io_req_task_complete;
		io_req_task_work_add(req);
		return;
	}
	...
}

You can do it this way, nobody would ever care, and it shouldn't
affect performance. Otherwise everything down below can probably
be extended to io_req_complete_post().

To avoid confusion in terminology, what I call a member below doesn't
include a leader. IOW, a group leader request is not a member.

At the init we have:
grp_refs = nr_members; /* doesn't include the leader */

Let's also say that the group leader always goes
through io_submit_flush_completions() twice, just as it is
with your patches.

1) The first time we see the leader in io_submit_flush_completions()
is when it's done with resource preparation. For example, it was
doing some IO into a buffer, and now is ready to give that buffer
with data to group members. At this point it should queue up all group
members. And we also drop 1 grp_ref. There will also be no
io_issue_sqe() for it anymore.

2) Members are executed and completed, in io_submit_flush_completions()
they drop 1 grp_leader->grp_ref reference each.

3) When all members complete, the leader's grp_refs becomes 0. Here
the leader is queued for io_submit_flush_completions() a second time,
at which point it drops ublk buffers and such and gets destroyed.

You can post a CQE in 1) and then set CQE_SKIP. It can also be fitted
into 3). Pseudo code for when we post it in step 1):

io_free_batch_list() {
	if (req->flags & GROUP) {
		if (req_is_member(req)) {
			req->grp_leader->grp_refs--;
			if (req->grp_leader->grp_refs == 0) {
				req->io_task_work.func = io_req_task_complete;
				io_req_task_work_add(req->grp_leader);
				// can be done better....
			}
			goto free_req;
		}
		WARN_ON_ONCE(!req_is_leader());

		if (!(req->flags & SEEN_FIRST_TIME)) {
			// already posted it just before coming here
			req->flags |= SKIP_CQE;
			// we'll see it again when grp_refs hit 0
			req->flags |= SEEN_FIRST_TIME;

			// Don't free the req, we're leaving it alive for now.
			// req->ref/REQ_F_REFCOUNT will be put next time we get here.
			return; // or continue
		}

		clean_up_request_resources(); // calls back into ublk
		// and now free the leader
	}

free_req:
	// the rest of io_free_batch_list()
	if (flags & REQ_F_REFCOUNT) {
		req_drop_ref();
		....
	}
	...
}


This way
1) There are relatively easy request lifetime rules.
2) No special ownership/lifetime rules for the group link field.
3) You don't need to touch io_req_complete_defer()
4) io-wq doesn't know anything about grp_refs and doesn't play tricks
with SKIP_CQE.
5) All handling is in one place and doesn't need multiple checks in the
common hot path. You might need some extra checks, but at least they're
contained.


>>> +		}
>>>    	}
>>>    	return nxt ? &nxt->work : NULL;
>>>    }
>>> @@ -1821,6 +1979,8 @@ void io_wq_submit_work(struct io_wq_work *work)
>>>    		}
>>>    	}
>>> +	if (need_queue_group_members(req->flags))
>>> +		io_queue_group_members(req, true);
>>>    	do {
>>>    		ret = io_issue_sqe(req, issue_flags);
>>>    		if (ret != -EAGAIN)
>>> @@ -1932,9 +2092,17 @@ static inline void io_queue_sqe(struct io_kiocb *req)
>>>    	/*
>>>    	 * We async punt it if the file wasn't marked NOWAIT, or if the file
>>>    	 * doesn't support non-blocking read/write attempts
>>> +	 *
>>> +	 * Request is always freed after returning from io_queue_sqe(), so
>>> +	 * it is fine to check its flags after it is issued
>>> +	 *
>>> +	 * For group leader, members holds leader references, so it is safe
>>> +	 * to touch the leader after leader is issued
>>>    	 */
>>>    	if (unlikely(ret))
>>>    		io_queue_async(req, ret);
>>> +	else if (need_queue_group_members(req->flags))
>>> +		io_queue_group_members(req, false);
>>
>> It absolutely cannot be here. There is no relation between this place
>> in code and lifetime of the request. It could've been failed or
>> completed, it can also be flying around in a completely arbitrary
>> context being executed. We're not introducing weird special lifetime
>> rules for group links. It complicates the code, and no way it can be
>> sanely supported.
>> For example, it's not forbidden for issue_sqe callbacks to queue requests
>> to io-wq and return 0 (IOU_ISSUE_SKIP_COMPLETE which would be turned
>> into 0), and then we have two racing io_queue_group_members() calls.
> 
> It can by handled by adding io_queue_sqe_group() in which:
> 
> - req->grp_link is moved to one local variable, and make every
>    member's grp_leader point to req
> 
> - call io_queue_sqe() for leader
> 
> - then call io_queue_group_members() for all members, and make sure
> not touch leader in io_queue_group_members()
> 
> What do you think of this way?

At the end of the day, io_queue_sqe() is just not the right place for
it. Take a look at the scheme above; I believe it should work
Pavel Begunkov July 24, 2024, 2:54 p.m. UTC | #4
On 7/24/24 14:41, Pavel Begunkov wrote:
...> io_free_batch_list() {
>      if (req->flags & GROUP) {
>          if (req_is_member(req)) {
>              req->grp_leader->grp_refs--;
>              if (req->grp_leader->grp_refs == 0) {
>                  req->io_task_work.func = io_req_task_complete;
>                  io_req_task_work_add(req->grp_leader);
>                  // can be done better....
>              }
>              goto free_req;
>          }
>          WARN_ON_ONCE(!req_is_leader());
> 
>          if (!(req->flags & SEEN_FIRST_TIME)) {
>              // already posted it just before coming here
>              req->flags |= SKIP_CQE;
>              // we'll see it again when grp_refs hit 0
>              req->flags |= SEEN_FIRST_TIME;

Forgot queue_group_members() here
  
>              // Don't free the req, we're leaving it alive for now.
>              // req->ref/REQ_F_REFCOUNT will be put next time we get here.
>              return; // or continue
>          }
> 
>          clean_up_request_resources(); // calls back into ublk
>          // and now free the leader
>      }
> 
> free_req:
>      // the rest of io_free_batch_list()
>      if (flags & REQ_F_REFCOUNT) {
>          req_drop_ref();
>          ....
>      }
>      ...
> }
Ming Lei July 25, 2024, 10:33 a.m. UTC | #5
On Wed, Jul 24, 2024 at 02:41:38PM +0100, Pavel Begunkov wrote:
> On 7/23/24 15:34, Ming Lei wrote:
> > Hi Pavel,
> > 
> > Thanks for the review!
> > 
> > On Mon, Jul 22, 2024 at 04:33:05PM +0100, Pavel Begunkov wrote:
> > > On 7/6/24 04:09, Ming Lei wrote:
> > > > SQE group is defined as one chain of SQEs starting with the first SQE that
> > > > has IOSQE_SQE_GROUP set, and ending with the first subsequent SQE that
> > > > doesn't have it set, and it is similar with chain of linked SQEs.
> > > > 
> ...
> > > > ---
> > > > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > > index 7597344a6440..b5415f0774e5 100644
> > > > --- a/io_uring/io_uring.c
> > > > +++ b/io_uring/io_uring.c
> > > ...
> > > > @@ -421,6 +422,10 @@ static inline void io_req_track_inflight(struct io_kiocb *req)
> > > >    	if (!(req->flags & REQ_F_INFLIGHT)) {
> > > >    		req->flags |= REQ_F_INFLIGHT;
> > > >    		atomic_inc(&req->task->io_uring->inflight_tracked);
> > > > +
> > > > +		/* make members' REQ_F_INFLIGHT discoverable via leader's */
> > > > +		if (req_is_group_member(req))
> > > > +			io_req_track_inflight(req->grp_leader);
> > > 
> > > Requests in a group can be run in parallel with the leader (i.e.
> > > io_issue_sqe()), right? In which case it'd race setting req->flags. We'd
> > > need to think how make it sane.
> > 
> > Yeah, another easier way could be to always mark leader as INFLIGHT.
> 
> I've been thinking a bit more about it, there should be an easier way
> out since we now have lazy file assignment. I sent a patch closing
> a gap, I need to double check if that's enough, but let's forget
> about additional REQ_F_INFLIGHT handling here.

Great, thanks!

> 
> ...
> > > > @@ -1420,6 +1553,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > > >    	__must_hold(&ctx->uring_lock)
> > > >    {
> > > >    	struct io_submit_state *state = &ctx->submit_state;
> > > > +	struct io_wq_work_list grp_list = {NULL};
> > > >    	struct io_wq_work_node *node;
> > > >    	__io_cq_lock(ctx);
> > > > @@ -1427,11 +1561,22 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > > >    		struct io_kiocb *req = container_of(node, struct io_kiocb,
> > > >    					    comp_list);
> > > > -		if (!(req->flags & REQ_F_CQE_SKIP))
> > > > +		/*
> > > > +		 * For group leader, cqe has to be committed after all
> > > > +		 * members are committed, when the group leader flag is
> > > > +		 * cleared
> > > > +		 */
> > > > +		if (!(req->flags & REQ_F_CQE_SKIP) &&
> > > > +				likely(!req_is_group_leader(req)))
> > > >    			io_req_commit_cqe(ctx, req);
> > > > +		if (req->flags & REQ_F_SQE_GROUP)
> > > > +			io_complete_group_req(req, &grp_list);
> > > 
> > > 
> > > if (unlikely(flags & (SKIP_CQE|GROUP))) {
> > > 	<sqe group code>
> > > 	if (/* needs to skip CQE posting */)
> > > 		continue;
> > 
> > io_complete_group_req() needs to be called too in case of CQE_SKIP
> > because the current request may belong to group.
> 
> What's the problem? You can even do
> 
> if (flags & GROUP) {
> 	// do all group specific stuff
> 	// handle CQE_SKIP if needed
> } else if (flags & CQE_SKIP) {
> 	continue;
> }
> 
> And call io_complete_group_req() and other group stuff
> at any place there.

It depends on whether the leader's CQE needs to be posted last.

> 
> > > 	<more sqe group code>
> > > }
> > > 
> > > io_req_commit_cqe();
> > > 
> > > 
> > > Please. And, what's the point of reversing the CQE order and
> > > posting the "leader" completion last? It breaks the natural
> > > order of how IO complete, that is first the "leader" completes
> > > what it has need to do including IO, and then "members" follow
> > > doing their stuff. And besides, you can even post a CQE for the
> > > "leader" when its IO is done and let the user possibly continue
> > > executing. And the user can count when the entire group complete,
> > > if that's necessary to know.
> > 
> > There are several reasons for posting leader completion last:
> > 
> > 1) only the leader is visible in link chain, IO drain has to wait
> > the whole group by draining the leader
> 
> Let's forget about IO drain. It's a feature I'd love to see killed
> (if only we can), it's a slow path, for same reasons I'll discourage
> anyone using it.

Then we have to fail the SQE group in case of IO_DRAIN; otherwise it
may fail the liburing tests.

I am fine with this way; at least ublk doesn't use IO_DRAIN at all.

> 
> For correctness we can just copy the link trick, i.e. mark the next
> request outside of the current group/link as drained like below or
> just fail the group.
> 
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 7ed1e009aaec..aa0b93765406 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1975,7 +1975,7 @@ static void io_init_req_drain(struct io_kiocb *req)
>  	struct io_kiocb *head = ctx->submit_state.link.head;
>  	ctx->drain_active = true;
> -	if (head) {
> +	if (head || ctx->submit_state.group.head) {
>  		/*
>  		 * If we need to drain a request in the middle of a link, drain
>  		 * the head request and the next request/link after the current
> 
> 
> 
> > 2) when members depend on leader, leader holds group-wide resource,
> > so it has to be completed after all members are done
> 
> I'm talking about posting a CQE but not destroying the request
> (and associated resources).

For example, in ublk the group leader is a uring command that provides
a buffer. If its CQE is observed before the member consumers' CQEs,
the application may get confused, because a consumer is supposed to be
consuming the provided buffer/resource, and the leader shouldn't have
been completed before all consumers.

> 
> > > >    	}
> > > >    	__io_cq_unlock_post(ctx);
> > > > +	if (!wq_list_empty(&grp_list))
> > > > +		__wq_list_splice(&grp_list, state->compl_reqs.first);
> > > > +
> > > >    	if (!wq_list_empty(&state->compl_reqs)) {
> > > >    		io_free_batch_list(ctx, state->compl_reqs.first);
> > > >    		INIT_WQ_LIST(&state->compl_reqs);
> > > ...
> > > > @@ -1754,9 +1903,18 @@ struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> > > >    	struct io_kiocb *nxt = NULL;
> > > >    	if (req_ref_put_and_test(req)) {
> > > > -		if (req->flags & IO_REQ_LINK_FLAGS)
> > > > -			nxt = io_req_find_next(req);
> > > > -		io_free_req(req);
> > > > +		/*
> > > > +		 * CQEs have been posted in io_req_complete_post() except
> > > > +		 * for group leader, and we can't advance the link for
> > > > +		 * group leader until its CQE is posted.
> > > > +		 */
> > > > +		if (!req_is_group_leader(req)) {
> > > > +			if (req->flags & IO_REQ_LINK_FLAGS)
> > > > +				nxt = io_req_find_next(req);
> > > > +			io_free_req(req);
> > > > +		} else {
> > > > +			__io_free_req(req, false);
> > > 
> > > Something fishy is going on here. io-wq only holds a ref that the
> > > request is not killed, but it's owned by someone else. And the
> > > owner is responsible for CQE posting and logical flow of the
> > > request.
> > 
> > io_req_complete_post() is always called in io-wq for CQE posting
> > before io-wq drops ref.
> > 
> > The ref held by io-wq prevents the owner from calling io_free_req(),
> > so the owner actually can't run CQE post.
> > 
> > > 
> > > Now, the owner put the request down but for some reason didn't
> > > finish with the request like posting a CQE, but it's delayed to
> > > iowq dropping the ref?
> > > 
> > > I assume the refcounting hierarchy, first grp_refs go down,
> > > and when it hits zero it does whatever it needs, posting a
> > > CQE at that point of prior, and then puts the request reference
> > > down.
> > 
> > Yes, that is why the patch doesn't mark CQE_SKIP for leader in
> > io_wq_free_work(), meantime leader->link has to be issued after
> > leader's CQE is posted in case of io-wq.
> 
> The point is that io_wq_free_work() doesn't need to know
> anything about groups and can just continue setting the
> skip cqe flag as before if it's done differently
> 
> > But grp_refs is dropped after io-wq request reference drops to
> > zero, then both io-wq and nor-io-wq code path can be unified
> > wrt. dealing with grp_refs, meantime it needn't to be updated
> > in extra(io-wq) context.
> 
> Let's try to describe how it can work. First, I'm only describing
> the dep mode for simplicity. And for the argument's sake we can say
> that all CQEs are posted via io_submit_flush_completions.
> 
> io_req_complete_post() {
> 	if (flags & GROUP) {
> 		req->io_task_work.func = io_req_task_complete;
> 		io_req_task_work_add(req);
> 		return;
> 	}
> 	...
> }

OK.

io_wq_free_work() still needs to change so that it doesn't deal with
the next link and ignores skip_cqe, because group handling
(CQE posting, link advance) is completely moved into
io_submit_flush_completions().

> 
> You can do it this way, nobody would ever care, and it shouldn't
> affect performance. Otherwise everything down below can probably
> be extended to io_req_complete_post().
> 
> To avoid confusion in terminology, what I call a member below doesn't
> include a leader. IOW, a group leader request is not a member.
> 
> At the init we have:
> grp_refs = nr_members; /* doesn't include the leader */
> 
> Let's also say that the group leader can and always goes
> through io_submit_flush_completions() twice, just how it's
> with your patches.
> 
> 1) The first time we see the leader in io_submit_flush_completions()
> is when it's done with resource preparation. For example, it was
> doing some IO into a buffer, and now is ready to give that buffer
> with data to group members. At this point it should queue up all group
> members. And we also drop 1 grp_ref. There will also be no
> io_issue_sqe() for it anymore.

OK, then it is just the case with dependencies.

> 
> 2) Members are executed and completed, in io_submit_flush_completions()
> they drop 1 grp_leader->grp_ref reference each.
> 
> 3) When all members complete, leader's grp_ref becomes 0. Here
> the leader is queued for io_submit_flush_completions() a second time,
> at which point it drops ublk buffers and such and gets destroyed.
> 
> You can post a CQE in 1) and then set CQE_SKIP. Can also be fitted
> into 3). A pseudo code for when we post it in step 1)

This way should work, but it confuses the application because
the leader is completed before all members:

- the leader usually provides group-wide resources
- a member consumes those resources
- the leader is supposed to be completed after all consumers (members)
are done.

Given this is UAPI, we have to be careful with the CQE posting order.

> 
> io_free_batch_list() {
> 	if (req->flags & GROUP) {
> 		if (req_is_member(req)) {
> 			req->grp_leader->grp_refs--;
> 			if (req->grp_leader->grp_refs == 0) {
> 				req->io_task_work.func = io_req_task_complete;
> 				io_req_task_work_add(req->grp_leader);
> 				// can be done better....
> 			}
> 			goto free_req;
> 		}
> 		WARN_ON_ONCE(!req_is_leader());
> 
> 		if (!(req->flags & SEEN_FIRST_TIME)) {
> 			// already posted it just before coming here
> 			req->flags |= SKIP_CQE;
> 			// we'll see it again when grp_refs hit 0
> 			req->flags |= SEEN_FIRST_TIME;
> 
> 			// Don't free the req, we're leaving it alive for now.
> 			// req->ref/REQ_F_REFCOUNT will be put next time we get here.
> 			return; // or continue
> 		}
> 
> 		clean_up_request_resources(); // calls back into ublk
> 		// and now free the leader
> 	}
> 
> free_req:
> 	// the rest of io_free_batch_list()
> 	if (flags & REQ_F_REFCOUNT) {
> 		req_drop_ref();
> 		....
> 	}
> 	...
> }
> 
> 
> This way
> 1) There are relatively easy request lifetime rules.
> 2) No special ownership/lifetime rules for the group link field.
> 3) You don't need to touch io_req_complete_defer()
> 4) io-wq doesn't know anything about grp_refs and doesn't play tricks
> with SKIP_CQE.

io_wq_free_work() still needs to change so that it doesn't set
SKIP_CQE and doesn't advance the link.

> 5) All handling is in one place, doesn't need multiple checks in common
> hot path. You might need some extra, but at least it's contained.
> 
> 
> > > > +		}
> > > >    	}
> > > >    	return nxt ? &nxt->work : NULL;
> > > >    }
> > > > @@ -1821,6 +1979,8 @@ void io_wq_submit_work(struct io_wq_work *work)
> > > >    		}
> > > >    	}
> > > > +	if (need_queue_group_members(req->flags))
> > > > +		io_queue_group_members(req, true);
> > > >    	do {
> > > >    		ret = io_issue_sqe(req, issue_flags);
> > > >    		if (ret != -EAGAIN)
> > > > @@ -1932,9 +2092,17 @@ static inline void io_queue_sqe(struct io_kiocb *req)
> > > >    	/*
> > > >    	 * We async punt it if the file wasn't marked NOWAIT, or if the file
> > > >    	 * doesn't support non-blocking read/write attempts
> > > > +	 *
> > > > +	 * Request is always freed after returning from io_queue_sqe(), so
> > > > +	 * it is fine to check its flags after it is issued
> > > > +	 *
> > > > +	 * For group leader, members holds leader references, so it is safe
> > > > +	 * to touch the leader after leader is issued
> > > >    	 */
> > > >    	if (unlikely(ret))
> > > >    		io_queue_async(req, ret);
> > > > +	else if (need_queue_group_members(req->flags))
> > > > +		io_queue_group_members(req, false);
> > > 
> > > It absolutely cannot be here. There is no relation between this place
> > > in code and lifetime of the request. It could've been failed or
> > > completed, it can also be flying around in a completely arbitrary
> > > context being executed. We're not introducing weird special lifetime
> > > rules for group links. It complicates the code, and no way it can be
> > > sanely supported.
> > > For example, it's not forbidden for issue_sqe callbacks to queue requests
> > > to io-wq and return 0 (IOU_ISSUE_SKIP_COMPLETE which would be turned
> > > into 0), and then we have two racing io_queue_group_members() calls.
> > 
> > It can by handled by adding io_queue_sqe_group() in which:
> > 
> > - req->grp_link is moved to one local variable, and make every
> >    member's grp_leader point to req
> > 
> > - call io_queue_sqe() for leader
> > 
> > - then call io_queue_group_members() for all members, and make sure
> > not touch leader in io_queue_group_members()
> > 
> > What do you think of this way?
> 
> By the end of the day, io_queue_sqe is just not the right place for
> it. Take a look at the scheme above, I believe it should work

OK, thanks again for the review!


Thanks,
Ming
Pavel Begunkov July 29, 2024, 1:58 p.m. UTC | #6
On 7/25/24 11:33, Ming Lei wrote:
> On Wed, Jul 24, 2024 at 02:41:38PM +0100, Pavel Begunkov wrote:
>> On 7/23/24 15:34, Ming Lei wrote:
...
>>> But grp_refs is dropped after io-wq request reference drops to
>>> zero, then both io-wq and nor-io-wq code path can be unified
>>> wrt. dealing with grp_refs, meantime it needn't to be updated
>>> in extra(io-wq) context.
>>
>> Let's try to describe how it can work. First, I'm only describing
>> the dep mode for simplicity. And for the argument's sake we can say
>> that all CQEs are posted via io_submit_flush_completions.
>>
>> io_req_complete_post() {
>> 	if (flags & GROUP) {
>> 		req->io_task_work.func = io_req_task_complete;
>> 		io_req_task_work_add(req);
>> 		return;
>> 	}
>> 	...
>> }
> 
> OK.
> 
> io_wq_free_work() still need to change to not deal with
> next link & ignoring skip_cqe, because group handling(

No, it doesn't need to know about all that.

> cqe posting, link advance) is completely moved into
> io_submit_flush_completions().

It has never been guaranteed that io_req_complete_post()
will be the one completing the request;
io_submit_flush_completions() can always happen.


struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
{
	...
	if (req_ref_put_and_test(req)) {
		nxt = io_req_find_next(req);
		io_free_req();
	}
}

We queue linked requests only when all refs are dropped, and
the group handling in my snippet is done before we drop the
owner's reference.

IOW, you won't hit io_free_req() in io_wq_free_work() for a
leader unless all members in its group got completed and
the leader already went through the code dropping those shared
ublk buffers.


>> You can do it this way, nobody would ever care, and it shouldn't
>> affect performance. Otherwise everything down below can probably
>> be extended to io_req_complete_post().
>>
>> To avoid confusion in terminology, what I call a member below doesn't
>> include a leader. IOW, a group leader request is not a member.
>>
>> At the init we have:
>> grp_refs = nr_members; /* doesn't include the leader */
>>
>> Let's also say that the group leader can and always goes
>> through io_submit_flush_completions() twice, just how it's
>> with your patches.
>>
>> 1) The first time we see the leader in io_submit_flush_completions()
>> is when it's done with resource preparation. For example, it was
>> doing some IO into a buffer, and now is ready to give that buffer
>> with data to group members. At this point it should queue up all group
>> members. And we also drop 1 grp_ref. There will also be no
>> io_issue_sqe() for it anymore.
> 
> Ok, then it is just the case with dependency.
> 
>>
>> 2) Members are executed and completed, in io_submit_flush_completions()
>> they drop 1 grp_leader->grp_ref reference each.
>>
>> 3) When all members complete, leader's grp_ref becomes 0. Here
>> the leader is queued for io_submit_flush_completions() a second time,
>> at which point it drops ublk buffers and such and gets destroyed.
>>
>> You can post a CQE in 1) and then set CQE_SKIP. Can also be fitted
>> into 3). A pseudo code for when we post it in step 1)
> 
> This way should work, but it confuses application because
> the leader is completed before all members:
> 
> - leader usually provide resources in group wide
> - member consumes this resource
> - leader is supposed to be completed after all consumer(member) are
> done.
> 
> Given it is UAPI, we have to be careful with CQE posting order.

That's exactly what I was saying about the uapi previously:
I don't believe we want to invert the natural CQE order.

Regardless, the order can be inverted like this:

__io_flush_completions() {
	...
	if (req->flags & (SKIP_CQE|GROUP)) {
		if (req->flags & SKIP_CQE)
			continue;
		// post a CQE only when we see it for a 2nd time in io_flush_completion();
		if (is_leader() && !(req->flags & ALREADY_SEEN))
			continue;
	}
	post_cqe();
}


>> io_free_batch_list() {
>> 	if (req->flags & GROUP) {
>> 		if (req_is_member(req)) {
>> 			req->grp_leader->grp_refs--;
>> 			if (req->grp_leader->grp_refs == 0) {
>> 				req->io_task_work.func = io_req_task_complete;
>> 				io_req_task_work_add(req->grp_leader);
>> 				// can be done better....
>> 			}
>> 			goto free_req;
>> 		}
>> 		WARN_ON_ONCE(!req_is_leader());
>>
>> 		if (!(req->flags & SEEN_FIRST_TIME)) {
>> 			// already posted it just before coming here
>> 			req->flags |= SKIP_CQE;
>> 			// we'll see it again when grp_refs hit 0
>> 			req->flags |= SEEN_FIRST_TIME;
>>
>> 			// Don't free the req, we're leaving it alive for now.
>> 			// req->ref/REQ_F_REFCOUNT will be put next time we get here.
>> 			return; // or continue
>> 		}
>>
>> 		clean_up_request_resources(); // calls back into ublk
>> 		// and now free the leader
>> 	}
>>
>> free_req:
>> 	// the rest of io_free_batch_list()
>> 	if (flags & REQ_F_REFCOUNT) {
>> 		req_drop_ref();
>> 		....
>> 	}
>> 	...
>> }
Ming Lei Aug. 6, 2024, 8:38 a.m. UTC | #7
On Mon, Jul 29, 2024 at 02:58:58PM +0100, Pavel Begunkov wrote:
> On 7/25/24 11:33, Ming Lei wrote:
> > On Wed, Jul 24, 2024 at 02:41:38PM +0100, Pavel Begunkov wrote:
> > > On 7/23/24 15:34, Ming Lei wrote:
> ...
> > > > But grp_refs is dropped after io-wq request reference drops to
> > > > zero, then both io-wq and nor-io-wq code path can be unified
> > > > wrt. dealing with grp_refs, meantime it needn't to be updated
> > > > in extra(io-wq) context.
> > > 
> > > Let's try to describe how it can work. First, I'm only describing
> > > the dep mode for simplicity. And for the argument's sake we can say
> > > that all CQEs are posted via io_submit_flush_completions.
> > > 
> > > io_req_complete_post() {
> > > 	if (flags & GROUP) {
> > > 		req->io_task_work.func = io_req_task_complete;
> > > 		io_req_task_work_add(req);
> > > 		return;
> > > 	}
> > > 	...
> > > }
> > 
> > OK.
> > 
> > io_wq_free_work() still need to change to not deal with
> > next link & ignoring skip_cqe, because group handling(
> 
> No, it doesn't need to know about all that.
> 
> > cqe posting, link advance) is completely moved into
> > io_submit_flush_completions().
> 
> It has never been guaranteed that io_req_complete_post()
> will be the one completing the request,
> io_submit_flush_completions() can always happen.
> 
> 
> struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> {
> 	...
> 	if (req_ref_put_and_test(req)) {
> 		nxt = io_req_find_next(req);
> 		io_free_req();
> 	}
> }
> 
> We queue linked requests only when all refs are dropped, and
> the group handling in my snippet is done before we drop the
> owner's reference.
> 
> IOW, you won't hit io_free_req() in io_wq_free_work() for a
> leader unless all members in its group got completed and
> the leader already went through the code dropping those shared
> ublk buffers.

If io_free_req() isn't called for the leader, the leader won't be added
to ->compl_reqs, and its completion has to be generated when all members
are completed in __io_submit_flush_completions().

For the !io_wq case, we can align with this approach by not completing
the leader in io_req_complete_defer().

The above implementation looks simpler and more readable.

> 
> 
> > > You can do it this way, nobody would ever care, and it shouldn't
> > > affect performance. Otherwise everything down below can probably
> > > be extended to io_req_complete_post().
> > > 
> > > To avoid confusion in terminology, what I call a member below doesn't
> > > include a leader. IOW, a group leader request is not a member.
> > > 
> > > At the init we have:
> > > grp_refs = nr_members; /* doesn't include the leader */
> > > 
> > > Let's also say that the group leader can and always goes
> > > through io_submit_flush_completions() twice, just how it's
> > > with your patches.
> > > 
> > > 1) The first time we see the leader in io_submit_flush_completions()
> > > is when it's done with resource preparation. For example, it was
> > > doing some IO into a buffer, and now is ready to give that buffer
> > > with data to group members. At this point it should queue up all group
> > > members. And we also drop 1 grp_ref. There will also be no
> > > io_issue_sqe() for it anymore.
> > 
> > Ok, then it is just the case with dependency.
> > 
> > > 
> > > 2) Members are executed and completed, in io_submit_flush_completions()
> > > they drop 1 grp_leader->grp_ref reference each.
> > > 
> > > 3) When all members complete, leader's grp_ref becomes 0. Here
> > > the leader is queued for io_submit_flush_completions() a second time,
> > > at which point it drops ublk buffers and such and gets destroyed.
> > > 
> > > You can post a CQE in 1) and then set CQE_SKIP. Can also be fitted
> > > into 3). A pseudo code for when we post it in step 1)
> > 
> > This way should work, but it confuses application because
> > the leader is completed before all members:
> > 
> > - leader usually provide resources in group wide
> > - member consumes this resource
> > - leader is supposed to be completed after all consumer(member) are
> > done.
> > 
> > Given it is UAPI, we have to be careful with CQE posting order.
> 
> That's exactly what I was telling about the uapi previously,
> I don't believe we want to inverse the natural CQE order.
> 
> Regardless, the order can be inversed like this:
> 
> __io_flush_completions() {
> 	...
> 	if (req->flags & (SKIP_CQE|GROUP)) {
> 		if (req->flags & SKIP_CQE)
> 			continue;
> 		// post a CQE only when we see it for a 2nd time in io_flush_completion();
> 		if (is_leader() && !(req->flags & ALREADY_SEEN))
> 			continue;

I am afraid ALREADY_SEEN can't work, since the leader can only be added
to ->compl_reqs once.


Thanks, 
Ming
Ming Lei Aug. 6, 2024, 2:26 p.m. UTC | #8
On Tue, Aug 06, 2024 at 04:38:08PM +0800, Ming Lei wrote:
> On Mon, Jul 29, 2024 at 02:58:58PM +0100, Pavel Begunkov wrote:
> > On 7/25/24 11:33, Ming Lei wrote:
> > > On Wed, Jul 24, 2024 at 02:41:38PM +0100, Pavel Begunkov wrote:
> > > > On 7/23/24 15:34, Ming Lei wrote:
> > ...
> > > > > But grp_refs is dropped after io-wq request reference drops to
> > > > > zero, then both io-wq and nor-io-wq code path can be unified
> > > > > wrt. dealing with grp_refs, meantime it needn't to be updated
> > > > > in extra(io-wq) context.
> > > > 
> > > > Let's try to describe how it can work. First, I'm only describing
> > > > the dep mode for simplicity. And for the argument's sake we can say
> > > > that all CQEs are posted via io_submit_flush_completions.
> > > > 
> > > > io_req_complete_post() {
> > > > 	if (flags & GROUP) {
> > > > 		req->io_task_work.func = io_req_task_complete;
> > > > 		io_req_task_work_add(req);
> > > > 		return;
> > > > 	}
> > > > 	...
> > > > }
> > > 
> > > OK.
> > > 
> > > io_wq_free_work() still need to change to not deal with
> > > next link & ignoring skip_cqe, because group handling(
> > 
> > No, it doesn't need to know about all that.
> > 
> > > cqe posting, link advance) is completely moved into
> > > io_submit_flush_completions().
> > 
> > It has never been guaranteed that io_req_complete_post()
> > will be the one completing the request,
> > io_submit_flush_completions() can always happen.
> > 
> > 
> > struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> > {
> > 	...
> > 	if (req_ref_put_and_test(req)) {
> > 		nxt = io_req_find_next(req);
> > 		io_free_req();
> > 	}
> > }
> > 
> > We queue linked requests only when all refs are dropped, and
> > the group handling in my snippet is done before we drop the
> > owner's reference.
> > 
> > IOW, you won't hit io_free_req() in io_wq_free_work() for a
> > leader unless all members in its group got completed and
> > the leader already went through the code dropping those shared
> > ublk buffers.
> 
> If io_free_req() won't be called for leader, leader won't be added
> to ->compl_reqs, and it has to be generated when all members are
> completed in __io_submit_flush_completions().
> 
> For !io_wq, we can align to this way by not completing leader in
> io_req_complete_defer().
> 
> The above implementation looks simpler, and more readable.

Thinking about this issue further, it looks like the above is still not
doable:

1) To avoid hitting io_free_req(), an extra req->refs has to be grabbed,
but then the leader's completion may not be notified.

2) 1) may be avoided by holding one reference on the leader for each
member, calling req_ref_put_and_test(leader) when the leader or a member
is completed, and posting the leader's CQE when its refcount drops to
zero. But there are other issues:

	- another req_ref_inc_not_zero() or req_ref_get() may cause the
	leader's CQE post to be missed

	- the req_ref_put_and_test() in io_free_batch_list() can be called
	on the group leader unexpectedly.

Both 1) and 2) need to touch io_req_complete_defer() for completing the
group leader.



Thanks,
Ming
Ming Lei Aug. 7, 2024, 3:26 a.m. UTC | #9
On Tue, Aug 06, 2024 at 10:26:50PM +0800, Ming Lei wrote:
> On Tue, Aug 06, 2024 at 04:38:08PM +0800, Ming Lei wrote:
> > On Mon, Jul 29, 2024 at 02:58:58PM +0100, Pavel Begunkov wrote:
> > > On 7/25/24 11:33, Ming Lei wrote:
> > > > On Wed, Jul 24, 2024 at 02:41:38PM +0100, Pavel Begunkov wrote:
> > > > > On 7/23/24 15:34, Ming Lei wrote:
> > > ...
> > > > > > But grp_refs is dropped after io-wq request reference drops to
> > > > > > zero, then both io-wq and nor-io-wq code path can be unified
> > > > > > wrt. dealing with grp_refs, meantime it needn't to be updated
> > > > > > in extra(io-wq) context.
> > > > > 
> > > > > Let's try to describe how it can work. First, I'm only describing
> > > > > the dep mode for simplicity. And for the argument's sake we can say
> > > > > that all CQEs are posted via io_submit_flush_completions.
> > > > > 
> > > > > io_req_complete_post() {
> > > > > 	if (flags & GROUP) {
> > > > > 		req->io_task_work.func = io_req_task_complete;
> > > > > 		io_req_task_work_add(req);
> > > > > 		return;
> > > > > 	}
> > > > > 	...
> > > > > }
> > > > 
> > > > OK.
> > > > 
> > > > io_wq_free_work() still need to change to not deal with
> > > > next link & ignoring skip_cqe, because group handling(
> > > 
> > > No, it doesn't need to know about all that.
> > > 
> > > > cqe posting, link advance) is completely moved into
> > > > io_submit_flush_completions().
> > > 
> > > It has never been guaranteed that io_req_complete_post()
> > > will be the one completing the request,
> > > io_submit_flush_completions() can always happen.
> > > 
> > > 
> > > struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
> > > {
> > > 	...
> > > 	if (req_ref_put_and_test(req)) {
> > > 		nxt = io_req_find_next(req);
> > > 		io_free_req();
> > > 	}
> > > }
> > > 
> > > We queue linked requests only when all refs are dropped, and
> > > the group handling in my snippet is done before we drop the
> > > owner's reference.
> > > 
> > > IOW, you won't hit io_free_req() in io_wq_free_work() for a
> > > leader unless all members in its group got completed and
> > > the leader already went through the code dropping those shared
> > > ublk buffers.
> > 
> > If io_free_req() won't be called for leader, leader won't be added
> > to ->compl_reqs, and it has to be generated when all members are
> > completed in __io_submit_flush_completions().
> > 
> > For !io_wq, we can align to this way by not completing leader in
> > io_req_complete_defer().
> > 
> > The above implementation looks simpler, and more readable.
> 
> Thinking of this issue further, looks the above is still not doable:
> 
> 1) for avoiding to hit io_free_req(), extra req->refs has to be grabbed,
> then the leader's completion may not be notified.
> 
> 2) 1) may be avoided by holding one leader's refcount for each member,
> and call req_ref_put_and_test(leader) when leader or member is
> completed, and post leader's CQE when leader's refs drops to zero.
> But there are other issues:
> 
> 	- other req_ref_inc_not_zero() or req_ref_get() may cause leader's
> 	CQE post missed
> 
> 	- the req_ref_put_and_test() in io_free_batch_list() can be called
> 	on group leader unexpectedly.
> 
> both 1) and 2) need to touch io_req_complete_defer() for completing group
> leader

Oops, it looks like I missed the point about completing the request
directly from io_req_complete_post() in the following link:

https://lore.kernel.org/linux-block/0fa0c9b9-cfb9-4710-85d0-2f6b4398603c@gmail.com/

Please ignore yesterday's replies; now I think I have the whole picture
of your point.

Thanks,
Ming
diff mbox series

Patch

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index ede42dce1506..b5cc3dee8fa2 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -202,6 +202,8 @@  struct io_submit_state {
 	/* batch completion logic */
 	struct io_wq_work_list	compl_reqs;
 	struct io_submit_link	link;
+	/* points to current group */
+	struct io_submit_link	group;
 
 	bool			plug_started;
 	bool			need_plug;
@@ -436,6 +438,7 @@  enum {
 	REQ_F_FORCE_ASYNC_BIT	= IOSQE_ASYNC_BIT,
 	REQ_F_BUFFER_SELECT_BIT	= IOSQE_BUFFER_SELECT_BIT,
 	REQ_F_CQE_SKIP_BIT	= IOSQE_CQE_SKIP_SUCCESS_BIT,
+	REQ_F_SQE_GROUP_BIT	= IOSQE_SQE_GROUP_BIT,
 
 	/* first byte is taken by user flags, shift it to not overlap */
 	REQ_F_FAIL_BIT		= 8,
@@ -467,6 +470,7 @@  enum {
 	REQ_F_BL_EMPTY_BIT,
 	REQ_F_BL_NO_RECYCLE_BIT,
 	REQ_F_BUFFERS_COMMIT_BIT,
+	REQ_F_SQE_GROUP_LEADER_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
@@ -490,6 +494,8 @@  enum {
 	REQ_F_BUFFER_SELECT	= IO_REQ_FLAG(REQ_F_BUFFER_SELECT_BIT),
 	/* IOSQE_CQE_SKIP_SUCCESS */
 	REQ_F_CQE_SKIP		= IO_REQ_FLAG(REQ_F_CQE_SKIP_BIT),
+	/* IOSQE_SQE_GROUP */
+	REQ_F_SQE_GROUP		= IO_REQ_FLAG(REQ_F_SQE_GROUP_BIT),
 
 	/* fail rest of links */
 	REQ_F_FAIL		= IO_REQ_FLAG(REQ_F_FAIL_BIT),
@@ -547,6 +553,8 @@  enum {
 	REQ_F_BL_NO_RECYCLE	= IO_REQ_FLAG(REQ_F_BL_NO_RECYCLE_BIT),
 	/* buffer ring head needs incrementing on put */
 	REQ_F_BUFFERS_COMMIT	= IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
+	/* sqe group lead */
+	REQ_F_SQE_GROUP_LEADER	= IO_REQ_FLAG(REQ_F_SQE_GROUP_LEADER_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -652,6 +660,8 @@  struct io_kiocb {
 	void				*async_data;
 	/* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */
 	atomic_t			poll_refs;
+	/* reference for group leader requests */
+	int				grp_refs;
 	struct io_kiocb			*link;
 	/* custom credentials, valid IFF REQ_F_CREDS is set */
 	const struct cred		*creds;
@@ -661,6 +671,14 @@  struct io_kiocb {
 		u64			extra1;
 		u64			extra2;
 	} big_cqe;
+
+	union {
+		/* links all group members for leader */
+		struct io_kiocb			*grp_link;
+
+		/* points to group leader for member */
+		struct io_kiocb			*grp_leader;
+	};
 };
 
 struct io_overflow_cqe {
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 2aaf7ee256ac..e6d321b3add7 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -124,6 +124,7 @@  enum io_uring_sqe_flags_bit {
 	IOSQE_ASYNC_BIT,
 	IOSQE_BUFFER_SELECT_BIT,
 	IOSQE_CQE_SKIP_SUCCESS_BIT,
+	IOSQE_SQE_GROUP_BIT,
 };
 
 /*
@@ -143,6 +144,8 @@  enum io_uring_sqe_flags_bit {
 #define IOSQE_BUFFER_SELECT	(1U << IOSQE_BUFFER_SELECT_BIT)
 /* don't post CQE if request succeeded */
 #define IOSQE_CQE_SKIP_SUCCESS	(1U << IOSQE_CQE_SKIP_SUCCESS_BIT)
+/* defines sqe group */
+#define IOSQE_SQE_GROUP		(1U << IOSQE_SQE_GROUP_BIT)
 
 /*
  * io_uring_setup() flags
@@ -542,6 +545,7 @@  struct io_uring_params {
 #define IORING_FEAT_LINKED_FILE		(1U << 12)
 #define IORING_FEAT_REG_REG_RING	(1U << 13)
 #define IORING_FEAT_RECVSEND_BUNDLE	(1U << 14)
+#define IORING_FEAT_SQE_GROUP		(1U << 15)
 
 /*
  * io_uring_register(2) opcodes and arguments
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 7597344a6440..b5415f0774e5 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -111,14 +111,15 @@ 
 			  IOSQE_IO_HARDLINK | IOSQE_ASYNC)
 
 #define SQE_VALID_FLAGS	(SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
-			IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
+			IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS | \
+			IOSQE_SQE_GROUP)
 
 #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
 				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
 				REQ_F_ASYNC_DATA)
 
 #define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
-				 IO_REQ_CLEAN_FLAGS)
+				 REQ_F_SQE_GROUP | IO_REQ_CLEAN_FLAGS)
 
 #define IO_TCTX_REFS_CACHE_NR	(1U << 10)
 
@@ -421,6 +422,10 @@  static inline void io_req_track_inflight(struct io_kiocb *req)
 	if (!(req->flags & REQ_F_INFLIGHT)) {
 		req->flags |= REQ_F_INFLIGHT;
 		atomic_inc(&req->task->io_uring->inflight_tracked);
+
+		/* make members' REQ_F_INFLIGHT discoverable via leader's */
+		if (req_is_group_member(req))
+			io_req_track_inflight(req->grp_leader);
 	}
 }
 
@@ -875,6 +880,117 @@  static __always_inline void io_req_commit_cqe(struct io_ring_ctx *ctx,
 	}
 }
 
+static inline bool need_queue_group_members(io_req_flags_t flags)
+{
+	return flags & REQ_F_SQE_GROUP_LEADER;
+}
+
+/* Can only be called after this request is issued */
+static inline struct io_kiocb *get_group_leader(struct io_kiocb *req)
+{
+	if (req->flags & REQ_F_SQE_GROUP) {
+		if (req_is_group_leader(req))
+			return req;
+		return req->grp_leader;
+	}
+	return NULL;
+}
+
+void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes)
+{
+	struct io_kiocb *member = req->grp_link;
+
+	while (member) {
+		struct io_kiocb *next = member->grp_link;
+
+		if (ignore_cqes)
+			member->flags |= REQ_F_CQE_SKIP;
+		if (!(member->flags & REQ_F_FAIL)) {
+			req_set_fail(member);
+			io_req_set_res(member, -ECANCELED, 0);
+		}
+		member = next;
+	}
+}
+
+void io_queue_group_members(struct io_kiocb *req, bool async)
+{
+	struct io_kiocb *member = req->grp_link;
+
+	if (!member)
+		return;
+
+	req->grp_link = NULL;
+	while (member) {
+		struct io_kiocb *next = member->grp_link;
+
+		member->grp_leader = req;
+		if (async)
+			member->flags |= REQ_F_FORCE_ASYNC;
+
+		if (unlikely(member->flags & REQ_F_FAIL)) {
+			io_req_task_queue_fail(member, member->cqe.res);
+		} else if (member->flags & REQ_F_FORCE_ASYNC) {
+			io_req_task_queue(member);
+		} else {
+			io_queue_sqe(member);
+		}
+		member = next;
+	}
+}
+
+static inline bool __io_complete_group_req(struct io_kiocb *req,
+			     struct io_kiocb *lead)
+{
+	WARN_ON_ONCE(!(req->flags & REQ_F_SQE_GROUP));
+
+	if (WARN_ON_ONCE(lead->grp_refs <= 0))
+		return false;
+
+	/*
+	 * Set linked leader as failed if any member is failed, so
+	 * the remaining link chain can be terminated
+	 */
+	if (unlikely((req->flags & REQ_F_FAIL) &&
+		     ((lead->flags & IO_REQ_LINK_FLAGS) && lead->link)))
+		req_set_fail(lead);
+	return !--lead->grp_refs;
+}
+
+/* Complete group request and collect completed leader for freeing */
+static void io_complete_group_req(struct io_kiocb *req,
+		struct io_wq_work_list *grp_list)
+{
+	struct io_kiocb *lead = get_group_leader(req);
+
+	if (__io_complete_group_req(req, lead)) {
+		req->flags &= ~REQ_F_SQE_GROUP;
+		lead->flags &= ~REQ_F_SQE_GROUP_LEADER;
+
+		if (!(lead->flags & REQ_F_CQE_SKIP))
+			io_req_commit_cqe(lead->ctx, lead);
+
+		if (req != lead) {
+			/*
+			 * Add the leader to the free list if it isn't there
+			 * already; otherwise clear the group flag so it is
+			 * freed in the current batch
+			 */
+			if (!(lead->flags & REQ_F_SQE_GROUP))
+				wq_list_add_tail(&lead->comp_list, grp_list);
+			else
+				lead->flags &= ~REQ_F_SQE_GROUP;
+		}
+	} else if (req != lead) {
+		req->flags &= ~REQ_F_SQE_GROUP;
+	} else {
+		/*
+		 * Leader's group flag clearing is delayed until it is
+		 * removed from free list
+		 */
+	}
+}
+
 static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -897,7 +1013,7 @@  static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
 	}
 
 	io_cq_lock(ctx);
-	if (!(req->flags & REQ_F_CQE_SKIP)) {
+	if (!(req->flags & REQ_F_CQE_SKIP) && !req_is_group_leader(req)) {
 		if (!io_fill_cqe_req(ctx, req))
 			io_req_cqe_overflow(req);
 	}
@@ -974,16 +1090,22 @@  __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
 	return true;
 }
 
-__cold void io_free_req(struct io_kiocb *req)
+static void __io_free_req(struct io_kiocb *req, bool cqe_skip)
 {
 	/* refs were already put, restore them for io_req_task_complete() */
 	req->flags &= ~REQ_F_REFCOUNT;
 	/* we only want to free it, don't post CQEs */
-	req->flags |= REQ_F_CQE_SKIP;
+	if (cqe_skip)
+		req->flags |= REQ_F_CQE_SKIP;
 	req->io_task_work.func = io_req_task_complete;
 	io_req_task_work_add(req);
 }
 
+__cold void io_free_req(struct io_kiocb *req)
+{
+	__io_free_req(req, true);
+}
+
 static void __io_req_find_next_prep(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -1388,6 +1510,17 @@  static void io_free_batch_list(struct io_ring_ctx *ctx,
 						    comp_list);
 
 		if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
+			/*
+			 * A group leader may be processed here twice; don't
+			 * free it while the group flag is still set, i.e.
+			 * when some members haven't completed yet
+			 */
+			if (req->flags & REQ_F_SQE_GROUP) {
+				node = req->comp_list.next;
+				req->flags &= ~REQ_F_SQE_GROUP;
+				continue;
+			}
+
 			if (req->flags & REQ_F_REFCOUNT) {
 				node = req->comp_list.next;
 				if (!req_ref_put_and_test(req))
@@ -1420,6 +1553,7 @@  void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 	__must_hold(&ctx->uring_lock)
 {
 	struct io_submit_state *state = &ctx->submit_state;
+	struct io_wq_work_list grp_list = {NULL};
 	struct io_wq_work_node *node;
 
 	__io_cq_lock(ctx);
@@ -1427,11 +1561,22 @@  void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 					    comp_list);
 
-		if (!(req->flags & REQ_F_CQE_SKIP))
+		/*
+		 * For group leader, cqe has to be committed after all
+		 * members are committed, when the group leader flag is
+		 * cleared
+		 */
+		if (!(req->flags & REQ_F_CQE_SKIP) &&
+				likely(!req_is_group_leader(req)))
 			io_req_commit_cqe(ctx, req);
+		if (req->flags & REQ_F_SQE_GROUP)
+			io_complete_group_req(req, &grp_list);
 	}
 	__io_cq_unlock_post(ctx);
 
+	if (!wq_list_empty(&grp_list))
+		__wq_list_splice(&grp_list, state->compl_reqs.first);
+
 	if (!wq_list_empty(&state->compl_reqs)) {
 		io_free_batch_list(ctx, state->compl_reqs.first);
 		INIT_WQ_LIST(&state->compl_reqs);
@@ -1638,8 +1783,12 @@  static u32 io_get_sequence(struct io_kiocb *req)
 	struct io_kiocb *cur;
 
 	/* need original cached_sq_head, but it was increased for each req */
-	io_for_each_link(cur, req)
-		seq--;
+	io_for_each_link(cur, req) {
+		if (req_is_group_leader(cur))
+			seq -= cur->grp_refs;
+		else
+			seq--;
+	}
 	return seq;
 }
 
@@ -1754,9 +1903,18 @@  struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
 	struct io_kiocb *nxt = NULL;
 
 	if (req_ref_put_and_test(req)) {
-		if (req->flags & IO_REQ_LINK_FLAGS)
-			nxt = io_req_find_next(req);
-		io_free_req(req);
+		/*
+		 * CQEs have been posted in io_req_complete_post() except
+		 * for group leader, and we can't advance the link for
+		 * group leader until its CQE is posted.
+		 */
+		if (!req_is_group_leader(req)) {
+			if (req->flags & IO_REQ_LINK_FLAGS)
+				nxt = io_req_find_next(req);
+			io_free_req(req);
+		} else {
+			__io_free_req(req, false);
+		}
 	}
 	return nxt ? &nxt->work : NULL;
 }
@@ -1821,6 +1979,8 @@  void io_wq_submit_work(struct io_wq_work *work)
 		}
 	}
 
+	if (need_queue_group_members(req->flags))
+		io_queue_group_members(req, true);
 	do {
 		ret = io_issue_sqe(req, issue_flags);
 		if (ret != -EAGAIN)
@@ -1932,9 +2092,17 @@  static inline void io_queue_sqe(struct io_kiocb *req)
 	/*
 	 * We async punt it if the file wasn't marked NOWAIT, or if the file
 	 * doesn't support non-blocking read/write attempts
+	 *
+	 * Request is always freed after returning from io_queue_sqe(), so
+	 * it is fine to check its flags after it is issued
+	 *
+	 * For a group leader, members hold references to the leader, so it
+	 * is safe to touch the leader after the leader is issued
 	 */
 	if (unlikely(ret))
 		io_queue_async(req, ret);
+	else if (need_queue_group_members(req->flags))
+		io_queue_group_members(req, false);
 }
 
 static void io_queue_sqe_fallback(struct io_kiocb *req)
@@ -2101,6 +2269,56 @@  static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	return def->prep(req, sqe);
 }
 
+static struct io_kiocb *io_group_sqe(struct io_submit_link *group,
+				     struct io_kiocb *req)
+{
+	/*
+	 * A group chain is similar to a link chain: it starts with the 1st
+	 * sqe with REQ_F_SQE_GROUP and ends with the 1st sqe without it
+	 */
+	if (group->head) {
+		struct io_kiocb *lead = group->head;
+
+		/* members can't be in link chain, can't be drained */
+		req->flags &= ~(IO_REQ_LINK_FLAGS | REQ_F_IO_DRAIN);
+		lead->grp_refs += 1;
+		group->last->grp_link = req;
+		group->last = req;
+
+		if (req->flags & REQ_F_SQE_GROUP)
+			return NULL;
+
+		req->grp_link = NULL;
+		req->flags |= REQ_F_SQE_GROUP;
+		group->head = NULL;
+		return lead;
+	} else if (req->flags & REQ_F_SQE_GROUP) {
+		group->head = req;
+		group->last = req;
+		req->grp_refs = 1;
+		req->flags |= REQ_F_SQE_GROUP_LEADER;
+		return NULL;
+	} else {
+		return req;
+	}
+}
+
+static __cold struct io_kiocb *io_submit_fail_group(
+		struct io_submit_link *link, struct io_kiocb *req)
+{
+	struct io_kiocb *lead = link->head;
+
+	/*
+	 * Instead of failing eagerly, continue assembling the group link
+	 * if applicable and mark the leader with REQ_F_FAIL. The group
+	 * flushing code should find the flag and handle the rest
+	 */
+	if (lead && (lead->flags & IO_REQ_LINK_FLAGS) && !(lead->flags & REQ_F_FAIL))
+		req_fail_link_node(lead, -ECANCELED);
+
+	return io_group_sqe(link, req);
+}
+
 static __cold int io_submit_fail_link(struct io_submit_link *link,
 				      struct io_kiocb *req, int ret)
 {
@@ -2139,11 +2357,18 @@  static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct io_submit_link *link = &ctx->submit_state.link;
+	struct io_submit_link *group = &ctx->submit_state.group;
 
 	trace_io_uring_req_failed(sqe, req, ret);
 
 	req_fail_link_node(req, ret);
 
+	if (group->head || (req->flags & REQ_F_SQE_GROUP)) {
+		req = io_submit_fail_group(group, req);
+		if (!req)
+			return 0;
+	}
+
 	/* cover both linked and non-linked request */
 	return io_submit_fail_link(link, req, ret);
 }
@@ -2187,11 +2412,28 @@  static struct io_kiocb *io_link_sqe(struct io_submit_link *link,
 	return req;
 }
 
+static inline bool io_group_assembling(const struct io_submit_state *state,
+				       const struct io_kiocb *req)
+{
+	if (state->group.head || req->flags & REQ_F_SQE_GROUP)
+		return true;
+	return false;
+}
+
+static inline bool io_link_assembling(const struct io_submit_state *state,
+				      const struct io_kiocb *req)
+{
+	if (state->link.head || (req->flags & (IO_REQ_LINK_FLAGS |
+				 REQ_F_FORCE_ASYNC | REQ_F_FAIL)))
+		return true;
+	return false;
+}
+
 static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 			 const struct io_uring_sqe *sqe)
 	__must_hold(&ctx->uring_lock)
 {
-	struct io_submit_link *link = &ctx->submit_state.link;
+	struct io_submit_state *state = &ctx->submit_state;
 	int ret;
 
 	ret = io_init_req(ctx, req, sqe);
@@ -2200,11 +2442,18 @@  static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
 
 	trace_io_uring_submit_req(req);
 
-	if (unlikely(link->head || (req->flags & (IO_REQ_LINK_FLAGS |
-				    REQ_F_FORCE_ASYNC | REQ_F_FAIL)))) {
-		req = io_link_sqe(link, req);
-		if (!req)
-			return 0;
+	if (unlikely(io_link_assembling(state, req) ||
+		     io_group_assembling(state, req))) {
+		if (io_group_assembling(state, req)) {
+			req = io_group_sqe(&state->group, req);
+			if (!req)
+				return 0;
+		}
+		if (io_link_assembling(state, req)) {
+			req = io_link_sqe(&state->link, req);
+			if (!req)
+				return 0;
+		}
 	}
 	io_queue_sqe(req);
 	return 0;
@@ -2217,8 +2466,22 @@  static void io_submit_state_end(struct io_ring_ctx *ctx)
 {
 	struct io_submit_state *state = &ctx->submit_state;
 
-	if (unlikely(state->link.head))
-		io_queue_sqe_fallback(state->link.head);
+	if (unlikely(state->group.head || state->link.head)) {
+		/* group left open: the last submitted SQE still had REQ_F_SQE_GROUP set */
+		if (state->group.head) {
+			struct io_kiocb *lead = state->group.head;
+
+			state->group.last->grp_link = NULL;
+			if (lead->flags & IO_REQ_LINK_FLAGS)
+				io_link_sqe(&state->link, lead);
+			else
+				io_queue_sqe_fallback(lead);
+		}
+
+		if (unlikely(state->link.head))
+			io_queue_sqe_fallback(state->link.head);
+	}
+
 	/* flush only after queuing links as they can generate completions */
 	io_submit_flush_completions(ctx);
 	if (state->plug_started)
@@ -2236,6 +2499,7 @@  static void io_submit_state_start(struct io_submit_state *state,
 	state->submit_nr = max_ios;
 	/* set only head, no need to init link_last in advance */
 	state->link.head = NULL;
+	state->group.head = NULL;
 }
 
 static void io_commit_sqring(struct io_ring_ctx *ctx)
@@ -3559,7 +3823,7 @@  static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
 			IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
 			IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
 			IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING |
-			IORING_FEAT_RECVSEND_BUNDLE;
+			IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_SQE_GROUP;
 
 	if (copy_to_user(params, p, sizeof(*p))) {
 		ret = -EFAULT;
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index e1ce908f0679..8cc347959f7e 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -68,6 +68,8 @@  bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags
 void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
 bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
+void io_queue_group_members(struct io_kiocb *req, bool async);
+void io_cancel_group_members(struct io_kiocb *req, bool ignore_cqes);
 
 struct file *io_file_get_normal(struct io_kiocb *req, int fd);
 struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
@@ -339,6 +341,16 @@  static inline void io_tw_lock(struct io_ring_ctx *ctx, struct io_tw_state *ts)
 	lockdep_assert_held(&ctx->uring_lock);
 }
 
+static inline bool req_is_group_leader(struct io_kiocb *req)
+{
+	return req->flags & REQ_F_SQE_GROUP_LEADER;
+}
+
+static inline bool req_is_group_member(struct io_kiocb *req)
+{
+	return !req_is_group_leader(req) && (req->flags & REQ_F_SQE_GROUP);
+}
+
 /*
  * Don't complete immediately but use deferred completion infrastructure.
  * Protected by ->uring_lock and can only be used either with
@@ -352,6 +364,10 @@  static inline void io_req_complete_defer(struct io_kiocb *req)
 	lockdep_assert_held(&req->ctx->uring_lock);
 
 	wq_list_add_tail(&req->comp_list, &state->compl_reqs);
+
+	/* members may not have been issued when the leader completes */
+	if (unlikely(req_is_group_leader(req) && req->grp_link))
+		io_queue_group_members(req, false);
 }
 
 static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx)
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 1c9bf07499b1..4e5eaf4054c3 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -168,6 +168,8 @@  static void io_fail_links(struct io_kiocb *req)
 			link->flags |= REQ_F_CQE_SKIP;
 		else
 			link->flags &= ~REQ_F_CQE_SKIP;
+		if (req_is_group_leader(link))
+			io_cancel_group_members(link, ignore_cqes);
 		trace_io_uring_fail_link(req, link);
 		link = link->link;
 	}