
[V3,2/3] blk-mq: grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter

Message ID 20210427151058.2833168-3-ming.lei@redhat.com (mailing list archive)
State New, archived
Series blk-mq: fix request UAF related with iterating over tagset requests

Commit Message

Ming Lei April 27, 2021, 3:10 p.m. UTC
Grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter(); this
prevents the request from being re-used while ->fn is running. The
approach is the same as the one used when handling timeouts.

Fix request UAF (use-after-free) related to completion races or queue releasing:

- If a request is referenced before rq->q is frozen, then the queue won't be
  frozen before the request reference is released during iteration.

- If a request is referenced after rq->q is frozen, refcount_inc_not_zero()
  returns false and we skip that request during iteration.

However, one request UAF is still not covered: refcount_inc_not_zero() may
read an already-freed request; that case is handled in the next patch.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-tag.c | 22 ++++++++++++++++------
 block/blk-mq.c     | 14 +++++++++-----
 block/blk-mq.h     |  1 +
 3 files changed, 26 insertions(+), 11 deletions(-)
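
In short, the change to bt_tags_iter() boils down to the following pattern for
the non-static-rqs path (a condensed sketch of the diff below, not a drop-in
replacement):

	/* Hold a reference across the ->fn callback. */
	rq = tags->rqs[bitnr];
	if (!rq || !refcount_inc_not_zero(&rq->ref))
		return true;		/* freed or being freed: skip it */

	ret = iter_data->fn(rq, iter_data->data, reserved);

	blk_mq_put_rq_ref(rq);		/* may free the request */
	return ret;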

Comments

Bart Van Assche April 27, 2021, 8:17 p.m. UTC | #1
On 4/27/21 8:10 AM, Ming Lei wrote:
> +void blk_mq_put_rq_ref(struct request *rq)
> +{
> +	if (is_flush_rq(rq, rq->mq_hctx))
> +		rq->end_io(rq, 0);
> +	else if (refcount_dec_and_test(&rq->ref))
> +		__blk_mq_free_request(rq);
> +}

The above function needs more work. blk_mq_put_rq_ref() may be called 
from multiple CPUs concurrently and hence must handle concurrent calls 
safely. The flush .end_io callbacks have not been designed to handle 
concurrent calls.

Bart.
Ming Lei April 28, 2021, 12:07 a.m. UTC | #2
On Tue, Apr 27, 2021 at 01:17:06PM -0700, Bart Van Assche wrote:
> On 4/27/21 8:10 AM, Ming Lei wrote:
> > +void blk_mq_put_rq_ref(struct request *rq)
> > +{
> > +	if (is_flush_rq(rq, rq->mq_hctx))
> > +		rq->end_io(rq, 0);
> > +	else if (refcount_dec_and_test(&rq->ref))
> > +		__blk_mq_free_request(rq);
> > +}
> 
> The above function needs more work. blk_mq_put_rq_ref() may be called from
> multiple CPUs concurrently and hence must handle concurrent calls safely.
> The flush .end_io callbacks have not been designed to handle concurrent
> calls.

static void flush_end_io(struct request *flush_rq, blk_status_t error)
{
        struct request_queue *q = flush_rq->q;
        struct list_head *running;
        struct request *rq, *n;
        unsigned long flags = 0;
        struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);

        /* release the tag's ownership to the req cloned from */
        spin_lock_irqsave(&fq->mq_flush_lock, flags);

        if (!refcount_dec_and_test(&flush_rq->ref)) {
                fq->rq_status = error;
                spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
                return;
        }
		...
		spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
}

Both the spin lock and refcount_dec_and_test() are called at the beginning
of flush_end_io(), so it is safe with respect to concurrent calls.

Otherwise, this would simply be an issue between normal completion and
timeout, since the pattern in this patch is the same as the timeout path.

Or am I missing something?


Thanks,
Ming
Bart Van Assche April 28, 2021, 1:37 a.m. UTC | #3
On 4/27/21 5:07 PM, Ming Lei wrote:
> On Tue, Apr 27, 2021 at 01:17:06PM -0700, Bart Van Assche wrote:
>> On 4/27/21 8:10 AM, Ming Lei wrote:
>>> +void blk_mq_put_rq_ref(struct request *rq)
>>> +{
>>> +	if (is_flush_rq(rq, rq->mq_hctx))
>>> +		rq->end_io(rq, 0);
>>> +	else if (refcount_dec_and_test(&rq->ref))
>>> +		__blk_mq_free_request(rq);
>>> +}
>>
>> The above function needs more work. blk_mq_put_rq_ref() may be called from
>> multiple CPUs concurrently and hence must handle concurrent calls safely.
>> The flush .end_io callbacks have not been designed to handle concurrent
>> calls.
> 
> static void flush_end_io(struct request *flush_rq, blk_status_t error)
> {
>         struct request_queue *q = flush_rq->q;
>         struct list_head *running;
>         struct request *rq, *n;
>         unsigned long flags = 0;
>         struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);
> 
>         /* release the tag's ownership to the req cloned from */
>         spin_lock_irqsave(&fq->mq_flush_lock, flags);
> 
>         if (!refcount_dec_and_test(&flush_rq->ref)) {
>                 fq->rq_status = error;
>                 spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
>                 return;
>         }
> 		...
> 		spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
> }
> 
> Both the spin lock and refcount_dec_and_test() are called at the beginning
> of flush_end_io(), so it is safe with respect to concurrent calls.
> 
> Otherwise, this would simply be an issue between normal completion and
> timeout, since the pattern in this patch is the same as the timeout path.
> 
> Or am I missing something?

The following code from blk_flush_restore_request() modifies the end_io
pointer:

	rq->end_io = rq->flush.saved_end_io;

If blk_mq_put_rq_ref() is called from two different contexts, then one of
the two resulting rq->end_io(rq, 0) calls races with the end_io assignment
in blk_flush_restore_request().

Bart.
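
For reference, blk_flush_restore_request() in block/blk-flush.c reads roughly as
follows at the time of this series:

	static void blk_flush_restore_request(struct request *rq)
	{
		/*
		 * After flush data completion, @rq->bio is %NULL but we need to
		 * complete the bio again.  @rq->biotail is guaranteed to equal the
		 * original @rq->bio.  Restore it.
		 */
		rq->bio = rq->biotail;

		/* make @rq a normal request */
		rq->rq_flags &= ~RQF_FLUSH_SEQ;
		rq->end_io = rq->flush.saved_end_io;
	}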
Ming Lei April 28, 2021, 2:22 a.m. UTC | #4
On Wed, Apr 28, 2021 at 9:37 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 4/27/21 5:07 PM, Ming Lei wrote:
> > On Tue, Apr 27, 2021 at 01:17:06PM -0700, Bart Van Assche wrote:
> >> On 4/27/21 8:10 AM, Ming Lei wrote:
> >>> +void blk_mq_put_rq_ref(struct request *rq)
> >>> +{
> >>> +   if (is_flush_rq(rq, rq->mq_hctx))
> >>> +           rq->end_io(rq, 0);
> >>> +   else if (refcount_dec_and_test(&rq->ref))
> >>> +           __blk_mq_free_request(rq);
> >>> +}
> >>
> >> The above function needs more work. blk_mq_put_rq_ref() may be called from
> >> multiple CPUs concurrently and hence must handle concurrent calls safely.
> >> The flush .end_io callbacks have not been designed to handle concurrent
> >> calls.
> >
> > static void flush_end_io(struct request *flush_rq, blk_status_t error)
> > {
> >         struct request_queue *q = flush_rq->q;
> >         struct list_head *running;
> >         struct request *rq, *n;
> >         unsigned long flags = 0;
> >         struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);
> >
> >         /* release the tag's ownership to the req cloned from */
> >         spin_lock_irqsave(&fq->mq_flush_lock, flags);
> >
> >         if (!refcount_dec_and_test(&flush_rq->ref)) {
> >                 fq->rq_status = error;
> >                 spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
> >                 return;
> >         }
> >               ...
> >               spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
> > }
> >
> > Both the spin lock and refcount_dec_and_test() are called at the beginning
> > of flush_end_io(), so it is safe with respect to concurrent calls.
> >
> > Otherwise, this would simply be an issue between normal completion and
> > timeout, since the pattern in this patch is the same as the timeout path.
> >
> > Or am I missing something?
>
> The following code from blk_flush_restore_request() modifies the end_io
> pointer:
>
>         rq->end_io = rq->flush.saved_end_io;

blk_flush_restore_request() is only done for requests passed to
blk_insert_flush(). Here we only call ->end_io() for flush_rq, which is
the internal flush request instance; please see the is_flush_rq()
definition. Also, flush_rq->end_io always points to flush_end_io().

So the issue you mention doesn't exist here.

Thanks,
Ming
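
For reference, is_flush_rq() is a small inline helper in block/blk.h; at the time
of this series it reads roughly:

	static inline bool is_flush_rq(struct request *req, struct blk_mq_hw_ctx *hctx)
	{
		return hctx->fq->flush_rq == req;
	}

so it matches only the per-hctx internal flush request, whose end_io always
points to flush_end_io().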

Patch

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 2a37731e8244..9329b94a9743 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -264,6 +264,8 @@  static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
 	struct blk_mq_tags *tags = iter_data->tags;
 	bool reserved = iter_data->flags & BT_TAG_ITER_RESERVED;
 	struct request *rq;
+	bool ret;
+	bool iter_static_rqs = !!(iter_data->flags & BT_TAG_ITER_STATIC_RQS);
 
 	if (!reserved)
 		bitnr += tags->nr_reserved_tags;
@@ -272,16 +274,21 @@  static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
 	 * We can hit rq == NULL here, because the tagging functions
 	 * test and set the bit before assigning ->rqs[].
 	 */
-	if (iter_data->flags & BT_TAG_ITER_STATIC_RQS)
+	if (iter_static_rqs)
 		rq = tags->static_rqs[bitnr];
-	else
+	else {
 		rq = tags->rqs[bitnr];
-	if (!rq)
-		return true;
+		if (!rq || !refcount_inc_not_zero(&rq->ref))
+			return true;
+	}
 	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
 	    !blk_mq_request_started(rq))
-		return true;
-	return iter_data->fn(rq, iter_data->data, reserved);
+		ret = true;
+	else
+		ret = iter_data->fn(rq, iter_data->data, reserved);
+	if (!iter_static_rqs)
+		blk_mq_put_rq_ref(rq);
+	return ret;
 }
 
 /**
@@ -348,6 +355,9 @@  void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
  *		indicates whether or not @rq is a reserved request. Return
  *		true to continue iterating tags, false to stop.
  * @priv:	Will be passed as second argument to @fn.
+ *
+ * We grab one request reference before calling @fn and release it after
+ * @fn returns.
  */
 void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 		busy_tag_iter_fn *fn, void *priv)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 927189a55575..4bd6c11bd8bc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -909,6 +909,14 @@  static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
 	return false;
 }
 
+void blk_mq_put_rq_ref(struct request *rq)
+{
+	if (is_flush_rq(rq, rq->mq_hctx))
+		rq->end_io(rq, 0);
+	else if (refcount_dec_and_test(&rq->ref))
+		__blk_mq_free_request(rq);
+}
+
 static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 		struct request *rq, void *priv, bool reserved)
 {
@@ -942,11 +950,7 @@  static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 	if (blk_mq_req_expired(rq, next))
 		blk_mq_rq_timed_out(rq, reserved);
 
-	if (is_flush_rq(rq, hctx))
-		rq->end_io(rq, 0);
-	else if (refcount_dec_and_test(&rq->ref))
-		__blk_mq_free_request(rq);
-
+	blk_mq_put_rq_ref(rq);
 	return true;
 }
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 3616453ca28c..143afe42c63a 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -47,6 +47,7 @@  void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
 void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
 struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
 					struct blk_mq_ctx *start);
+void blk_mq_put_rq_ref(struct request *rq);
 
 /*
  * Internal helpers for allocating/freeing the request map
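
For context, here is a minimal, hypothetical driver-side sketch of how
blk_mq_tagset_busy_iter() is typically used (the names count_inflight and
demo_count_inflight are illustrative, not part of the patch). With this patch
applied, the callback can safely dereference rq even if it races with normal
completion, because a reference is held across the call:

	/* Illustrative callback: count started (in-flight) requests. */
	static bool count_inflight(struct request *rq, void *data, bool reserved)
	{
		unsigned int *count = data;

		if (blk_mq_request_started(rq))
			(*count)++;
		return true;	/* keep iterating */
	}

	static unsigned int demo_count_inflight(struct blk_mq_tag_set *set)
	{
		unsigned int count = 0;

		blk_mq_tagset_busy_iter(set, count_inflight, &count);
		return count;
	}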