
[6/7] block: cleanup ioc_clear_queue

Message ID 20211130124636.2505904-7-hch@lst.de
State New, archived
Series [1/7] block: remove the nr_task field from struct io_context

Commit Message

Christoph Hellwig Nov. 30, 2021, 12:46 p.m. UTC
Fold __ioc_clear_queue into ioc_clear_queue, remove the spurious RCU
critical section and unify the irq locking style.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-ioc.c | 31 +++++++++----------------------
 1 file changed, 9 insertions(+), 22 deletions(-)

Comments

Jan Kara Nov. 30, 2021, 5:26 p.m. UTC | #1
On Tue 30-11-21 13:46:35, Christoph Hellwig wrote:
> Fold __ioc_clear_queue into ioc_clear_queue, remove the spurious RCU
> critical section and unify the irq locking style.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-ioc.c | 31 +++++++++----------------------
>  1 file changed, 9 insertions(+), 22 deletions(-)
> 
> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> index 902bca2b273ba..32ae006e1b3e8 100644
> --- a/block/blk-ioc.c
> +++ b/block/blk-ioc.c
> @@ -192,27 +192,6 @@ void exit_io_context(struct task_struct *task)
>  	}
>  }
>  
> -static void __ioc_clear_queue(struct list_head *icq_list)
> -{
> -	unsigned long flags;
> -
> -	rcu_read_lock();
> -	while (!list_empty(icq_list)) {
> -		struct io_cq *icq = list_entry(icq_list->next,
> -						struct io_cq, q_node);
> -		struct io_context *ioc = icq->ioc;
> -
> -		spin_lock_irqsave(&ioc->lock, flags);
> -		if (icq->flags & ICQ_DESTROYED) {
> -			spin_unlock_irqrestore(&ioc->lock, flags);
> -			continue;
> -		}
> -		ioc_destroy_icq(icq);
> -		spin_unlock_irqrestore(&ioc->lock, flags);
> -	}
> -	rcu_read_unlock();
> -}
> -
>  /**
>   * ioc_clear_queue - break any ioc association with the specified queue
>   * @q: request_queue being cleared
> @@ -227,7 +206,15 @@ void ioc_clear_queue(struct request_queue *q)
>  	list_splice_init(&q->icq_list, &icq_list);
>  	spin_unlock_irq(&q->queue_lock);
>  
> -	__ioc_clear_queue(&icq_list);

> +	while (!list_empty(&icq_list)) {
> +		struct io_cq *icq =
> +			list_entry(icq_list.next, struct io_cq, q_node);

I'm not quite sure about dropping the rcu protection here. This function
generally runs without any protection so what guards us against icq being
freed just after we've got its pointer from the list?

								Honza

> +
> +		spin_lock_irq(&icq->ioc->lock);
> +		if (!(icq->flags & ICQ_DESTROYED))
> +			ioc_destroy_icq(icq);
> +		spin_unlock_irq(&icq->ioc->lock);
> +	}
>  }
>  
>  static struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
> -- 
> 2.30.2
>
Christoph Hellwig Dec. 1, 2021, 7:27 a.m. UTC | #2
On Tue, Nov 30, 2021 at 06:26:13PM +0100, Jan Kara wrote:
> I'm not quite sure about dropping the rcu protection here. This function
> generally runs without any protection so what guards us against icq being
> freed just after we've got its pointer from the list?

How does the RCU protection scheme work for the icq lookups?
ioc_lookup_icq takes it and then drops it before getting any kind
of refcount, so this all looks weird.  But I guess you are right that
I should probably keep this cargo-culted scheme unless I have an
actual plan on how this could work.

While we're at it:  I don't see how put_io_context could
be called under q->queue_lock and thus actually need the whole
workqueue scheme.  Then again we really need to do an audit on
queue_lock and split it into actually documented locks now that the
old request code is gone.
Jan Kara Dec. 1, 2021, 2:33 p.m. UTC | #3
On Wed 01-12-21 08:27:02, Christoph Hellwig wrote:
> On Tue, Nov 30, 2021 at 06:26:13PM +0100, Jan Kara wrote:
> > I'm not quite sure about dropping the rcu protection here. This function
> > generally runs without any protection so what guards us against icq being
> > freed just after we've got its pointer from the list?
> 
> How does the RCU protection scheme work for the icq lookups?
> ioc_lookup_icq takes it and then drops it before getting any kind
> of refcount, so this all looks weird.  But I guess you are right that
> I should probably keep this cargo culted scheme unless I have an
> actual plan on how this could work.

I agree the RCU there looks like a bit of cargo-cult and would need better
documentation if nothing else. But I think the logic behind the RCU
protection inside __ioc_clear_queue() is that it lets you safely acquire
ioc->lock and check the ICQ_DESTROYED flag - the flag should be set if the
icq has already been freed, and if it is not set, you hold ioc->lock and so
have won the race to free the icq.
For ioc_lookup_icq() I'm not sure what's going on there; the RCU looks
completely pointless.
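
The tombstone pattern described here - take ioc->lock and destroy the icq
only if ICQ_DESTROYED is still clear, so whichever path takes the lock first
wins the race - can be sketched as a plain userspace C analogue. This is an
illustration only: pthread mutexes stand in for the kernel spinlock, and
struct fake_icq, mark_destroyed() and clear_list_demo() are made-up names,
not block-layer APIs.

```c
/* Userspace sketch of the tombstone-flag-under-lock pattern.
 * All names are illustrative; this is not the real blk-ioc code. */
#include <pthread.h>

#define ICQ_DESTROYED 0x1

struct fake_icq {
	pthread_mutex_t lock;	/* stands in for icq->ioc->lock */
	unsigned int flags;
	struct fake_icq *next;
};

/* Analogue of the other teardown path (e.g. task exit): it takes the
 * lock and sets the tombstone, "winning" the race for this icq. */
static void mark_destroyed(struct fake_icq *icq)
{
	pthread_mutex_lock(&icq->lock);
	icq->flags |= ICQ_DESTROYED;
	pthread_mutex_unlock(&icq->lock);
}

/* Analogue of ioc_clear_queue(): walk the list and destroy only the
 * entries nobody beat us to.  Returns how many this path destroyed. */
static int clear_list_demo(void)
{
	struct fake_icq a = { PTHREAD_MUTEX_INITIALIZER, 0, NULL };
	struct fake_icq b = { PTHREAD_MUTEX_INITIALIZER, 0, &a };
	struct fake_icq *head = &b;
	int destroyed = 0;

	/* Simulate the exit path having already won the race for "b". */
	mark_destroyed(&b);

	for (struct fake_icq *icq = head; icq; icq = icq->next) {
		pthread_mutex_lock(&icq->lock);
		if (!(icq->flags & ICQ_DESTROYED)) {
			/* We took the lock first: set the tombstone and
			 * "free" the icq. */
			icq->flags |= ICQ_DESTROYED;
			destroyed++;
		}
		pthread_mutex_unlock(&icq->lock);
	}
	return destroyed;	/* only "a"; "b" was already tombstoned */
}
```

The point of the flag is just that both teardown sides serialize on the
same per-ioc lock and check it before freeing, so a doubly-freed icq is
impossible; what the lock alone does not give you is a guarantee that the
icq memory itself is still there when you first touch it, which is the
hole the RCU read section is meant to cover.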

> While we're at it:  I don't see how put_io_context could
> be called under q->queue_lock and thus actually need the whole
> workqueue scheme.

I don't see that either, but I think in the past an equivalent of
blk_mq_free_request() could get called during request merging while holding
all the locks (I have just recently fixed a deadlock due to this in BFQ by
postponing the freeing of merged requests to the caller), and
blk_mq_free_request() will call put_io_context(). So at this point I don't
think it is needed anymore.

> Then again we really need to do an audit on queue_lock and split it into
> actually documented locks now that the old request code is gone.

A worthy goal :)
								Honza

Patch

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 902bca2b273ba..32ae006e1b3e8 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -192,27 +192,6 @@  void exit_io_context(struct task_struct *task)
 	}
 }
 
-static void __ioc_clear_queue(struct list_head *icq_list)
-{
-	unsigned long flags;
-
-	rcu_read_lock();
-	while (!list_empty(icq_list)) {
-		struct io_cq *icq = list_entry(icq_list->next,
-						struct io_cq, q_node);
-		struct io_context *ioc = icq->ioc;
-
-		spin_lock_irqsave(&ioc->lock, flags);
-		if (icq->flags & ICQ_DESTROYED) {
-			spin_unlock_irqrestore(&ioc->lock, flags);
-			continue;
-		}
-		ioc_destroy_icq(icq);
-		spin_unlock_irqrestore(&ioc->lock, flags);
-	}
-	rcu_read_unlock();
-}
-
 /**
  * ioc_clear_queue - break any ioc association with the specified queue
  * @q: request_queue being cleared
@@ -227,7 +206,15 @@  void ioc_clear_queue(struct request_queue *q)
 	list_splice_init(&q->icq_list, &icq_list);
 	spin_unlock_irq(&q->queue_lock);
 
-	__ioc_clear_queue(&icq_list);
+	while (!list_empty(&icq_list)) {
+		struct io_cq *icq =
+			list_entry(icq_list.next, struct io_cq, q_node);
+
+		spin_lock_irq(&icq->ioc->lock);
+		if (!(icq->flags & ICQ_DESTROYED))
+			ioc_destroy_icq(icq);
+		spin_unlock_irq(&icq->ioc->lock);
+	}
 }
 
 static struct io_context *alloc_io_context(gfp_t gfp_flags, int node)