[v2,2/2] block: Fix a race between the throttling code and request queue initialization

Message ID 1517501761.2746.21.camel@wdc.com (mailing list archive)
State New, archived

Commit Message

Bart Van Assche Feb. 1, 2018, 4:16 p.m. UTC
On Thu, 2018-02-01 at 09:53 +0800, Joseph Qi wrote:
> I'm afraid the risk may also exist in blk_cleanup_queue, which will
> set queue_lock to the default internal lock.
> 
> spin_lock_irq(lock);
> if (q->queue_lock != &q->__queue_lock)
> 	q->queue_lock = &q->__queue_lock;
> spin_unlock_irq(lock);
> 
> I'm thinking of getting blkg->q->queue_lock into a local variable first,
> but this will result in still using the driver lock even after the
> queue_lock has already been set to the default internal lock.

Hello Joseph,

I think the race between the queue_lock assignment in blk_cleanup_queue()
and the use of that pointer by cgroup attributes could be solved by
removing the visibility of these attributes from blk_cleanup_queue() instead
of __blk_release_queue(). However, the last time I proposed moving code
from __blk_release_queue() into blk_cleanup_queue(), I received feedback
from some kernel developers that they didn't like this.

Is the block driver that triggered the race on the q->queue_lock assignment
using legacy (single queue) or multiqueue (blk-mq) mode? If that driver is
using legacy mode, are you aware that there are plans to remove legacy mode
from the upstream kernel? And if your driver is using multiqueue mode, how
about the following change instead of the two patches in this patch series:

--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1093,7 +1093,7 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 		return NULL;
 
 	q->request_fn = rfn;
-	if (lock)
+	if (!q->mq_ops && lock)
 		q->queue_lock = lock;
 	if (blk_init_allocated_queue(q) < 0) {
 		blk_cleanup_queue(q);

Thanks,

Bart.

Comments

Joseph Qi Feb. 2, 2018, 1:02 a.m. UTC | #1
Hi Bart,

On 18/2/2 00:16, Bart Van Assche wrote:
> On Thu, 2018-02-01 at 09:53 +0800, Joseph Qi wrote:
>> I'm afraid the risk may also exist in blk_cleanup_queue, which will
>> set queue_lock to the default internal lock.
>>
>> spin_lock_irq(lock);
>> if (q->queue_lock != &q->__queue_lock)
>> 	q->queue_lock = &q->__queue_lock;
>> spin_unlock_irq(lock);
>>
>> I'm thinking of getting blkg->q->queue_lock into a local variable first,
>> but this will result in still using the driver lock even after the
>> queue_lock has already been set to the default internal lock.
> 
> Hello Joseph,
> 
> I think the race between the queue_lock assignment in blk_cleanup_queue()
> and the use of that pointer by cgroup attributes could be solved by
> removing the visibility of these attributes from blk_cleanup_queue() instead
> of __blk_release_queue(). However, the last time I proposed moving code
> from __blk_release_queue() into blk_cleanup_queue(), I received feedback
> from some kernel developers that they didn't like this.
> 
> Is the block driver that triggered the race on the q->queue_lock assignment
> using legacy (single queue) or multiqueue (blk-mq) mode? If that driver is
> using legacy mode, are you aware that there are plans to remove legacy mode
> from the upstream kernel? And if your driver is using multiqueue mode, how
> about the following change instead of the two patches in this patch series:
> 
We triggered this race when using single queue. I'm not sure if it
exists in multi-queue.
Do you mean upstream won't fix bugs any more in single queue?

Thanks,
Joseph

> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1093,7 +1093,7 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
>  		return NULL;
>  
>  	q->request_fn = rfn;
> -	if (lock)
> +	if (!q->mq_ops && lock)
>  		q->queue_lock = lock;
>  	if (blk_init_allocated_queue(q) < 0) {
>  		blk_cleanup_queue(q);
> 
> Thanks,
> 
> Bart.
>
Jens Axboe Feb. 2, 2018, 2:52 p.m. UTC | #2
On 2/1/18 6:02 PM, Joseph Qi wrote:
> Hi Bart,
> 
> On 18/2/2 00:16, Bart Van Assche wrote:
>> On Thu, 2018-02-01 at 09:53 +0800, Joseph Qi wrote:
>>> I'm afraid the risk may also exist in blk_cleanup_queue, which will
>>> set queue_lock to the default internal lock.
>>>
>>> spin_lock_irq(lock);
>>> if (q->queue_lock != &q->__queue_lock)
>>> 	q->queue_lock = &q->__queue_lock;
>>> spin_unlock_irq(lock);
>>>
>>> I'm thinking of getting blkg->q->queue_lock into a local variable first,
>>> but this will result in still using the driver lock even after the
>>> queue_lock has already been set to the default internal lock.
>>
>> Hello Joseph,
>>
>> I think the race between the queue_lock assignment in blk_cleanup_queue()
>> and the use of that pointer by cgroup attributes could be solved by
>> removing the visibility of these attributes from blk_cleanup_queue() instead
>> of __blk_release_queue(). However, the last time I proposed moving code
>> from __blk_release_queue() into blk_cleanup_queue(), I received feedback
>> from some kernel developers that they didn't like this.
>>
>> Is the block driver that triggered the race on the q->queue_lock assignment
>> using legacy (single queue) or multiqueue (blk-mq) mode? If that driver is
>> using legacy mode, are you aware that there are plans to remove legacy mode
>> from the upstream kernel? And if your driver is using multiqueue mode, how
>> about the following change instead of the two patches in this patch series:
>>
> We triggered this race when using single queue. I'm not sure if it
> exists in multi-queue.
> Do you mean upstream won't fix bugs any more in single queue?

No, we'll still fix bugs in the legacy path; we just won't introduce
any new features or accept any new drivers that use that path.
Ultimately that path will go away once there are no more users,
but until then it is maintained.
Bart Van Assche Feb. 2, 2018, 4:21 p.m. UTC | #3
On Fri, 2018-02-02 at 09:02 +0800, Joseph Qi wrote:
> We triggered this race when using single queue. I'm not sure if it
> exists in multi-queue.

Regarding the races between modifying the queue_lock pointer and the code that
uses that pointer, I think the following construct in blk_cleanup_queue() is
sufficient to avoid races between the queue_lock pointer assignment and the code
that executes concurrently with blk_cleanup_queue():

	spin_lock_irq(lock);
	if (q->queue_lock != &q->__queue_lock)
		q->queue_lock = &q->__queue_lock;
	spin_unlock_irq(lock);

In other words, I think that this patch series should be sufficient to address
all races between .queue_lock assignments and the code that uses that pointer.

Thanks,

Bart.
Joseph Qi Feb. 3, 2018, 2:51 a.m. UTC | #4
Hi Bart,

On 18/2/3 00:21, Bart Van Assche wrote:
> On Fri, 2018-02-02 at 09:02 +0800, Joseph Qi wrote:
>> We triggered this race when using single queue. I'm not sure if it
>> exists in multi-queue.
> 
> Regarding the races between modifying the queue_lock pointer and the code that
> uses that pointer, I think the following construct in blk_cleanup_queue() is
> sufficient to avoid races between the queue_lock pointer assignment and the code
> that executes concurrently with blk_cleanup_queue():
> 
> 	spin_lock_irq(lock);
> 	if (q->queue_lock != &q->__queue_lock)
> 		q->queue_lock = &q->__queue_lock;
> 	spin_unlock_irq(lock);
> 
IMO, the race still exists.

blk_cleanup_queue                   blkcg_print_blkgs
  spin_lock_irq(lock) (1)           spin_lock_irq(blkg->q->queue_lock) (2,5)
    q->queue_lock = &q->__queue_lock (3)
  spin_unlock_irq(lock) (4)
                                    spin_unlock_irq(blkg->q->queue_lock) (6)

(1) take driver lock;
(2) busy loop for driver lock;
(3) override driver lock with internal lock;
(4) unlock driver lock; 
(5) can take driver lock now;
(6) but unlock internal lock.

If we save blkg->q->queue_lock to a local variable first, as
blk_cleanup_queue() does, that indeed fixes the mismatched lock/unlock.
But since blk_cleanup_queue() has already overridden the queue lock with
the internal lock by then, I'm afraid we still can't safely use the
driver lock in blkcg_print_blkgs().

Thanks,
Joseph

> In other words, I think that this patch series should be sufficient to address
> all races between .queue_lock assignments and the code that uses that pointer.
> 
> Thanks,
> 
> Bart.
>
Bart Van Assche Feb. 5, 2018, 5:58 p.m. UTC | #5
On Sat, 2018-02-03 at 10:51 +0800, Joseph Qi wrote:
> Hi Bart,
> 
> On 18/2/3 00:21, Bart Van Assche wrote:
> > On Fri, 2018-02-02 at 09:02 +0800, Joseph Qi wrote:
> > > We triggered this race when using single queue. I'm not sure if it
> > > exists in multi-queue.
> > 
> > Regarding the races between modifying the queue_lock pointer and the code that
> > uses that pointer, I think the following construct in blk_cleanup_queue() is
> > sufficient to avoid races between the queue_lock pointer assignment and the code
> > that executes concurrently with blk_cleanup_queue():
> > 
> > 	spin_lock_irq(lock);
> > 	if (q->queue_lock != &q->__queue_lock)
> > 		q->queue_lock = &q->__queue_lock;
> > 	spin_unlock_irq(lock);
> 
> IMO, the race still exists.
> 
> blk_cleanup_queue                   blkcg_print_blkgs
>   spin_lock_irq(lock) (1)           spin_lock_irq(blkg->q->queue_lock) (2,5)
>     q->queue_lock = &q->__queue_lock (3)
>   spin_unlock_irq(lock) (4)
>                                     spin_unlock_irq(blkg->q->queue_lock) (6)
> 
> (1) take driver lock;
> (2) busy loop for driver lock;
> (3) override driver lock with internal lock;
> (4) unlock driver lock;
> (5) can take driver lock now;
> (6) but unlock internal lock.
> 
> If we save blkg->q->queue_lock to a local variable first, as
> blk_cleanup_queue() does, that indeed fixes the mismatched lock/unlock.
> But since blk_cleanup_queue() has already overridden the queue lock with
> the internal lock by then, I'm afraid we still can't safely use the
> driver lock in blkcg_print_blkgs().

(+ Jan Kara)

Hello Joseph,

That's a good catch. Since modifying all code that accesses the queue_lock
pointer and that can race with blk_cleanup_queue() would be too cumbersome, I
see only one solution, namely making the request queue cgroup and sysfs
attributes invisible before the queue_lock pointer is restored. Leaving the
debugfs attributes visible while blk_cleanup_queue() is in progress should
be fine if the request queue initialization code is modified such that it
only modifies the queue_lock pointer for legacy queues. Jan, I think some
time ago you objected when I proposed to move code from __blk_release_queue()
into blk_cleanup_queue(). Would you be fine with a slightly different
approach, namely making the block cgroup and sysfs attributes invisible
earlier, i.e. from inside blk_cleanup_queue() instead of from inside
__blk_release_queue()?
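
Sketched as code, the ordering that approach implies for blk_cleanup_queue()
would look roughly like this (an illustrative sketch only; the
attribute-removal step is deliberately left abstract):

void blk_cleanup_queue(struct request_queue *q)
{
	spinlock_t *lock = q->queue_lock;

	/* 1. Make the cgroup and sysfs attributes unreachable while the
	 * driver lock is still valid, so no attribute handler can load
	 * q->queue_lock across the switch below. */
	/* ... remove cgroup and sysfs attributes here ... */

	/* 2. Only then restore the queue-internal lock. */
	spin_lock_irq(lock);
	if (q->queue_lock != &q->__queue_lock)
		q->queue_lock = &q->__queue_lock;
	spin_unlock_irq(lock);

	/* ... remainder of the existing cleanup ... */
}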

Thanks,

Bart.
Jan Kara Feb. 7, 2018, 11:54 a.m. UTC | #6
On Mon 05-02-18 17:58:12, Bart Van Assche wrote:
> On Sat, 2018-02-03 at 10:51 +0800, Joseph Qi wrote:
> > Hi Bart,
> > 
> > On 18/2/3 00:21, Bart Van Assche wrote:
> > > On Fri, 2018-02-02 at 09:02 +0800, Joseph Qi wrote:
> > > > We triggered this race when using single queue. I'm not sure if it
> > > > exists in multi-queue.
> > > 
> > > Regarding the races between modifying the queue_lock pointer and the code that
> > > uses that pointer, I think the following construct in blk_cleanup_queue() is
> > > sufficient to avoid races between the queue_lock pointer assignment and the code
> > > that executes concurrently with blk_cleanup_queue():
> > > 
> > > 	spin_lock_irq(lock);
> > > 	if (q->queue_lock != &q->__queue_lock)
> > > 		q->queue_lock = &q->__queue_lock;
> > > 	spin_unlock_irq(lock);
> > > 
> > 
> > IMO, the race still exists.
> > 
> > blk_cleanup_queue                   blkcg_print_blkgs
> >   spin_lock_irq(lock) (1)           spin_lock_irq(blkg->q->queue_lock) (2,5)
> >     q->queue_lock = &q->__queue_lock (3)
> >   spin_unlock_irq(lock) (4)
> >                                     spin_unlock_irq(blkg->q->queue_lock) (6)
> > 
> > (1) take driver lock;
> > (2) busy loop for driver lock;
> > (3) override driver lock with internal lock;
> > (4) unlock driver lock; 
> > (5) can take driver lock now;
> > (6) but unlock internal lock.
> > 
> > If we save blkg->q->queue_lock to a local variable first, as
> > blk_cleanup_queue() does, that indeed fixes the mismatched lock/unlock.
> > But since blk_cleanup_queue() has already overridden the queue lock with
> > the internal lock by then, I'm afraid we still can't safely use the
> > driver lock in blkcg_print_blkgs().
> 
> (+ Jan Kara)
> 
> Hello Joseph,
> 
> That's a good catch. Since modifying all code that accesses the queue_lock
> pointer and that can race with blk_cleanup_queue() would be too cumbersome, I
> see only one solution, namely making the request queue cgroup and sysfs
> attributes invisible before the queue_lock pointer is restored. Leaving the
> debugfs attributes visible while blk_cleanup_queue() is in progress should
> be fine if the request queue initialization code is modified such that it
> only modifies the queue_lock pointer for legacy queues. Jan, I think some
> time ago you objected when I proposed to move code from __blk_release_queue()
> into blk_cleanup_queue(). Would you be fine with a slightly different
> approach, namely making the block cgroup and sysfs attributes invisible
> earlier, i.e. from inside blk_cleanup_queue() instead of from inside
> __blk_release_queue()?

Making attributes invisible earlier should be fine. But this whole
switching of queue_lock in blk_cleanup_queue() looks error-prone to me.
Generally, anyone having access to the request_queue can have the old value
of q->queue_lock in its CPU caches and can happily use that value after
blk_cleanup_queue() finishes and the driver-specific structure storing the
lock is freed. blkcg_print_blkgs() is one such example, but I wouldn't bet a
penny that there are no other paths with a similar problem.

Logically, the lifetime of the storage for q->queue_lock should be at least
as long as that of the request_queue itself - i.e., it should be released
only after __blk_release_queue(). Otherwise all users of q->queue_lock need
proper synchronization against the lock switch in blk_cleanup_queue(). Either
of these looks like a lot of work. I guess since this involves only the
legacy path, your approach of removing the sysfs attributes earlier might be
a reasonable band-aid.

								Honza
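
To make that stale-pointer hazard concrete, here is a hypothetical
interleaving (an illustrative sketch, not actual kernel code):

	/* CPU 0, e.g. a cgroup attribute handler: */
	spinlock_t *lock = q->queue_lock;	/* still reads the driver's lock */

	/* CPU 1: blk_cleanup_queue() switches q->queue_lock to
	 * &q->__queue_lock and returns; the driver then frees the
	 * structure that embeds the old lock. */

	/* CPU 0, later: */
	spin_lock_irq(lock);			/* use-after-free of the driver's lock */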

Patch

--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1093,7 +1093,7 @@  blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
 		return NULL;
 
 	q->request_fn = rfn;
-	if (lock)
+	if (!q->mq_ops && lock)
 		q->queue_lock = lock;
 	if (blk_init_allocated_queue(q) < 0) {
 		blk_cleanup_queue(q);
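
For reference, the effect of the added !q->mq_ops check, annotated
(illustrative comments, not part of the patch itself):

	if (!q->mq_ops && lock)		/* install the driver's lock on the legacy path only */
		q->queue_lock = lock;
	/* For blk-mq queues, q->queue_lock keeps pointing at &q->__queue_lock
	 * for the queue's whole lifetime, so no concurrent user can be left
	 * holding a pointer into driver-owned memory. */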