
[v2] blk_iocost: remove some duplicate irq disable/enables

Message ID Zv0kudA9xyGdaA4g@stanley.mountain (mailing list archive)
State New, archived
Series [v2] blk_iocost: remove some duplicate irq disable/enables

Commit Message

Dan Carpenter Oct. 2, 2024, 10:47 a.m. UTC
These functions are called from blkcg_print_blkgs(), which already disables
IRQs, so disabling them again is wrong.  It means that IRQs will be enabled
slightly earlier than intended; however, as far as I can see, this bug is
harmless.

Fixes: 35198e323001 ("blk-iocost: read params inside lock in sysfs apis")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
v2: Fix typo in the subject

 block/blk-iocost.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Comments

Jens Axboe Oct. 2, 2024, 1:17 p.m. UTC | #1
On Wed, 02 Oct 2024 13:47:21 +0300, Dan Carpenter wrote:
> These are called from blkcg_print_blkgs() which already disables IRQs so
> disabling it again is wrong.  It means that IRQs will be enabled slightly
> earlier than intended, however, so far as I can see, this bug is harmless.
> 
> 

Applied, thanks!

[1/1] blk_iocost: remove some duplicate irq disable/enables
      commit: 14d57ec3b86369d0037567f12caae0c9e9eaad9e

Best regards,
Waiman Long Oct. 2, 2024, 5:49 p.m. UTC | #2
On 10/2/24 06:47, Dan Carpenter wrote:
> These are called from blkcg_print_blkgs() which already disables IRQs so
> disabling it again is wrong.  It means that IRQs will be enabled slightly
> earlier than intended, however, so far as I can see, this bug is harmless.
>
> Fixes: 35198e323001 ("blk-iocost: read params inside lock in sysfs apis")
> Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
> v2: Fix typo in the subject
>
>   block/blk-iocost.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-iocost.c b/block/blk-iocost.c
> index 9dc9323f84ac..384aa15e8260 100644
> --- a/block/blk-iocost.c
> +++ b/block/blk-iocost.c
> @@ -3166,7 +3166,7 @@ static u64 ioc_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
>   	if (!dname)
>   		return 0;
>   
> -	spin_lock_irq(&ioc->lock);
> +	spin_lock(&ioc->lock);
>   	seq_printf(sf, "%s enable=%d ctrl=%s rpct=%u.%02u rlat=%u wpct=%u.%02u wlat=%u min=%u.%02u max=%u.%02u\n",
>   		   dname, ioc->enabled, ioc->user_qos_params ? "user" : "auto",
>   		   ioc->params.qos[QOS_RPPM] / 10000,
> @@ -3179,7 +3179,7 @@ static u64 ioc_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
>   		   ioc->params.qos[QOS_MIN] % 10000 / 100,
>   		   ioc->params.qos[QOS_MAX] / 10000,
>   		   ioc->params.qos[QOS_MAX] % 10000 / 100);
> -	spin_unlock_irq(&ioc->lock);
> +	spin_unlock(&ioc->lock);
>   	return 0;
>   }
>   
> @@ -3366,14 +3366,14 @@ static u64 ioc_cost_model_prfill(struct seq_file *sf,
>   	if (!dname)
>   		return 0;
>   
> -	spin_lock_irq(&ioc->lock);
> +	spin_lock(&ioc->lock);
>   	seq_printf(sf, "%s ctrl=%s model=linear "
>   		   "rbps=%llu rseqiops=%llu rrandiops=%llu "
>   		   "wbps=%llu wseqiops=%llu wrandiops=%llu\n",
>   		   dname, ioc->user_cost_model ? "user" : "auto",
>   		   u[I_LCOEF_RBPS], u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS],
>   		   u[I_LCOEF_WBPS], u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS]);
> -	spin_unlock_irq(&ioc->lock);
> +	spin_unlock(&ioc->lock);
>   	return 0;
>   }
>   

I would suggest adding a "lockdep_assert_irqs_disabled()" call before the 
spin_lock() to confirm that IRQs are indeed disabled, just in case the 
callers are changed in the future.

Cheers,
Longman
Dan Carpenter Oct. 2, 2024, 6:10 p.m. UTC | #3
On Wed, Oct 02, 2024 at 01:49:48PM -0400, Waiman Long wrote:
> > -	spin_unlock_irq(&ioc->lock);
> > +	spin_unlock(&ioc->lock);
> >   	return 0;
> >   }
> 
> I would suggest adding a "lockdep_assert_irqs_disabled()" call before
> spin_lock() to confirm that irq is indeed disabled just in case the callers
> are changed in the future.

It's really hard to predict future bugs.  I doubt we'll add new callers.
Outputting this information to a struct seq_file *sf is pretty specific.

If there were a bug related to this, then wouldn't it be caught by lockdep?

The other idea is that we could catch bugs like this using static analysis:
every time we take &ioc->lock, either IRQs should already be disabled or we
should disable them ourselves.  I could write a Smatch check like this.

KTODO: add Smatch check to ensure IRQs are disabled for &ioc->lock

regards,
dan carpenter
Waiman Long Oct. 2, 2024, 6:40 p.m. UTC | #4
On 10/2/24 14:10, Dan Carpenter wrote:
> On Wed, Oct 02, 2024 at 01:49:48PM -0400, Waiman Long wrote:
>>> -	spin_unlock_irq(&ioc->lock);
>>> +	spin_unlock(&ioc->lock);
>>>    	return 0;
>>>    }
>> I would suggest adding a "lockdep_assert_irqs_disabled()" call before
>> spin_lock() to confirm that irq is indeed disabled just in case the callers
>> are changed in the future.
> It's really hard to predict future bugs.  I doubt we'll add new callers.
> Outputting this information to a struct seq_file *sf is pretty specific.
>
> If there were a bug related to this, then wouldn't it be caught by lockdep?
>
> The other idea is that we could catch bugs like this using static analysis.
> Like every time we take the &ioc->lock, either IRQs should already be disabled
> or we disable it ourselves.  I could write a Smatch check like this.
>
> KTODO: add Smatch check to ensure IRQs are disabled for &ioc->lock

This is just a suggestion, and it is fine if you don't think it is 
necessary. The call can also serve as a comment that IRQs should have 
been disabled at this point.

Cheers,
Longman
Dan Carpenter Oct. 3, 2024, 12:03 p.m. UTC | #5
On Wed, Oct 02, 2024 at 02:40:52PM -0400, Waiman Long wrote:
> 
> On 10/2/24 14:10, Dan Carpenter wrote:
> > On Wed, Oct 02, 2024 at 01:49:48PM -0400, Waiman Long wrote:
> > > > -	spin_unlock_irq(&ioc->lock);
> > > > +	spin_unlock(&ioc->lock);
> > > >    	return 0;
> > > >    }
> > > I would suggest adding a "lockdep_assert_irqs_disabled()" call before
> > > spin_lock() to confirm that irq is indeed disabled just in case the callers
> > > are changed in the future.
> > It's really hard to predict future bugs.  I doubt we'll add new callers.
> > Outputting this information to a struct seq_file *sf is pretty specific.
> > 
> > If there were a bug related to this, then wouldn't it be caught by lockdep?
> > 
> > The other idea is that we could catch bugs like this using static analysis.
> > Like every time we take the &ioc->lock, either IRQs should already be disabled
> > or we disable it ourselves.  I could write a Smatch check like this.
> > 
> > KTODO: add Smatch check to ensure IRQs are disabled for &ioc->lock
> 
> This is just a suggestion and it is fine if you don't think it is necessary.
> The call can also serve as a comment that irq should have been disabled at
> this point.

I mean, it's good to think about preventing future bugs.  I just feel that
adding asserts is probably more useful when there are a lot of call paths.
Meanwhile, if we add a static checker rule, then we're probably going to
find bugs.  Boom, maybe I've found one already?:

block/blk-iocost.c:3144 ioc_weight_write() warn: expected irq_disable for '&iocg->ioc->lock'

block/blk-iocost.c
  3090  static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf,
  3091                                  size_t nbytes, loff_t off)
  3092  {
  3093          struct blkcg *blkcg = css_to_blkcg(of_css(of));
  3094          struct ioc_cgrp *iocc = blkcg_to_iocc(blkcg);
  3095          struct blkg_conf_ctx ctx;
  3096          struct ioc_now now;
  3097          struct ioc_gq *iocg;
  3098          u32 v;
  3099          int ret;
  3100  
  3101          if (!strchr(buf, ':')) {
  3102                  struct blkcg_gq *blkg;
  3103  
  3104                  if (!sscanf(buf, "default %u", &v) && !sscanf(buf, "%u", &v))
  3105                          return -EINVAL;
  3106  
  3107                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
  3108                          return -EINVAL;
  3109  
  3110                  spin_lock_irq(&blkcg->lock);

Here we disable IRQs.

  3111                  iocc->dfl_weight = v * WEIGHT_ONE;
  3112                  hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
  3113                          struct ioc_gq *iocg = blkg_to_iocg(blkg);
  3114  
  3115                          if (iocg) {
  3116                                  spin_lock(&iocg->ioc->lock);

So this is fine.

  3117                                  ioc_now(iocg->ioc, &now);
  3118                                  weight_updated(iocg, &now);
  3119                                  spin_unlock(&iocg->ioc->lock);
  3120                          }
  3121                  }
  3122                  spin_unlock_irq(&blkcg->lock);
  3123  
  3124                  return nbytes;
  3125          }
  3126  
  3127          blkg_conf_init(&ctx, buf);
  3128  
  3129          ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
  3130          if (ret)
  3131                  goto err;
  3132  
  3133          iocg = blkg_to_iocg(ctx.blkg);
  3134  
  3135          if (!strncmp(ctx.body, "default", 7)) {
  3136                  v = 0;
  3137          } else {
  3138                  if (!sscanf(ctx.body, "%u", &v))
  3139                          goto einval;
  3140                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
  3141                          goto einval;
  3142          }
  3143  
  3144          spin_lock(&iocg->ioc->lock);

But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
fine.

  3145          iocg->cfg_weight = v * WEIGHT_ONE;
  3146          ioc_now(iocg->ioc, &now);
  3147          weight_updated(iocg, &now);
  3148          spin_unlock(&iocg->ioc->lock);
  3149  
  3150          blkg_conf_exit(&ctx);
  3151          return nbytes;
  3152  
  3153  einval:
  3154          ret = -EINVAL;
  3155  err:
  3156          blkg_conf_exit(&ctx);
  3157          return ret;
  3158  }

regards,
dan carpenter
Jens Axboe Oct. 3, 2024, 1:21 p.m. UTC | #6
On 10/3/24 6:03 AM, Dan Carpenter wrote:
>   3117                                  ioc_now(iocg->ioc, &now);
>   3118                                  weight_updated(iocg, &now);
>   3119                                  spin_unlock(&iocg->ioc->lock);
>   3120                          }
>   3121                  }
>   3122                  spin_unlock_irq(&blkcg->lock);
>   3123  
>   3124                  return nbytes;
>   3125          }
>   3126  
>   3127          blkg_conf_init(&ctx, buf);
>   3128  
>   3129          ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
>   3130          if (ret)
>   3131                  goto err;
>   3132  
>   3133          iocg = blkg_to_iocg(ctx.blkg);
>   3134  
>   3135          if (!strncmp(ctx.body, "default", 7)) {
>   3136                  v = 0;
>   3137          } else {
>   3138                  if (!sscanf(ctx.body, "%u", &v))
>   3139                          goto einval;
>   3140                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
>   3141                          goto einval;
>   3142          }
>   3143  
>   3144          spin_lock(&iocg->ioc->lock);
> 
> But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
> fine.

That's a bug.
Dan Carpenter Oct. 3, 2024, 2:31 p.m. UTC | #7
On Thu, Oct 03, 2024 at 07:21:25AM -0600, Jens Axboe wrote:
> On 10/3/24 6:03 AM, Dan Carpenter wrote:
> >   3117                                  ioc_now(iocg->ioc, &now);
> >   3118                                  weight_updated(iocg, &now);
> >   3119                                  spin_unlock(&iocg->ioc->lock);
> >   3120                          }
> >   3121                  }
> >   3122                  spin_unlock_irq(&blkcg->lock);
> >   3123  
> >   3124                  return nbytes;
> >   3125          }
> >   3126  
> >   3127          blkg_conf_init(&ctx, buf);
> >   3128  
> >   3129          ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
> >   3130          if (ret)
> >   3131                  goto err;
> >   3132  
> >   3133          iocg = blkg_to_iocg(ctx.blkg);
> >   3134  
> >   3135          if (!strncmp(ctx.body, "default", 7)) {
> >   3136                  v = 0;
> >   3137          } else {
> >   3138                  if (!sscanf(ctx.body, "%u", &v))
> >   3139                          goto einval;
> >   3140                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
> >   3141                          goto einval;
> >   3142          }
> >   3143  
> >   3144          spin_lock(&iocg->ioc->lock);
> > 
> > But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
> > fine.
> 
> That's a bug.
> 

I could obviously write this patch, but I feel stupid writing the commit message.
My level of understanding is monkey see, monkey do.  Could you take care of this?

So somewhere we're taking a lock in the IRQ handler and this can lead to a
deadlock? I thought this would have been caught by lockdep?

regards,
dan carpenter
Jens Axboe Oct. 3, 2024, 2:38 p.m. UTC | #8
On 10/3/24 8:31 AM, Dan Carpenter wrote:
> On Thu, Oct 03, 2024 at 07:21:25AM -0600, Jens Axboe wrote:
>> On 10/3/24 6:03 AM, Dan Carpenter wrote:
>>>   3117                                  ioc_now(iocg->ioc, &now);
>>>   3118                                  weight_updated(iocg, &now);
>>>   3119                                  spin_unlock(&iocg->ioc->lock);
>>>   3120                          }
>>>   3121                  }
>>>   3122                  spin_unlock_irq(&blkcg->lock);
>>>   3123  
>>>   3124                  return nbytes;
>>>   3125          }
>>>   3126  
>>>   3127          blkg_conf_init(&ctx, buf);
>>>   3128  
>>>   3129          ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
>>>   3130          if (ret)
>>>   3131                  goto err;
>>>   3132  
>>>   3133          iocg = blkg_to_iocg(ctx.blkg);
>>>   3134  
>>>   3135          if (!strncmp(ctx.body, "default", 7)) {
>>>   3136                  v = 0;
>>>   3137          } else {
>>>   3138                  if (!sscanf(ctx.body, "%u", &v))
>>>   3139                          goto einval;
>>>   3140                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
>>>   3141                          goto einval;
>>>   3142          }
>>>   3143  
>>>   3144          spin_lock(&iocg->ioc->lock);
>>>
>>> But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
>>> fine.
>>
>> That's a bug.
>>
> 
> I could obviously write this patch but I feel stupid writing the
> commit message. My level of understanding is Monkey See Monkey do.
> Could you take care of this?

Sure - or let's add Tejun who knows this code better. Ah he's already
added. Tejun?

> So somewhere we're taking a lock in the IRQ handler and this can lead
> to a deadlock? I thought this would have been caught by lockdep?

It's nested inside blkcg->lock, which is IRQ safe, and that is enough. But
doing a quick scan of the file, the usage is definitely (wildly)
inconsistent. Most times ioc->lock is grabbed with interrupts disabled, but
there are also uses that don't disable interrupts, coming from things
like seq_file show paths, which certainly look like they need it. lockdep
should certainly warn about this; the only explanation I have is that nobody
bothered to do that :-)
Waiman Long Oct. 3, 2024, 3:49 p.m. UTC | #9
On 10/3/24 10:38, Jens Axboe wrote:
> On 10/3/24 8:31 AM, Dan Carpenter wrote:
>> On Thu, Oct 03, 2024 at 07:21:25AM -0600, Jens Axboe wrote:
>>> On 10/3/24 6:03 AM, Dan Carpenter wrote:
>>>>    3117                                  ioc_now(iocg->ioc, &now);
>>>>    3118                                  weight_updated(iocg, &now);
>>>>    3119                                  spin_unlock(&iocg->ioc->lock);
>>>>    3120                          }
>>>>    3121                  }
>>>>    3122                  spin_unlock_irq(&blkcg->lock);
>>>>    3123
>>>>    3124                  return nbytes;
>>>>    3125          }
>>>>    3126
>>>>    3127          blkg_conf_init(&ctx, buf);
>>>>    3128
>>>>    3129          ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
>>>>    3130          if (ret)
>>>>    3131                  goto err;
>>>>    3132
>>>>    3133          iocg = blkg_to_iocg(ctx.blkg);
>>>>    3134
>>>>    3135          if (!strncmp(ctx.body, "default", 7)) {
>>>>    3136                  v = 0;
>>>>    3137          } else {
>>>>    3138                  if (!sscanf(ctx.body, "%u", &v))
>>>>    3139                          goto einval;
>>>>    3140                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
>>>>    3141                          goto einval;
>>>>    3142          }
>>>>    3143
>>>>    3144          spin_lock(&iocg->ioc->lock);
>>>>
>>>> But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
>>>> fine.
>>> That's a bug.
>>>
>> I could obviously write this patch but I feel stupid writing the
>> commit message. My level of understanding is Monkey See Monkey do.
>> Could you take care of this?
> Sure - or let's add Tejun who knows this code better. Ah he's already
> added. Tejun?
>
>> So somewhere we're taking a lock in the IRQ handler and this can lead
>> to a deadlock? I thought this would have been caught by lockdep?
> It's nested inside blkcg->lock which is IRQ safe, that is enough. But
> doing a quick scan of the file, the usage is definitely (wildly)
> inconsistent. Most times ioc->lock is grabbed disabling interrupts, but
> there are also uses that don't disable interrupts, coming from things
> like seq_file show paths which certainly look like they need it. lockdep
> should certainly warn about this, only explanation I have is that nobody
> bothered to do that :-)

The lockdep validator will only warn about this if a debug kernel with 
lockdep enabled has run a workload that exercises all the relevant 
locking sequences that could lead to a deadlock.

Cheers,
Longman
Jens Axboe Oct. 3, 2024, 4:24 p.m. UTC | #10
On 10/3/24 9:49 AM, Waiman Long wrote:
> On 10/3/24 10:38, Jens Axboe wrote:
>> On 10/3/24 8:31 AM, Dan Carpenter wrote:
>>> On Thu, Oct 03, 2024 at 07:21:25AM -0600, Jens Axboe wrote:
>>>> On 10/3/24 6:03 AM, Dan Carpenter wrote:
>>>>>    3117                                  ioc_now(iocg->ioc, &now);
>>>>>    3118                                  weight_updated(iocg, &now);
>>>>>    3119                                  spin_unlock(&iocg->ioc->lock);
>>>>>    3120                          }
>>>>>    3121                  }
>>>>>    3122                  spin_unlock_irq(&blkcg->lock);
>>>>>    3123
>>>>>    3124                  return nbytes;
>>>>>    3125          }
>>>>>    3126
>>>>>    3127          blkg_conf_init(&ctx, buf);
>>>>>    3128
>>>>>    3129          ret = blkg_conf_prep(blkcg, &blkcg_policy_iocost, &ctx);
>>>>>    3130          if (ret)
>>>>>    3131                  goto err;
>>>>>    3132
>>>>>    3133          iocg = blkg_to_iocg(ctx.blkg);
>>>>>    3134
>>>>>    3135          if (!strncmp(ctx.body, "default", 7)) {
>>>>>    3136                  v = 0;
>>>>>    3137          } else {
>>>>>    3138                  if (!sscanf(ctx.body, "%u", &v))
>>>>>    3139                          goto einval;
>>>>>    3140                  if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
>>>>>    3141                          goto einval;
>>>>>    3142          }
>>>>>    3143
>>>>>    3144          spin_lock(&iocg->ioc->lock);
>>>>>
>>>>> But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
>>>>> fine.
>>>> That's a bug.
>>>>
>>> I could obviously write this patch but I feel stupid writing the
>>> commit message. My level of understanding is Monkey See Monkey do.
>>> Could you take care of this?
>> Sure - or let's add Tejun who knows this code better. Ah he's already
>> added. Tejun?
>>
>>> So somewhere we're taking a lock in the IRQ handler and this can lead
>>> to a deadlock? I thought this would have been caught by lockdep?
>> It's nested inside blkcg->lock which is IRQ safe, that is enough. But
>> doing a quick scan of the file, the usage is definitely (wildly)
>> inconsistent. Most times ioc->lock is grabbed disabling interrupts, but
>> there are also uses that don't disable interrupts, coming from things
>> like seq_file show paths which certainly look like they need it. lockdep
>> should certainly warn about this, only explanation I have is that nobody
>> bothered to do that :-)
> 
> The lockdep validator will only warn about this if a debug kernel with
> lockdep enabled has run a workload that exercises all the relevant
> locking sequences that can implicate a potential for deadlock.

Sure, that's obvious, but there are quite a few easy ones in there, so it
seems like it should be easy to trigger. It's not like it's only some
odd path; the irq on/off looks trivial.
Tejun Heo Oct. 3, 2024, 9:22 p.m. UTC | #11
Hello,

On Thu, Oct 03, 2024 at 08:38:48AM -0600, Jens Axboe wrote:
...
> >>>   3144          spin_lock(&iocg->ioc->lock);
> >>>
> >>> But why is this not spin_lock_irq()?  I haven't analyzed this so maybe it's
> >>> fine.
> >>
> >> That's a bug.
> >>
> > 
> > I could obviously write this patch but I feel stupid writing the
> > commit message. My level of understanding is Monkey See Monkey do.
> > Could you take care of this?
> 
> Sure - or let's add Tejun who knows this code better. Ah he's already
> added. Tejun?

Yeah, that should be spin_lock_irq() for consistency but at the same time it
doesn't look like anything is actually grabbing that lock (or blkcg->lock
nesting outside of it) from an IRQ context, so no actual deadlock scenario
exists and lockdep doesn't trigger.

> > So somewhere we're taking a lock in the IRQ handler and this can lead
> > to a deadlock? I thought this would have been caught by lockdep?
> 
> It's nested inside blkcg->lock which is IRQ safe, that is enough. But
> doing a quick scan of the file, the usage is definitely (widly)
> inconsistent. Most times ioc->lock is grabbed disabling interrupts, but

Hmm... the only place I see is the one Dan pointed out.

> there are also uses that doesn't disable interrupts, coming from things
> like seq_file show paths which certainly look like they need it. lockdep
> should certainly warn about this, only explanation I have is that nobody
> bothered to do that :-)

The locks are intended to be IRQ-safe, but it looks like they don't need to
be, at least for now. I'll send a patch to update the ioc_weight_write()
pair.

Thanks.
Tejun Heo Oct. 3, 2024, 9:30 p.m. UTC | #12
On Thu, Oct 03, 2024 at 11:22:09AM -1000, Tejun Heo wrote:
> Yeah, that should be spin_lock_irq() for consistency but at the same time it
> doesn't look like anything is actually grabbing that lock (or blkcg->lock
> nesting outside of it) from an IRQ context, so no actual deadlock scenario
> exists and lockdep doesn't trigger.

Oh, wait, it's not that. blkg_conf_prep() implies queue_lock, so the IRQ is
disabled around it and adding _irq will trigger lockdep.

Thanks.
Jens Axboe Oct. 3, 2024, 10:06 p.m. UTC | #13
On 10/3/24 3:30 PM, Tejun Heo wrote:
> On Thu, Oct 03, 2024 at 11:22:09AM -1000, Tejun Heo wrote:
>> Yeah, that should be spin_lock_irq() for consistency but at the same time it
>> doesn't look like anything is actually grabbing that lock (or blkcg->lock
>> nesting outside of it) from an IRQ context, so no actual deadlock scenario
>> exists and lockdep doesn't trigger.
> 
> Oh, wait, it's not that. blkg_conf_prep() implies queue_lock, so the IRQ is
> disabled around it and adding _irq will trigger lockdep.

Ah makes sense, didn't realize it was nested under the queue lock. Then it
does look like it's just that one spot.
Dan Carpenter Oct. 4, 2024, 10:53 a.m. UTC | #14
On Thu, Oct 03, 2024 at 11:30:44AM -1000, Tejun Heo wrote:
> On Thu, Oct 03, 2024 at 11:22:09AM -1000, Tejun Heo wrote:
> > Yeah, that should be spin_lock_irq() for consistency but at the same time it
> > doesn't look like anything is actually grabbing that lock (or blkcg->lock
> > nesting outside of it) from an IRQ context, so no actual deadlock scenario
> > exists and lockdep doesn't trigger.
> 
> Oh, wait, it's not that. blkg_conf_prep() implies queue_lock, so the IRQ is
> disabled around it and adding _irq will trigger lockdep.
> 

Ugh...  Yeah.  Sorry for the noise on this.  I've fixed my checker to not
print this warning any more.

regards,
dan carpenter

Patch

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 9dc9323f84ac..384aa15e8260 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -3166,7 +3166,7 @@  static u64 ioc_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
 	if (!dname)
 		return 0;
 
-	spin_lock_irq(&ioc->lock);
+	spin_lock(&ioc->lock);
 	seq_printf(sf, "%s enable=%d ctrl=%s rpct=%u.%02u rlat=%u wpct=%u.%02u wlat=%u min=%u.%02u max=%u.%02u\n",
 		   dname, ioc->enabled, ioc->user_qos_params ? "user" : "auto",
 		   ioc->params.qos[QOS_RPPM] / 10000,
@@ -3179,7 +3179,7 @@  static u64 ioc_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
 		   ioc->params.qos[QOS_MIN] % 10000 / 100,
 		   ioc->params.qos[QOS_MAX] / 10000,
 		   ioc->params.qos[QOS_MAX] % 10000 / 100);
-	spin_unlock_irq(&ioc->lock);
+	spin_unlock(&ioc->lock);
 	return 0;
 }
 
@@ -3366,14 +3366,14 @@  static u64 ioc_cost_model_prfill(struct seq_file *sf,
 	if (!dname)
 		return 0;
 
-	spin_lock_irq(&ioc->lock);
+	spin_lock(&ioc->lock);
 	seq_printf(sf, "%s ctrl=%s model=linear "
 		   "rbps=%llu rseqiops=%llu rrandiops=%llu "
 		   "wbps=%llu wseqiops=%llu wrandiops=%llu\n",
 		   dname, ioc->user_cost_model ? "user" : "auto",
 		   u[I_LCOEF_RBPS], u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS],
 		   u[I_LCOEF_WBPS], u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS]);
-	spin_unlock_irq(&ioc->lock);
+	spin_unlock(&ioc->lock);
 	return 0;
 }