
rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled

Message ID 20221019104045.160283-1-qiang1.zhang@intel.com (mailing list archive)
State Superseded
Series rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled

Commit Message

Zqiang Oct. 19, 2022, 10:40 a.m. UTC
Currently, call_rcu() is always lazy regardless of whether
CONFIG_RCU_LAZY is enabled, which also means that when
CONFIG_RCU_LAZY is disabled, call_rcu_flush() is lazy as well.
Therefore, this commit makes call_rcu() lazy only when
CONFIG_RCU_LAZY is enabled.

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
 kernel/rcu/tree.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
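
For context, when CONFIG_RCU_LAZY is disabled, call_rcu_flush() falls
back to call_rcu() in the header, which is why a lazy call_rcu() makes
the flush variant lazy as well. Roughly (a reconstructed sketch of the
fallback used by this series, not part of this patch):

#ifdef CONFIG_RCU_LAZY
void call_rcu_flush(struct rcu_head *head, rcu_callback_t func);
#else
/* Without CONFIG_RCU_LAZY, the "flush" variant is just call_rcu(). */
static inline void call_rcu_flush(struct rcu_head *head,
				  rcu_callback_t func)
{
	call_rcu(head, func);
}
#endif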

Comments

Joel Fernandes Oct. 19, 2022, 12:10 p.m. UTC | #1
> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
> 
> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
> invoke the call_rcu() is always lazy, it also means that when
> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
> lazy. therefore, this commit make call_rcu() lazy only when
> CONFIG_RCU_LAZY is enabled.
> 
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> ---
> kernel/rcu/tree.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index abc615808b6e..97ef602da3d5 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
>    return __call_rcu_common(head, func, false);
> }
> EXPORT_SYMBOL_GPL(call_rcu_flush);
> -#endif
> 
> /**
>  * call_rcu() - Queue an RCU callback for invocation after a grace period.
> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>    return __call_rcu_common(head, func, true);
> }
> EXPORT_SYMBOL_GPL(call_rcu);
> +#else
> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
> +{
> +    return __call_rcu_common(head, func, false);

Thanks. Instead of adding a new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
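
For reference, that would look roughly like the sketch below (illustrative
only, not the patch that was eventually posted):

void call_rcu(struct rcu_head *head, rcu_callback_t func)
{
	/* Lazy only when CONFIG_RCU_LAZY=y, resolved at compile time. */
	return __call_rcu_common(head, func, IS_ENABLED(CONFIG_RCU_LAZY));
}
EXPORT_SYMBOL_GPL(call_rcu);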

Looks like, though, I made everyone test the patch without having to enable the config option ;-). Hey, I’m a glass-half-full kind of guy, why do you ask?

Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?

Thanks.

- Joel


> +}
> +EXPORT_SYMBOL_GPL(call_rcu);
> +#endif
> 
> /* Maximum number of jiffies to wait before draining a batch. */
> #define KFREE_DRAIN_JIFFIES (5 * HZ)
> -- 
> 2.25.1
>
Joel Fernandes Oct. 19, 2022, 12:12 p.m. UTC | #2
> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
> 
> 
> 
>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
>> 
>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
>> invoke the call_rcu() is always lazy, it also means that when
>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
>> lazy. therefore, this commit make call_rcu() lazy only when
>> CONFIG_RCU_LAZY is enabled.
>> 
>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>> ---
>> kernel/rcu/tree.c | 8 +++++++-
>> 1 file changed, 7 insertions(+), 1 deletion(-)
>> 
>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>> index abc615808b6e..97ef602da3d5 100644
>> --- a/kernel/rcu/tree.c
>> +++ b/kernel/rcu/tree.c
>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
>>   return __call_rcu_common(head, func, false);
>> }
>> EXPORT_SYMBOL_GPL(call_rcu_flush);
>> -#endif
>> 
>> /**
>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>   return __call_rcu_common(head, func, true);
>> }
>> EXPORT_SYMBOL_GPL(call_rcu);
>> +#else
>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
>> +{
>> +    return __call_rcu_common(head, func, false);
> 
> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
> 
> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
> 
> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?

On the other hand, what I’d want is to nuke the config option altogether or make it default y; we want to catch issues sooner rather than later.

Thanks.

> 
> Thanks.
> 
> - Joel
> 
> 
>> +}
>> +EXPORT_SYMBOL_GPL(call_rcu);
>> +#endif
>> 
>> /* Maximum number of jiffies to wait before draining a batch. */
>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>> -- 
>> 2.25.1
>>
Paul E. McKenney Oct. 19, 2022, 5:44 p.m. UTC | #3
On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
> > On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
> >> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
> >> 
> >> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
> >> invoke the call_rcu() is always lazy, it also means that when
> >> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
> >> lazy. therefore, this commit make call_rcu() lazy only when
> >> CONFIG_RCU_LAZY is enabled.

First, good eyes!  Thank you for spotting this!!!

> >> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> >> ---
> >> kernel/rcu/tree.c | 8 +++++++-
> >> 1 file changed, 7 insertions(+), 1 deletion(-)
> >> 
> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> index abc615808b6e..97ef602da3d5 100644
> >> --- a/kernel/rcu/tree.c
> >> +++ b/kernel/rcu/tree.c
> >> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
> >>   return __call_rcu_common(head, func, false);
> >> }
> >> EXPORT_SYMBOL_GPL(call_rcu_flush);
> >> -#endif
> >> 
> >> /**
> >> * call_rcu() - Queue an RCU callback for invocation after a grace period.
> >> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
> >>   return __call_rcu_common(head, func, true);
> >> }
> >> EXPORT_SYMBOL_GPL(call_rcu);
> >> +#else
> >> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
> >> +{
> >> +    return __call_rcu_common(head, func, false);
> > 
> > Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.

I do like this approach better -- less code, more obvious what is going on.

> > Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
> > 
> > Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
> 
> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.

That might be what we do at some point, but one thing at a time.  Let's
not penalize innocent bystanders, at least not just yet.

I do very strongly encourage the ChromeOS and Android folks to test this
very severely, however.

							Thanx, Paul

> Thanks.
> 
> > 
> > Thanks.
> > 
> > - Joel
> > 
> > 
> >> +}
> >> +EXPORT_SYMBOL_GPL(call_rcu);
> >> +#endif
> >> 
> >> /* Maximum number of jiffies to wait before draining a batch. */
> >> #define KFREE_DRAIN_JIFFIES (5 * HZ)
> >> -- 
> >> 2.25.1
> >>
Joel Fernandes Oct. 19, 2022, 6:25 p.m. UTC | #4
> On Oct 19, 2022, at 1:45 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
>>> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
>>>>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
>>>>> 
>>>>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
>>>>> invoke the call_rcu() is always lazy, it also means that when
>>>>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
>>>>> lazy. therefore, this commit make call_rcu() lazy only when
>>>>> CONFIG_RCU_LAZY is enabled.
>> 
>> First, good eyes!  Thank you for spotting this!!
>>>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>>>>> ---
>>>>> kernel/rcu/tree.c | 8 +++++++-
>>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>>>>> index abc615808b6e..97ef602da3d5 100644
>>>>> --- a/kernel/rcu/tree.c
>>>>> +++ b/kernel/rcu/tree.c
>>>>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
>>>>>  return __call_rcu_common(head, func, false);
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(call_rcu_flush);
>>>>> -#endif
>>>>> 
>>>>> /**
>>>>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
>>>>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>>  return __call_rcu_common(head, func, true);
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(call_rcu);
>>>>> +#else
>>>>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>> +{
>>>>> +    return __call_rcu_common(head, func, false);
>>> 
>>> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
> 
> I do like this approach better -- less code, more obvious what is going on.

Sounds good. Zqiang, do you mind updating your patch along these lines? That way you get the proper attribution.

More comments below:
> 
>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
>>> 
>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
>> 
>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
> 
> That might be what we do at some point, but one thing at a time.  Let's
> not penalize innocent bystanders, at least not just yet.

It’s a trade-off; I thought that’s why we wanted to have the binary-search stuff. If no one reports an issue on Linux-next, then that code won’t be put to use in the near future, at least.

> I do very strongly encourage the ChromeOS and Android folks to test this
> very severely, however.

Agreed. Yes, that will happen, though I have to make a note for Android folks other than Vlad to backport these (and enable the config option) carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.

Thanks!

 - Joel

> 
>                            Thanx, Paul
> 
>> Thanks.
>> 
>>> 
>>> Thanks.
>>> 
>>> - Joel
>>> 
>>> 
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(call_rcu);
>>>> +#endif
>>>> 
>>>> /* Maximum number of jiffies to wait before draining a batch. */
>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>>>> -- 
>>>> 2.25.1
>>>>
Zqiang Oct. 19, 2022, 11:21 p.m. UTC | #5
> On Oct 19, 2022, at 1:45 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
>>> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
>>>>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
>>>>> 
>>>>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
>>>>> invoke the call_rcu() is always lazy, it also means that when
>>>>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
>>>>> lazy. therefore, this commit make call_rcu() lazy only when
>>>>> CONFIG_RCU_LAZY is enabled.
>> 
>> First, good eyes!  Thank you for spotting this!!
>>>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>>>>> ---
>>>>> kernel/rcu/tree.c | 8 +++++++-
>>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>>>>> index abc615808b6e..97ef602da3d5 100644
>>>>> --- a/kernel/rcu/tree.c
>>>>> +++ b/kernel/rcu/tree.c
>>>>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
>>>>>  return __call_rcu_common(head, func, false);
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(call_rcu_flush);
>>>>> -#endif
>>>>> 
>>>>> /**
>>>>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
>>>>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>>  return __call_rcu_common(head, func, true);
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(call_rcu);
>>>>> +#else
>>>>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>> +{
>>>>> +    return __call_rcu_common(head, func, false);
>>> 
>>> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
> 
> I do like this approach better -- less code, more obvious what is going on.
>
>Sounds good. Zqiang, do you mind updating your patch along these lines? That way you get the proper attribution.
>

Thanks Joel and Paul for the review, I will update my patch and resend.

Thanks
Zqiang

>More comments below:
> 
>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
>>> 
>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
>> 
>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
> 
> That might be what we do at some point, but one thing at a time.  Let's
> not penalize innocent bystanders, at least not just yet.
>
>It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.
>
> I do very strongly encourage the ChromeOS and Android folks to test this
> very severely, however.
>
>Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.
>
>Thanks!
>
> - Joel
>
> 
>                            Thanx, Paul
> 
>> Thanks.
>> 
>>> 
>>> Thanks.
>>> 
>>> - Joel
>>> 
>>> 
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(call_rcu);
>>>> +#endif
>>>> 
>>>> /* Maximum number of jiffies to wait before draining a batch. */
>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>>>> -- 
>>>> 2.25.1
>>>>
Paul E. McKenney Oct. 19, 2022, 11:34 p.m. UTC | #6
On Wed, Oct 19, 2022 at 02:25:29PM -0400, Joel Fernandes wrote:
> 
> 
> > On Oct 19, 2022, at 1:45 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> > 
> > On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
> >>> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
> >>>>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
> >>>>> 
> >>>>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
> >>>>> invoke the call_rcu() is always lazy, it also means that when
> >>>>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
> >>>>> lazy. therefore, this commit make call_rcu() lazy only when
> >>>>> CONFIG_RCU_LAZY is enabled.
> >> 
> >> First, good eyes!  Thank you for spotting this!!
> >>>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> >>>>> ---
> >>>>> kernel/rcu/tree.c | 8 +++++++-
> >>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
> >>>>> 
> >>>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >>>>> index abc615808b6e..97ef602da3d5 100644
> >>>>> --- a/kernel/rcu/tree.c
> >>>>> +++ b/kernel/rcu/tree.c
> >>>>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
> >>>>>  return __call_rcu_common(head, func, false);
> >>>>> }
> >>>>> EXPORT_SYMBOL_GPL(call_rcu_flush);
> >>>>> -#endif
> >>>>> 
> >>>>> /**
> >>>>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
> >>>>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
> >>>>>  return __call_rcu_common(head, func, true);
> >>>>> }
> >>>>> EXPORT_SYMBOL_GPL(call_rcu);
> >>>>> +#else
> >>>>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
> >>>>> +{
> >>>>> +    return __call_rcu_common(head, func, false);
> >>> 
> >>> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
> > 
> > I do like this approach better -- less code, more obvious what is going on.
> 
> Sounds good. Zqiang, do you mind updating your patch along these lines? That way you get the proper attribution.
> 
> More comments below:
> > 
> >>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
> >>> 
> >>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
> >> 
> >> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
> > 
> > That might be what we do at some point, but one thing at a time.  Let's
> > not penalize innocent bystanders, at least not just yet.
> 
> It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.

Well, not to put too fine a point on it, but we currently really are
exposing -next to lazy call_rcu().  ;-)

> > I do very strongly encourage the ChromeOS and Android folks to test this
> > very severely, however.
> 
> Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.

And this is another situation in which the binary search stuff may prove
extremely useful.

							Thanx, Paul

> Thanks!
> 
>  - Joel
> 
> > 
> >                            Thanx, Paul
> > 
> >> Thanks.
> >> 
> >>> 
> >>> Thanks.
> >>> 
> >>> - Joel
> >>> 
> >>> 
> >>>> +}
> >>>> +EXPORT_SYMBOL_GPL(call_rcu);
> >>>> +#endif
> >>>> 
> >>>> /* Maximum number of jiffies to wait before draining a batch. */
> >>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
> >>>> -- 
> >>>> 2.25.1
> >>>>
Joel Fernandes Oct. 20, 2022, 8:42 a.m. UTC | #7
> On Oct 19, 2022, at 7:34 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> On Wed, Oct 19, 2022 at 02:25:29PM -0400, Joel Fernandes wrote:
>> 
>> 
>>>> On Oct 19, 2022, at 1:45 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>>> 
>>> On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
>>>>> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
>>>>>>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
>>>>>>> 
>>>>>>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
>>>>>>> invoke the call_rcu() is always lazy, it also means that when
>>>>>>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
>>>>>>> lazy. therefore, this commit make call_rcu() lazy only when
>>>>>>> CONFIG_RCU_LAZY is enabled.
>>>> 
>>>> First, good eyes!  Thank you for spotting this!!

Indeed.

>>>>>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>>>>>>> ---
>>>>>>> kernel/rcu/tree.c | 8 +++++++-
>>>>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>>>>> 
>>>>>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>>>>>>> index abc615808b6e..97ef602da3d5 100644
>>>>>>> --- a/kernel/rcu/tree.c
>>>>>>> +++ b/kernel/rcu/tree.c
>>>>>>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
>>>>>>> return __call_rcu_common(head, func, false);
>>>>>>> }
>>>>>>> EXPORT_SYMBOL_GPL(call_rcu_flush);
>>>>>>> -#endif
>>>>>>> 
>>>>>>> /**
>>>>>>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
>>>>>>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>>>> return __call_rcu_common(head, func, true);
>>>>>>> }
>>>>>>> EXPORT_SYMBOL_GPL(call_rcu);
>>>>>>> +#else
>>>>>>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>>>> +{
>>>>>>> +    return __call_rcu_common(head, func, false);
>>>>> 
>>>>> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
>>> 
>>> I do like this approach better -- less code, more obvious what is going on.
>> 
>> Sounds good. Zqiang, do you mind updating your patch along these lines? That way you get the proper attribution.

Acked that patch.

>> More comments below:
>>> 
>>>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
>>>>> 
>>>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
>>>> 
>>>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
>>> 
>>> That might be what we do at some point, but one thing at a time.  Let's
>>> not penalize innocent bystanders, at least not just yet.
>> 
>> It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.
> 
> Well, not to put too fine a point on it, but we currently really are
> exposing -next to lazy call_rcu().  ;-)

This is true. I think I assumed nobody would enable a default-off config option, but I probably meant that a smaller percentage would.

>>> I do very strongly encourage the ChromeOS and Android folks to test this
>>> very severely, however.
>> 
>> Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.
> 
> And this is another situation in which the binary search stuff may prove
> extremely useful.

Agreed. Thanks. At the very least I owe that code per-rdp splitting of the hashtable.  Steven and I talked today about how the hashtable can probably go into the rcu_segcblist itself and be protected by the nocb lock.

Thanks,

 - Joel


>                            Thanx, Paul
> 
>> Thanks!
>> 
>> - Joel
>> 
>>> 
>>>                           Thanx, Paul
>>> 
>>>> Thanks.
>>>> 
>>>>> 
>>>>> Thanks.
>>>>> 
>>>>> - Joel
>>>>> 
>>>>> 
>>>>>> +}
>>>>>> +EXPORT_SYMBOL_GPL(call_rcu);
>>>>>> +#endif
>>>>>> 
>>>>>> /* Maximum number of jiffies to wait before draining a batch. */
>>>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>>>>>> -- 
>>>>>> 2.25.1
>>>>>>
Paul E. McKenney Oct. 20, 2022, 6:39 p.m. UTC | #8
On Thu, Oct 20, 2022 at 04:42:05AM -0400, Joel Fernandes wrote:
> > On Oct 19, 2022, at 7:34 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> > 
> > On Wed, Oct 19, 2022 at 02:25:29PM -0400, Joel Fernandes wrote:
> >> 
> >> 
> >>>> On Oct 19, 2022, at 1:45 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> >>> 
> >>> On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
> >>>>> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
> >>>>>>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
> >>>>>>> 
> >>>>>>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
> >>>>>>> invoke the call_rcu() is always lazy, it also means that when
> >>>>>>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
> >>>>>>> lazy. therefore, this commit make call_rcu() lazy only when
> >>>>>>> CONFIG_RCU_LAZY is enabled.
> >>>> 
> >>>> First, good eyes!  Thank you for spotting this!!
> 
> Indeed.
> 
> >>>>>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> >>>>>>> ---
> >>>>>>> kernel/rcu/tree.c | 8 +++++++-
> >>>>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
> >>>>>>> 
> >>>>>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >>>>>>> index abc615808b6e..97ef602da3d5 100644
> >>>>>>> --- a/kernel/rcu/tree.c
> >>>>>>> +++ b/kernel/rcu/tree.c
> >>>>>>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
> >>>>>>> return __call_rcu_common(head, func, false);
> >>>>>>> }
> >>>>>>> EXPORT_SYMBOL_GPL(call_rcu_flush);
> >>>>>>> -#endif
> >>>>>>> 
> >>>>>>> /**
> >>>>>>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
> >>>>>>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
> >>>>>>> return __call_rcu_common(head, func, true);
> >>>>>>> }
> >>>>>>> EXPORT_SYMBOL_GPL(call_rcu);
> >>>>>>> +#else
> >>>>>>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
> >>>>>>> +{
> >>>>>>> +    return __call_rcu_common(head, func, false);
> >>>>> 
> >>>>> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
> >>> 
> >>> I do like this approach better -- less code, more obvious what is going on.
> >> 
> >> Sounds good. Zqiang, do you mind updating your patch along these lines? That way you get the proper attribution.
> 
> Acked that patch.
> 
> >> More comments below:
> >>> 
> >>>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
> >>>>> 
> >>>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
> >>>> 
> >>>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
> >>> 
> >>> That might be what we do at some point, but one thing at a time.  Let's
> >>> not penalize innocent bystanders, at least not just yet.
> >> 
> >> It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.
> > 
> > Well, not to put too fine a point on it, but we currently really are
> > exposing -next to lazy call_rcu().  ;-)
> 
> This is true. I think I assumed nobody will enable a default off config option but I probably meant a smaller percentage will.
> 
> >>> I do very strongly encourage the ChromeOS and Android folks to test this
> >>> very severely, however.
> >> 
> >> Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.
> > 
> > And this is another situation in which the binary search stuff may prove
> > extremely useful.
> 
> Agreed. Thanks. Very least I owe per-rdp splitting of the hashtable, to that code.  Steven and me talked today that probably the hashtable can go into the rcu_segcblist itself, and protect it by the nocb lock.

I have to ask...

How does this fit in with CPU-hotplug and callback migration?

More to the point, what events would cause us to decide that this is
required?  For example, shouldn't we give your current binary-search
code at least a few chances to save the day?

							Thanx, Paul

> >>>>>> +}
> >>>>>> +EXPORT_SYMBOL_GPL(call_rcu);
> >>>>>> +#endif
> >>>>>> 
> >>>>>> /* Maximum number of jiffies to wait before draining a batch. */
> >>>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
> >>>>>> -- 
> >>>>>> 2.25.1
> >>>>>>
Joel Fernandes Oct. 20, 2022, 6:46 p.m. UTC | #9
> On Oct 20, 2022, at 2:39 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> On Thu, Oct 20, 2022 at 04:42:05AM -0400, Joel Fernandes wrote:
>>>> On Oct 19, 2022, at 7:34 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>>> 
>>> On Wed, Oct 19, 2022 at 02:25:29PM -0400, Joel Fernandes wrote:
>>>> 
>>>> 
>>>>>> On Oct 19, 2022, at 1:45 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>>>>> 
>>>>> On Wed, Oct 19, 2022 at 08:12:30AM -0400, Joel Fernandes wrote:
>>>>>>> On Oct 19, 2022, at 8:10 AM, Joel Fernandes <joel@joelfernandes.org> wrote:
>>>>>>>>> On Oct 19, 2022, at 6:34 AM, Zqiang <qiang1.zhang@intel.com> wrote:
>>>>>>>>> 
>>>>>>>>> Currently, regardless of whether the CONFIG_RCU_LAZY is enabled,
>>>>>>>>> invoke the call_rcu() is always lazy, it also means that when
>>>>>>>>> CONFIG_RCU_LAZY is disabled, invoke the call_rcu_flush() is also
>>>>>>>>> lazy. therefore, this commit make call_rcu() lazy only when
>>>>>>>>> CONFIG_RCU_LAZY is enabled.
>>>>>> 
>>>>>> First, good eyes!  Thank you for spotting this!!
>> 
>> Indeed.
>> 
>>>>>>>>> Signed-off-by: Zqiang <qiang1.zhang@intel.com>
>>>>>>>>> ---
>>>>>>>>> kernel/rcu/tree.c | 8 +++++++-
>>>>>>>>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>>>>>>>> 
>>>>>>>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>>>>>>>>> index abc615808b6e..97ef602da3d5 100644
>>>>>>>>> --- a/kernel/rcu/tree.c
>>>>>>>>> +++ b/kernel/rcu/tree.c
>>>>>>>>> @@ -2839,7 +2839,6 @@ void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
>>>>>>>>> return __call_rcu_common(head, func, false);
>>>>>>>>> }
>>>>>>>>> EXPORT_SYMBOL_GPL(call_rcu_flush);
>>>>>>>>> -#endif
>>>>>>>>> 
>>>>>>>>> /**
>>>>>>>>> * call_rcu() - Queue an RCU callback for invocation after a grace period.
>>>>>>>>> @@ -2890,6 +2889,13 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>>>>>> return __call_rcu_common(head, func, true);
>>>>>>>>> }
>>>>>>>>> EXPORT_SYMBOL_GPL(call_rcu);
>>>>>>>>> +#else
>>>>>>>>> +void call_rcu(struct rcu_head *head, rcu_callback_t func)
>>>>>>>>> +{
>>>>>>>>> +    return __call_rcu_common(head, func, false);
>>>>>>> 
>>>>>>> Thanks. Instead of adding new function, you can also pass IS_ENABLED(CONFIG…) to the existing function of the same name.
>>>>> 
>>>>> I do like this approach better -- less code, more obvious what is going on.
>>>> 
>>>> Sounds good. Zqiang, do you mind updating your patch along these lines? That way you get the proper attribution.
>> 
>> Acked that patch.
>> 
>>>> More comments below:
>>>>> 
>>>>>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
>>>>>>> 
>>>>>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
>>>>>> 
>>>>>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
>>>>> 
>>>>> That might be what we do at some point, but one thing at a time.  Let's
>>>>> not penalize innocent bystanders, at least not just yet.
>>>> 
>>>> It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.
>>> 
>>> Well, not to put too fine a point on it, but we currently really are
>>> exposing -next to lazy call_rcu().  ;-)
>> 
>> This is true. I think I assumed nobody will enable a default off config option but I probably meant a smaller percentage will.
>> 
>>>>> I do very strongly encourage the ChromeOS and Android folks to test this
>>>>> very severely, however.
>>>> 
>>>> Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.
>>> 
>>> And this is another situation in which the binary search stuff may prove
>>> extremely useful.
>> 
>> Agreed. Thanks. Very least I owe per-rdp splitting of the hashtable, to that code.  Steven and me talked today that probably the hashtable can go into the rcu_segcblist itself, and protect it by the nocb lock.
> 
> I have to ask...
> 
> How does this fit in with CPU-hotplug and callback migration?

Yes, it will require changes and I already thought of that; the hashtable has to be updated on all such events.

> More to the point, what events would cause us to decide that this is
> required?  For example, shouldn't we give your current binary-search
> code at least a few chances to save the day?

Totally, if you’re taking the patch as is, I would be very happy. And I’ll continue to improve it with the above. But I was not sure yet if you’re taking it.

I think it’s worthwhile to take it for mainline in its current state, and I’ll also add more data about callbacks to it in the future (queuing time of the callback, etc.) -- basically all the stuff I wanted to add to rcu_head.

One reason for the above proposal is that I also want to keep it turned on in production; the current solution cannot be, due to the global locking, and is not expected to be. But it is still a worthwhile addition for debug kernels IMO.

Thanks,

 - Joel


>                            Thanx, Paul
> 
>>>>>>>> +}
>>>>>>>> +EXPORT_SYMBOL_GPL(call_rcu);
>>>>>>>> +#endif
>>>>>>>> 
>>>>>>>> /* Maximum number of jiffies to wait before draining a batch. */
>>>>>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>>>>>>>> -- 
>>>>>>>> 2.25.1
>>>>>>>>
Joel Fernandes Oct. 20, 2022, 9:33 p.m. UTC | #10
> On Oct 20, 2022, at 2:46 PM, Joel Fernandes <joel@joelfernandes.org> wrote:
>>> 
>>>>> More comments below:
>>>>>> 
>>>>>>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
>>>>>>>> 
>>>>>>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
>>>>>>> 
>>>>>>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
>>>>>> 
>>>>>> That might be what we do at some point, but one thing at a time.  Let's
>>>>>> not penalize innocent bystanders, at least not just yet.
>>>>> 
>>>>> It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.
>>>> 
>>>> Well, not to put too fine a point on it, but we currently really are
>>>> exposing -next to lazy call_rcu().  ;-)
>>> 
>>> This is true. I think I assumed nobody will enable a default off config option but I probably meant a smaller percentage will.
>>> 
>>>>>> I do very strongly encourage the ChromeOS and Android folks to test this
>>>>>> very severely, however.
>>>>> 
>>>>> Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.
>>>> 
>>>> And this is another situation in which the binary search stuff may prove
>>>> extremely useful.
>>> 
>>> Agreed. Thanks. Very least I owe per-rdp splitting of the hashtable, to that code.  Steven and me talked today that probably the hashtable can go into the rcu_segcblist itself, and protect it by the nocb lock.
>> 
>> I have to ask...
>> 
>> How does this fit in with CPU-hotplug and callback migration?
> 
> Yes it will require change and I already thought of that, have to update the hashtable on all such events.
> 
>> More to the point, what events would cause us to decide that this is
>> required?  For example, shouldn't we give your current binary-search
>> code at least a few chances to save the day?
> 
> Totally, if you’re taking the patch as is, I would be very happy. And I’ll continue to improve it with the above. But I was not sure yet if you’re taking it.
> 
> I think it’s a worthwhile to take it for mainline in the current state and I’ll also add more data about callbacks to it in future (queuing time of callback, etc) — basically all the stuff I wanted to add to rcu_head.
> 
> One reason for the above proposal is I also want to keep it turned on in production, and the current solution cannot be, due to the global locking and is not expected to be kept on in production. But is still a worthwhile addition for debug kernels IMO.

I realized while talking to Steve that the hashtable has to be per-CPU if we are to store more than a lazy flag, such as queuing timestamps. This is because you can have multiple callbacks with the same function pointer queued on multiple CPUs, so there can be multiple timestamps to store. The same applies if we stored automata. It’s per callback instance, not per callback function.
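
To make that concrete, a per-CPU table keyed by the rcu_head pointer could
look roughly like the sketch below (hypothetical names and layout, only to
illustrate the direction; CPU-hotplug callback migration would still need
to update these tables):

#include <linux/hashtable.h>
#include <linux/jiffies.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* One record per queued callback instance, keyed by the rcu_head pointer. */
struct cb_debug_ent {
	struct rcu_head *head;		/* callback instance (hash key) */
	unsigned long queue_jiffies;	/* when the callback was queued */
	bool lazy;			/* queued via the lazy path? */
	struct hlist_node node;
};

struct cb_debug_table {
	DECLARE_HASHTABLE(ents, 8);
};

static DEFINE_PER_CPU(struct cb_debug_table, cb_debug);

/* Record a callback as it is queued; caller holds the relevant nocb lock. */
static void cb_debug_record(struct rcu_head *head, bool lazy)
{
	struct cb_debug_ent *ent = kmalloc(sizeof(*ent), GFP_ATOMIC);

	if (!ent)
		return;
	ent->head = head;
	ent->queue_jiffies = jiffies;
	ent->lazy = lazy;
	hash_add(this_cpu_ptr(&cb_debug)->ents, &ent->node, (unsigned long)head);
}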

Thanks,

 - Joel


> 
> Thanks,
> 
> - Joel
> 
> 
>>                           Thanx, Paul
>> 
>>>>>>>>> +}
>>>>>>>>> +EXPORT_SYMBOL_GPL(call_rcu);
>>>>>>>>> +#endif
>>>>>>>>> 
>>>>>>>>> /* Maximum number of jiffies to wait before draining a batch. */
>>>>>>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>>>>>>>>> -- 
>>>>>>>>> 2.25.1
>>>>>>>>>
Paul E. McKenney Oct. 20, 2022, 10:17 p.m. UTC | #11
On Thu, Oct 20, 2022 at 05:33:37PM -0400, Joel Fernandes wrote:
> 
> 
> > On Oct 20, 2022, at 2:46 PM, Joel Fernandes <joel@joelfernandes.org> wrote:
> >>> 
> >>>>> More comments below:
> >>>>>> 
> >>>>>>>> Looks like though I made every one test the patch without having to enable the config option ;-). Hey, I’m a half glass full kind of guy, why do you ask?
> >>>>>>>> 
> >>>>>>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
> >>>>>>> 
> >>>>>>> On the other hand, what I’d want is to nuke the config option altogether or make it default y, we want to catch issues sooner than later.
> >>>>>> 
> >>>>>> That might be what we do at some point, but one thing at a time.  Let's
> >>>>>> not penalize innocent bystanders, at least not just yet.
> >>>>> 
> >>>>> It’s a trade off, I thought that’s why we wanted to have the binary search stuff. If no one reports issue on Linux-next, then that code won’t be put to use in the near future at least.
> >>>> 
> >>>> Well, not to put too fine a point on it, but we currently really are
> >>>> exposing -next to lazy call_rcu().  ;-)
> >>> 
> >>> This is true. I think I assumed nobody will enable a default off config option but I probably meant a smaller percentage will.
> >>> 
> >>>>>> I do very strongly encourage the ChromeOS and Android folks to test this
> >>>>>> very severely, however.
> >>>>> 
> >>>>> Agreed. Yes that will happen, though I have to make a note for Android folks other than Vlad, to backports these (and enable the config option), carefully! Especially on pre-5.15 kernels. Luckily I had to do this (not so trivial) exercise myself.
> >>>> 
> >>>> And this is another situation in which the binary search stuff may prove
> >>>> extremely useful.
> >>> 
> >>> Agreed. Thanks. Very least I owe per-rdp splitting of the hashtable, to that code.  Steven and me talked today that probably the hashtable can go into the rcu_segcblist itself, and protect it by the nocb lock.
> >> 
> >> I have to ask...
> >> 
> >> How does this fit in with CPU-hotplug and callback migration?
> > 
> > Yes it will require change and I already thought of that, have to update the hashtable on all such events.
> > 
> >> More to the point, what events would cause us to decide that this is
> >> required?  For example, shouldn't we give your current binary-search
> >> code at least a few chances to save the day?
> > 
> > Totally, if you’re taking the patch as is, I would be very happy. And I’ll continue to improve it with the above. But I was not sure yet if you’re taking it.
> > 
> > I think it’s a worthwhile to take it for mainline in the current state and I’ll also add more data about callbacks to it in future (queuing time of callback, etc) — basically all the stuff I wanted to add to rcu_head.
> > 
> > One reason for the above proposal is I also want to keep it turned on in production, and the current solution cannot be, due to the global locking and is not expected to be kept on in production. But is still a worthwhile addition for debug kernels IMO.
> 
> I realized while talking to Steve that the hashtable has to be per CPU if we are to store more than a lazy flag, such as queuing timestamps. This is because you can have multiple callbacks of the same function pointer queued on multiple CPUs. So you can have multiple timestamps to store. Same thing if we stored automata. It’s per callback instance, not per callback function.

Agreed, to be useful, this must be per callback instance.

							Thanx, Paul

> Thanks,
> 
>  - Joel
> 
> 
> > 
> > Thanks,
> > 
> > - Joel
> > 
> > 
> >>                           Thanx, Paul
> >> 
> >>>>>>>>> +}
> >>>>>>>>> +EXPORT_SYMBOL_GPL(call_rcu);
> >>>>>>>>> +#endif
> >>>>>>>>> 
> >>>>>>>>> /* Maximum number of jiffies to wait before draining a batch. */
> >>>>>>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
> >>>>>>>>> -- 
> >>>>>>>>> 2.25.1
> >>>>>>>>>

Patch

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index abc615808b6e..97ef602da3d5 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2839,7 +2839,6 @@  void call_rcu_flush(struct rcu_head *head, rcu_callback_t func)
 	return __call_rcu_common(head, func, false);
 }
 EXPORT_SYMBOL_GPL(call_rcu_flush);
-#endif
 
 /**
  * call_rcu() - Queue an RCU callback for invocation after a grace period.
@@ -2890,6 +2889,13 @@  void call_rcu(struct rcu_head *head, rcu_callback_t func)
 	return __call_rcu_common(head, func, true);
 }
 EXPORT_SYMBOL_GPL(call_rcu);
+#else
+void call_rcu(struct rcu_head *head, rcu_callback_t func)
+{
+	return __call_rcu_common(head, func, false);
+}
+EXPORT_SYMBOL_GPL(call_rcu);
+#endif
 
 /* Maximum number of jiffies to wait before draining a batch. */
 #define KFREE_DRAIN_JIFFIES (5 * HZ)