
padata: Honor the caller's alignment in case of chunk_size 0

Message ID 20240822-max-v1-1-cb4bc5b1c101@ti.com (mailing list archive)
State Accepted
Delegated to: Herbert Xu
Series: padata: Honor the caller's alignment in case of chunk_size 0

Commit Message

Kamlesh Gurudasani Aug. 21, 2024, 9:02 p.m. UTC
In the case where we force ps.chunk_size to be at least 1, we are
ignoring the caller's alignment.

Move the forcing of ps.chunk_size to be at least 1 before rounding it
up to the caller's alignment, so that the caller's alignment is honored.

While at it, use max() to force ps.chunk_size to be at least 1 to
improve readability.

Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
---
 kernel/padata.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)


---
base-commit: b311c1b497e51a628aa89e7cb954481e5f9dced2
change-id: 20240822-max-93c17adc6457

Best regards,

Comments

Andrew Morton Aug. 21, 2024, 10:41 p.m. UTC | #1
On Thu, 22 Aug 2024 02:32:52 +0530 Kamlesh Gurudasani <kamlesh@ti.com> wrote:

> In the case where we are forcing the ps.chunk_size to be at least 1,
> we are ignoring the caller's alignment.
> 
> Move the forcing of ps.chunk_size to be at least 1 before rounding it
> up to caller's alignment, so that caller's alignment is honored.
> 
> While at it, use max() to force the ps.chunk_size to be at least 1 to
> improve readability.

Please (as always) describe the userspace-visible runtime effects of
this change.  This helps others to determine which kernel(s) should be
patched.  And it helps others decide whether this fix might address an
issue which they are encountering.
Waiman Long Aug. 21, 2024, 11:30 p.m. UTC | #2
On 8/21/24 17:02, Kamlesh Gurudasani wrote:
> In the case where we are forcing the ps.chunk_size to be at least 1,
> we are ignoring the caller's alignment.
>
> Move the forcing of ps.chunk_size to be at least 1 before rounding it
> up to caller's alignment, so that caller's alignment is honored.
>
> While at it, use max() to force the ps.chunk_size to be at least 1 to
> improve readability.
>
> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
> Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
> ---
>   kernel/padata.c | 12 ++++--------
>   1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/padata.c b/kernel/padata.c
> index 0fa6c2895460..d8a51eff1581 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>   
>   	/*
>   	 * Chunk size is the amount of work a helper does per call to the
> -	 * thread function.  Load balance large jobs between threads by
> +	 * thread function. Load balance large jobs between threads by
>   	 * increasing the number of chunks, guarantee at least the minimum
>   	 * chunk size from the caller, and honor the caller's alignment.
> +	 * Ensure chunk_size is at least 1 to prevent divide-by-0
> +	 * panic in padata_mt_helper().
>   	 */
>   	ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
>   	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
> +	ps.chunk_size = max(ps.chunk_size, 1ul);
>   	ps.chunk_size = roundup(ps.chunk_size, job->align);
>   
> -	/*
> -	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
> -	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().`
> -	 */
> -	if (!ps.chunk_size)
> -		ps.chunk_size = 1U;
> -
>   	list_for_each_entry(pw, &works, pw_list)
>   		if (job->numa_aware) {
>   			int old_node = atomic_read(&last_used_nid);
>
> ---
> base-commit: b311c1b497e51a628aa89e7cb954481e5f9dced2
> change-id: 20240822-max-93c17adc6457

LGTM, my only nit is the use of "1ul", which is less common and harder to
read than "1UL", as the former may be misread as a "lul" variable.

Acked-by: Waiman Long <longman@redhat.com>
Daniel Jordan Aug. 23, 2024, 12:50 a.m. UTC | #3
On Thu, Aug 22, 2024 at 02:32:52AM GMT, Kamlesh Gurudasani wrote:
> In the case where we are forcing the ps.chunk_size to be at least 1,
> we are ignoring the caller's alignment.
> 
> Move the forcing of ps.chunk_size to be at least 1 before rounding it
> up to caller's alignment, so that caller's alignment is honored.
> 
> While at it, use max() to force the ps.chunk_size to be at least 1 to
> improve readability.
> 
> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")

Looks fine.

Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com>

> Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
> ---
>  kernel/padata.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/padata.c b/kernel/padata.c
> index 0fa6c2895460..d8a51eff1581 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>  
>  	/*
>  	 * Chunk size is the amount of work a helper does per call to the
> -	 * thread function.  Load balance large jobs between threads by
> +	 * thread function. Load balance large jobs between threads by

Though whitespace changes like these just add noise...
Kamlesh Gurudasani Aug. 25, 2024, 11:27 a.m. UTC | #4
Andrew Morton <akpm@linux-foundation.org> writes:

> On Thu, 22 Aug 2024 02:32:52 +0530 Kamlesh Gurudasani <kamlesh@ti.com> wrote:
>
>> In the case where we are forcing the ps.chunk_size to be at least 1,
>> we are ignoring the caller's alignment.
>> 
>> Move the forcing of ps.chunk_size to be at least 1 before rounding it
>> up to caller's alignment, so that caller's alignment is honored.
>> 
>> While at it, use max() to force the ps.chunk_size to be at least 1 to
>> improve readability.
>
> Please (as always) describe the userspace-visible runtime effects of
> this change.  This helps others to determine which kernel(s) should be
> patched.  And it helps others decide whether this fix might address an
> issue which they are encountering.

Thanks for the review, Andrew.
Honestly, I'm not sure whether the effects would be visible to userspace,
or even how to reproduce the issue.

I have fixed it according to the discussion here:
https://patchwork.kernel.org/project/linux-crypto/patch/20240806174647.1050398-1-longman@redhat.com/#25984314

Kamlesh
Kamlesh Gurudasani Aug. 25, 2024, 11:29 a.m. UTC | #5
Daniel Jordan <daniel.m.jordan@oracle.com> writes:

> On Thu, Aug 22, 2024 at 02:32:52AM GMT, Kamlesh Gurudasani wrote:
>> In the case where we are forcing the ps.chunk_size to be at least 1,
>> we are ignoring the caller's alignment.
>> 
>> Move the forcing of ps.chunk_size to be at least 1 before rounding it
>> up to caller's alignment, so that caller's alignment is honored.
>> 
>> While at it, use max() to force the ps.chunk_size to be at least 1 to
>> improve readability.
>> 
>> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
>
> Looks fine.
>
> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com>
>
>> Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
>> ---
>>  kernel/padata.c | 12 ++++--------
>>  1 file changed, 4 insertions(+), 8 deletions(-)
>> 
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index 0fa6c2895460..d8a51eff1581 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>>  
>>  	/*
>>  	 * Chunk size is the amount of work a helper does per call to the
>> -	 * thread function.  Load balance large jobs between threads by
>> +	 * thread function. Load balance large jobs between threads by
>
> Though whitespace changes like these just add noise...
Thanks for the Ack; I'll keep this in mind next time.

Kamlesh
Kamlesh Gurudasani Aug. 25, 2024, 11:34 a.m. UTC | #6
Waiman Long <longman@redhat.com> writes:

> On 8/21/24 17:02, Kamlesh Gurudasani wrote:
>> In the case where we are forcing the ps.chunk_size to be at least 1,
>> we are ignoring the caller's alignment.
>>
>> Move the forcing of ps.chunk_size to be at least 1 before rounding it
>> up to caller's alignment, so that caller's alignment is honored.
>>
>> While at it, use max() to force the ps.chunk_size to be at least 1 to
>> improve readability.
>>
>> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
>> Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
>> ---
>>   kernel/padata.c | 12 ++++--------
>>   1 file changed, 4 insertions(+), 8 deletions(-)
>>
>> diff --git a/kernel/padata.c b/kernel/padata.c
>> index 0fa6c2895460..d8a51eff1581 100644
>> --- a/kernel/padata.c
>> +++ b/kernel/padata.c
>> @@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>>   
>>   	/*
>>   	 * Chunk size is the amount of work a helper does per call to the
>> -	 * thread function.  Load balance large jobs between threads by
>> +	 * thread function. Load balance large jobs between threads by
>>   	 * increasing the number of chunks, guarantee at least the minimum
>>   	 * chunk size from the caller, and honor the caller's alignment.
>> +	 * Ensure chunk_size is at least 1 to prevent divide-by-0
>> +	 * panic in padata_mt_helper().
>>   	 */
>>   	ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
>>   	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
>> +	ps.chunk_size = max(ps.chunk_size, 1ul);
>>   	ps.chunk_size = roundup(ps.chunk_size, job->align);
>>   
>> -	/*
>> -	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
>> -	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().`
>> -	 */
>> -	if (!ps.chunk_size)
>> -		ps.chunk_size = 1U;
>> -
>>   	list_for_each_entry(pw, &works, pw_list)
>>   		if (job->numa_aware) {
>>   			int old_node = atomic_read(&last_used_nid);
>>
>> ---
>> base-commit: b311c1b497e51a628aa89e7cb954481e5f9dced2
>> change-id: 20240822-max-93c17adc6457
>
> LGTM, my only nit is the use of "1ul" which is less common and harder to 
> read than "1UL" as the former one may be misread as a "lul" variable.
>
> Acked-by:  Waiman Long <longman@redhat.com>
Thanks for the Acked-by, Waiman. I understand your point, but Daniel seems
to be okay with this, so I will keep it as is this time.

Cheers,
Kamlesh
Waiman Long Aug. 25, 2024, 3:57 p.m. UTC | #7
On 8/25/24 07:34, Kamlesh Gurudasani wrote:
> Waiman Long <longman@redhat.com> writes:
>
>> On 8/21/24 17:02, Kamlesh Gurudasani wrote:
>>> In the case where we are forcing the ps.chunk_size to be at least 1,
>>> we are ignoring the caller's alignment.
>>>
>>> Move the forcing of ps.chunk_size to be at least 1 before rounding it
>>> up to caller's alignment, so that caller's alignment is honored.
>>>
>>> While at it, use max() to force the ps.chunk_size to be at least 1 to
>>> improve readability.
>>>
>>> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
>>> Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
>>> ---
>>>    kernel/padata.c | 12 ++++--------
>>>    1 file changed, 4 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/kernel/padata.c b/kernel/padata.c
>>> index 0fa6c2895460..d8a51eff1581 100644
>>> --- a/kernel/padata.c
>>> +++ b/kernel/padata.c
>>> @@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>>>    
>>>    	/*
>>>    	 * Chunk size is the amount of work a helper does per call to the
>>> -	 * thread function.  Load balance large jobs between threads by
>>> +	 * thread function. Load balance large jobs between threads by
>>>    	 * increasing the number of chunks, guarantee at least the minimum
>>>    	 * chunk size from the caller, and honor the caller's alignment.
>>> +	 * Ensure chunk_size is at least 1 to prevent divide-by-0
>>> +	 * panic in padata_mt_helper().
>>>    	 */
>>>    	ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
>>>    	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
>>> +	ps.chunk_size = max(ps.chunk_size, 1ul);
>>>    	ps.chunk_size = roundup(ps.chunk_size, job->align);
>>>    
>>> -	/*
>>> -	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
>>> -	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().`
>>> -	 */
>>> -	if (!ps.chunk_size)
>>> -		ps.chunk_size = 1U;
>>> -
>>>    	list_for_each_entry(pw, &works, pw_list)
>>>    		if (job->numa_aware) {
>>>    			int old_node = atomic_read(&last_used_nid);
>>>
>>> ---
>>> base-commit: b311c1b497e51a628aa89e7cb954481e5f9dced2
>>> change-id: 20240822-max-93c17adc6457
>> LGTM, my only nit is the use of "1ul" which is less common and harder to
>> read than "1UL" as the former one may be misread as a "lul" variable.
>>
>> Acked-by:  Waiman Long <longman@redhat.com>
> Thanks for the Acked-by, Waiman. I understand your point, though Daniel seems
> to be okay with this, so will keep it as is this time.

This is just a suggestion in case you need to update your patch. I am 
fine with keeping it as is if no further update is needed.

Cheers,
Longman
Herbert Xu Aug. 30, 2024, 10:39 a.m. UTC | #8
On Thu, Aug 22, 2024 at 02:32:52AM +0530, Kamlesh Gurudasani wrote:
> In the case where we are forcing the ps.chunk_size to be at least 1,
> we are ignoring the caller's alignment.
> 
> Move the forcing of ps.chunk_size to be at least 1 before rounding it
> up to caller's alignment, so that caller's alignment is honored.
> 
> While at it, use max() to force the ps.chunk_size to be at least 1 to
> improve readability.
> 
> Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
> Signed-off-by: Kamlesh Gurudasani <kamlesh@ti.com>
> ---
>  kernel/padata.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)

Patch applied.  Thanks.

Patch

diff --git a/kernel/padata.c b/kernel/padata.c
index 0fa6c2895460..d8a51eff1581 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -509,21 +509,17 @@  void __init padata_do_multithreaded(struct padata_mt_job *job)
 
 	/*
 	 * Chunk size is the amount of work a helper does per call to the
-	 * thread function.  Load balance large jobs between threads by
+	 * thread function. Load balance large jobs between threads by
 	 * increasing the number of chunks, guarantee at least the minimum
 	 * chunk size from the caller, and honor the caller's alignment.
+	 * Ensure chunk_size is at least 1 to prevent divide-by-0
+	 * panic in padata_mt_helper().
 	 */
 	ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
 	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
+	ps.chunk_size = max(ps.chunk_size, 1ul);
 	ps.chunk_size = roundup(ps.chunk_size, job->align);
 
-	/*
-	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
-	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().`
-	 */
-	if (!ps.chunk_size)
-		ps.chunk_size = 1U;
-
 	list_for_each_entry(pw, &works, pw_list)
 		if (job->numa_aware) {
 			int old_node = atomic_read(&last_used_nid);