
mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

Message ID 20200906114321.16493-1-mateusznosek0@gmail.com (mailing list archive)
State New, archived
Series mm/mmu_notifier.c: micro-optimization substitute kzalloc with kmalloc

Commit Message

Mateusz Nosek Sept. 6, 2020, 11:43 a.m. UTC
From: Mateusz Nosek <mateusznosek0@gmail.com>

Most fields in the struct pointed to by 'subscriptions' are initialized
explicitly after the allocation. Changing kzalloc() to kmalloc() avoids the
implicit memset(); as the only new code consists of two simple memory writes,
performance improves slightly.

Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
---
 mm/mmu_notifier.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
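
For context, kzalloc() is simply kmalloc() plus zeroing; include/linux/slab.h
defines it (roughly) as:

static inline void *kzalloc(size_t size, gfp_t flags)
{
	/* __GFP_ZERO makes the allocator zero the block before returning
	 * it -- this is the memset() cost the patch tries to avoid. */
	return kmalloc(size, flags | __GFP_ZERO);
}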

Comments

Mike Rapoport Sept. 6, 2020, 2:26 p.m. UTC | #1
Hi,

On Sun, Sep 06, 2020 at 01:43:21PM +0200, mateusznosek0@gmail.com wrote:
> From: Mateusz Nosek <mateusznosek0@gmail.com>
> 
> Most fields in the struct pointed to by 'subscriptions' are initialized
> explicitly after the allocation. Changing kzalloc() to kmalloc() avoids the
> implicit memset(); as the only new code consists of two simple memory writes,
> performance improves slightly.

Is there a measurable performance increase?

__mmu_notifier_register() is not called frequently enough to justify trading
the robustness of kzalloc() for a slight (if visible at all) performance gain.

> Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
> ---
>  mm/mmu_notifier.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 4fc918163dd3..190e198dc5be 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>  		 * know that mm->notifier_subscriptions can't change while we
>  		 * hold the write side of the mmap_lock.
>  		 */
> -		subscriptions = kzalloc(
> +		subscriptions = kmalloc(
>  			sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
>  		if (!subscriptions)
>  			return -ENOMEM;
> @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>  		subscriptions->itree = RB_ROOT_CACHED;
>  		init_waitqueue_head(&subscriptions->wq);
>  		INIT_HLIST_HEAD(&subscriptions->deferred_list);
> +		subscriptions->active_invalidate_ranges = 0;
> +		subscriptions->has_itree = false;
>  	}
>  
>  	ret = mm_take_all_locks(mm);
> -- 
> 2.20.1
> 
>
Mateusz Nosek Sept. 6, 2020, 4:06 p.m. UTC | #2
Hi,

I performed simple benchmarks using a custom kernel module with the code
fragment in question copied into it in both versions. For 1k, 10k and 100k
iterations the average time for the kzalloc version was 5.1 and for the
kmalloc version 3.9, at each iteration count.
The time was measured using the ktime_get() function and the results given
here are in ktime_t units.
The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
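
The thread does not include the module itself; a minimal sketch of the kind of
benchmark described might look like this (names and the stand-in struct are
hypothetical, since struct mmu_notifier_subscriptions is private to
mm/mmu_notifier.c):

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/ktime.h>

/* Stand-in of similar size for the private struct. */
struct bench_obj {
	unsigned long fields[8];
};

static int __init bench_init(void)
{
	ktime_t start, total = 0;
	int i;

	for (i = 0; i < 100000; i++) {
		struct bench_obj *obj;

		start = ktime_get();
		/* The kmalloc() variant would also time the two
		 * explicit field stores from the patch. */
		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		total = ktime_add(total, ktime_sub(ktime_get(), start));
		kfree(obj);	/* kfree(NULL) is safe */
	}
	pr_info("bench: avg %lld per allocation\n",
		ktime_divns(total, 100000));
	return 0;
}

static void __exit bench_exit(void)
{
}

module_init(bench_init);
module_exit(bench_exit);
MODULE_LICENSE("GPL");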

The performance increase is there, but as you wrote it is probably not
really noticeable.
I have found three other places in the kernel with similar kzalloc-related
patterns, none of which seems to be 'hot' code.
I leave the decision of whether this patch, and the potential others I would
send regarding this issue, are worth applying to the community and maintainers.

Best regards,
Mateusz Nosek

On 9/6/2020 4:26 PM, Mike Rapoport wrote:
> Hi,
> 
> On Sun, Sep 06, 2020 at 01:43:21PM +0200, mateusznosek0@gmail.com wrote:
>> From: Mateusz Nosek <mateusznosek0@gmail.com>
>>
>> Most fields in the struct pointed to by 'subscriptions' are initialized
>> explicitly after the allocation. Changing kzalloc() to kmalloc() avoids the
>> implicit memset(); as the only new code consists of two simple memory
>> writes, performance improves slightly.
> 
> Is there a measurable performance increase?
> 
> __mmu_notifier_register() is not called frequently enough to justify trading
> the robustness of kzalloc() for a slight (if visible at all) performance gain.
> 
>> Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
>> ---
>>   mm/mmu_notifier.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
>> index 4fc918163dd3..190e198dc5be 100644
>> --- a/mm/mmu_notifier.c
>> +++ b/mm/mmu_notifier.c
>> @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>>   		 * know that mm->notifier_subscriptions can't change while we
>>   		 * hold the write side of the mmap_lock.
>>   		 */
>> -		subscriptions = kzalloc(
>> +		subscriptions = kmalloc(
>>   			sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
>>   		if (!subscriptions)
>>   			return -ENOMEM;
>> @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
>>   		subscriptions->itree = RB_ROOT_CACHED;
>>   		init_waitqueue_head(&subscriptions->wq);
>>   		INIT_HLIST_HEAD(&subscriptions->deferred_list);
>> +		subscriptions->active_invalidate_ranges = 0;
>> +		subscriptions->has_itree = false;
>>   	}
>>   
>>   	ret = mm_take_all_locks(mm);
>> -- 
>> 2.20.1
>>
>>
>
Mike Rapoport Sept. 8, 2020, 6:42 a.m. UTC | #3
On Sun, Sep 06, 2020 at 06:06:39PM +0200, Mateusz Nosek wrote:
> Hi,
> 
> I performed simple benchmarks using a custom kernel module with the code
> fragment in question copied into it in both versions. For 1k, 10k and 100k
> iterations the average time for the kzalloc version was 5.1 and for the
> kmalloc version 3.9, at each iteration count.
> The time was measured using the ktime_get() function and the results given
> here are in ktime_t units.
> The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
> 
> The performance increase is there, but as you wrote it is probably not
> really noticeable.

I don't think that saving a few cycles of memset() in a function that is
called only on the initialization path, and only in very particular cases, is
worth risking an uninitialized field when somebody adds a new member to
'struct mmu_notifier_subscriptions' and forgets to explicitly set it.
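
To illustrate the hazard (the 'new_flag' member below is hypothetical):

		subscriptions = kmalloc(
			sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
		...
		subscriptions->active_invalidate_ranges = 0;
		subscriptions->has_itree = false;
		/* Suppose a later patch adds 'bool new_flag' to the struct
		 * but forgets the matching store here: with kmalloc() the
		 * field holds whatever garbage the allocator returned, while
		 * kzalloc() would have reliably zeroed it. */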

> I have found three other places in the kernel with similar kzalloc-related
> patterns, none of which seems to be 'hot' code.
> I leave the decision of whether this patch, and the potential others I would
> send regarding this issue, are worth applying to the community and maintainers.
> 
> Best regards,
> Mateusz Nosek
> 
> On 9/6/2020 4:26 PM, Mike Rapoport wrote:
> > Hi,
> > 
> > On Sun, Sep 06, 2020 at 01:43:21PM +0200, mateusznosek0@gmail.com wrote:
> > > From: Mateusz Nosek <mateusznosek0@gmail.com>
> > > 
> > > Most fields in the struct pointed to by 'subscriptions' are initialized
> > > explicitly after the allocation. Changing kzalloc() to kmalloc() avoids
> > > the implicit memset(); as the only new code consists of two simple
> > > memory writes, performance improves slightly.
> > 
> > Is there a measurable performance increase?
> > 
> > __mmu_notifier_register() is not called frequently enough to justify
> > trading the robustness of kzalloc() for a slight (if visible at all)
> > performance gain.
> > 
> > > Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
> > > ---
> > >   mm/mmu_notifier.c | 4 +++-
> > >   1 file changed, 3 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > > index 4fc918163dd3..190e198dc5be 100644
> > > --- a/mm/mmu_notifier.c
> > > +++ b/mm/mmu_notifier.c
> > > @@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> > >   		 * know that mm->notifier_subscriptions can't change while we
> > >   		 * hold the write side of the mmap_lock.
> > >   		 */
> > > -		subscriptions = kzalloc(
> > > +		subscriptions = kmalloc(
> > >   			sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
> > >   		if (!subscriptions)
> > >   			return -ENOMEM;
> > > @@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
> > >   		subscriptions->itree = RB_ROOT_CACHED;
> > >   		init_waitqueue_head(&subscriptions->wq);
> > >   		INIT_HLIST_HEAD(&subscriptions->deferred_list);
> > > +		subscriptions->active_invalidate_ranges = 0;
> > > +		subscriptions->has_itree = false;
> > >   	}
> > > 
> > >   	ret = mm_take_all_locks(mm);
> > > -- 
> > > 2.20.1
> > > 
> > > 
> >
Jason Gunthorpe Sept. 8, 2020, 11:32 p.m. UTC | #4
On Tue, Sep 08, 2020 at 09:42:45AM +0300, Mike Rapoport wrote:
> On Sun, Sep 06, 2020 at 06:06:39PM +0200, Mateusz Nosek wrote:
> > Hi,
> > 
> > I performed simple benchmarks using a custom kernel module with the code
> > fragment in question copied into it in both versions. For 1k, 10k and 100k
> > iterations the average time for the kzalloc version was 5.1 and for the
> > kmalloc version 3.9, at each iteration count.
> > The time was measured using the ktime_get() function and the results given
> > here are in ktime_t units.
> > The machine I used has a 4-core Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz.
> > 
> > The performance increase is there, but as you wrote it is probably not
> > really noticeable.
> 
> I don't think that saving a few cycles of memset() in a function that is
> called only on the initialization path, and only in very particular cases,
> is worth risking an uninitialized field when somebody adds a new member to
> 'struct mmu_notifier_subscriptions' and forgets to explicitly set it.

Indeed, it is not a common path; it is already very expensive if code is
running here (e.g. it does mm_take_all_locks()).

So there is no reason at all to optimize this and risk problems down
the road.

Jason

Patch

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 4fc918163dd3..190e198dc5be 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -625,7 +625,7 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
 		 * know that mm->notifier_subscriptions can't change while we
 		 * hold the write side of the mmap_lock.
 		 */
-		subscriptions = kzalloc(
+		subscriptions = kmalloc(
 			sizeof(struct mmu_notifier_subscriptions), GFP_KERNEL);
 		if (!subscriptions)
 			return -ENOMEM;
@@ -636,6 +636,8 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
 		subscriptions->itree = RB_ROOT_CACHED;
 		init_waitqueue_head(&subscriptions->wq);
 		INIT_HLIST_HEAD(&subscriptions->deferred_list);
+		subscriptions->active_invalidate_ranges = 0;
+		subscriptions->has_itree = false;
 	}
 
 	ret = mm_take_all_locks(mm);