diff mbox series

mm/vma: correct legacy comment in vm_[un]lock_anon_vma()

Message ID 20250401094346.25200-1-richard.weiyang@gmail.com (mailing list archive)
State New
Series mm/vma: correct legacy comment in vm_[un]lock_anon_vma()

Commit Message

Wei Yang April 1, 2025, 9:43 a.m. UTC
Commit bf181b9f9d8d ("mm anon rmap: replace same_anon_vma linked
list with an interval tree.") replaced anon_vma.same_anon_vma with an
interval tree.

But the related comments were left behind. Correct them here.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Jann Horn <jannh@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michel Lespinasse <michel@lespinasse.org>
---
 mm/vma.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

Comments

Lorenzo Stoakes April 1, 2025, 9:55 a.m. UTC | #1
On Tue, Apr 01, 2025 at 09:43:46AM +0000, Wei Yang wrote:
> In commit bf181b9f9d8d ("mm anon rmap: replace same_anon_vma linked
> list with an interval tree."), the anon_vma.same_anon_vma is replaced by
> interval tree.
>
> But the related comment is left behind. Correct it here.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

Sorry, but I'd rather we didn't, this is un-useful churn, and I am planning to
make fairly wide-ranging alterations to anon_vma going forward, at which point
things like this can be addressed if needed.

Also, not to sound mean, but we have repeatedly asked you not to submit
these kinds of small 'fix up' patches. This one is relatively benign as
it's a comment change only, but previous ones you have made have been
directly problematic for us.

So to reiterate - please stop sending patches like this. Core mm is not the
place for them.

Thanks!

> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Jann Horn <jannh@google.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Michel Lespinasse <michel@lespinasse.org>
> ---
>  mm/vma.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index 5cdc5612bfc1..1a155031f1fb 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -1989,16 +1989,16 @@ static void vm_lock_anon_vma(struct mm_struct *mm, struct anon_vma *anon_vma)
>  {
>  	if (!test_bit(0, (unsigned long *) &anon_vma->root->rb_root.rb_root.rb_node)) {
>  		/*
> -		 * The LSB of head.next can't change from under us
> +		 * The LSB of rb_root.rb_node can't change from under us
>  		 * because we hold the mm_all_locks_mutex.
>  		 */
>  		down_write_nest_lock(&anon_vma->root->rwsem, &mm->mmap_lock);
>  		/*
> -		 * We can safely modify head.next after taking the
> +		 * We can safely modify rb_root.rb_node after taking the
>  		 * anon_vma->root->rwsem. If some other vma in this mm shares
>  		 * the same anon_vma we won't take it again.
>  		 *
> -		 * No need of atomic instructions here, head.next
> +		 * No need of atomic instructions here, rb_root.rb_node
>  		 * can't change from under us thanks to the
>  		 * anon_vma->root->rwsem.
>  		 */
> @@ -2124,14 +2124,14 @@ static void vm_unlock_anon_vma(struct anon_vma *anon_vma)
>  {
>  	if (test_bit(0, (unsigned long *) &anon_vma->root->rb_root.rb_root.rb_node)) {
>  		/*
> -		 * The LSB of head.next can't change to 0 from under
> +		 * The LSB of rb_root.rb_node can't change to 0 from under
>  		 * us because we hold the mm_all_locks_mutex.
>  		 *
>  		 * We must however clear the bitflag before unlocking
>  		 * the vma so the users using the anon_vma->rb_root will
>  		 * never see our bitflag.
>  		 *
> -		 * No need of atomic instructions here, head.next
> +		 * No need of atomic instructions here, rb_root.rb_node
>  		 * can't change from under us until we release the
>  		 * anon_vma->root->rwsem.
>  		 */
> --
> 2.34.1
>
>
Wei Yang April 1, 2025, 11:46 a.m. UTC | #2
On Tue, Apr 01, 2025 at 10:55:16AM +0100, Lorenzo Stoakes wrote:
>On Tue, Apr 01, 2025 at 09:43:46AM +0000, Wei Yang wrote:
>> In commit bf181b9f9d8d ("mm anon rmap: replace same_anon_vma linked
>> list with an interval tree."), the anon_vma.same_anon_vma is replaced by
>> interval tree.
>>
>> But the related comment is left behind. Correct it here.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>
>Sorry, but I'd rather we didn't, this is un-useful churn, and I am planning to
>make fairly wide-ranging alterations to anon_vma going forward, at which point
>things like this can be addressed if needed.
>

Would you mind sharing more about your planned anon_vma alterations?

>Also, not to sound mean, but we have repeatedly asked you not to submit
>these kinds of small 'fix up' patches. This one is relatively benign as
>it's a comment change only, but previous ones you have made have been
>directly problematic for us.
>
>So to reiterate - please stop sending patches like this. Core mm is not the
>place for them.
>
>Thanks!
>
>> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Cc: Jann Horn <jannh@google.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Michel Lespinasse <michel@lespinasse.org>
>> ---
>>  mm/vma.c | 10 +++++-----
>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/vma.c b/mm/vma.c
>> index 5cdc5612bfc1..1a155031f1fb 100644
>> --- a/mm/vma.c
>> +++ b/mm/vma.c
>> @@ -1989,16 +1989,16 @@ static void vm_lock_anon_vma(struct mm_struct *mm, struct anon_vma *anon_vma)
>>  {
>>  	if (!test_bit(0, (unsigned long *) &anon_vma->root->rb_root.rb_root.rb_node)) {
>>  		/*
>> -		 * The LSB of head.next can't change from under us
>> +		 * The LSB of rb_root.rb_node can't change from under us
>>  		 * because we hold the mm_all_locks_mutex.
>>  		 */
>>  		down_write_nest_lock(&anon_vma->root->rwsem, &mm->mmap_lock);
>>  		/*
>> -		 * We can safely modify head.next after taking the
>> +		 * We can safely modify rb_root.rb_node after taking the
>>  		 * anon_vma->root->rwsem. If some other vma in this mm shares
>>  		 * the same anon_vma we won't take it again.
>>  		 *
>> -		 * No need of atomic instructions here, head.next
>> +		 * No need of atomic instructions here, rb_root.rb_node
>>  		 * can't change from under us thanks to the
>>  		 * anon_vma->root->rwsem.
>>  		 */
>> @@ -2124,14 +2124,14 @@ static void vm_unlock_anon_vma(struct anon_vma *anon_vma)
>>  {
>>  	if (test_bit(0, (unsigned long *) &anon_vma->root->rb_root.rb_root.rb_node)) {
>>  		/*
>> -		 * The LSB of head.next can't change to 0 from under
>> +		 * The LSB of rb_root.rb_node can't change to 0 from under
>>  		 * us because we hold the mm_all_locks_mutex.
>>  		 *
>>  		 * We must however clear the bitflag before unlocking
>>  		 * the vma so the users using the anon_vma->rb_root will
>>  		 * never see our bitflag.
>>  		 *
>> -		 * No need of atomic instructions here, head.next
>> +		 * No need of atomic instructions here, rb_root.rb_node
>>  		 * can't change from under us until we release the
>>  		 * anon_vma->root->rwsem.
>>  		 */
>> --
>> 2.34.1
>>
>>
Lorenzo Stoakes April 1, 2025, 12:11 p.m. UTC | #3
On Tue, Apr 01, 2025 at 11:46:29AM +0000, Wei Yang wrote:
> On Tue, Apr 01, 2025 at 10:55:16AM +0100, Lorenzo Stoakes wrote:
> >On Tue, Apr 01, 2025 at 09:43:46AM +0000, Wei Yang wrote:
> >> In commit bf181b9f9d8d ("mm anon rmap: replace same_anon_vma linked
> >> list with an interval tree."), the anon_vma.same_anon_vma is replaced by
> >> interval tree.
> >>
> >> But the related comment is left behind. Correct it here.
> >>
> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> >
> >Sorry, but I'd rather we didn't, this is un-useful churn, and I am planning to
> >make fairly wide-ranging alterations to anon_vma going forward, at which point
> >things like this can be addressed if needed.
> >
>
> Would you mind telling more about your plan on anon_vma alteration?

Well ultimately finding some means of representing this better _in general_ :)

But you can see some early spring roots of this in the LWN article on my talk at
lsf/mm - https://lwn.net/Articles/1015762/

There's a bunch of patches in relation to this, the article links the
MREMAP_RELOCATE_ANON stuff but there's also a preparatory patch that improves
merging.

I plan to do quite a bit more with all of this.
Wei Yang April 1, 2025, 11:37 p.m. UTC | #4
On Tue, Apr 01, 2025 at 01:11:15PM +0100, Lorenzo Stoakes wrote:
>On Tue, Apr 01, 2025 at 11:46:29AM +0000, Wei Yang wrote:
>> On Tue, Apr 01, 2025 at 10:55:16AM +0100, Lorenzo Stoakes wrote:
>> >On Tue, Apr 01, 2025 at 09:43:46AM +0000, Wei Yang wrote:
>> >> In commit bf181b9f9d8d ("mm anon rmap: replace same_anon_vma linked
>> >> list with an interval tree."), the anon_vma.same_anon_vma is replaced by
>> >> interval tree.
>> >>
>> >> But the related comment is left behind. Correct it here.
>> >>
>> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> >
>> >Sorry, but I'd rather we didn't, this is un-useful churn, and I am planning to
>> >make fairly wide-ranging alterations to anon_vma going forward, at which point
>> >things like this can be addressed if needed.
>> >
>>
>> Would you mind telling more about your plan on anon_vma alteration?
>
>Well ultimately finding some means of representing this better _in general_ :)
>
>But you can see some early spring roots of this in the LWN article on my talk at
>lsf/mm - https://lwn.net/Articles/1015762/
>
>There's a bunch of patches in relation to this, the article links the
>MREMAP_RELOCATE_ANON stuff but there's also a preparatory patch that improves
>merging.
>

Thanks for the information. I will take a look.

>I plan to do quite a bit more with all of this.

Patch

diff --git a/mm/vma.c b/mm/vma.c
index 5cdc5612bfc1..1a155031f1fb 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1989,16 +1989,16 @@  static void vm_lock_anon_vma(struct mm_struct *mm, struct anon_vma *anon_vma)
 {
 	if (!test_bit(0, (unsigned long *) &anon_vma->root->rb_root.rb_root.rb_node)) {
 		/*
-		 * The LSB of head.next can't change from under us
+		 * The LSB of rb_root.rb_node can't change from under us
 		 * because we hold the mm_all_locks_mutex.
 		 */
 		down_write_nest_lock(&anon_vma->root->rwsem, &mm->mmap_lock);
 		/*
-		 * We can safely modify head.next after taking the
+		 * We can safely modify rb_root.rb_node after taking the
 		 * anon_vma->root->rwsem. If some other vma in this mm shares
 		 * the same anon_vma we won't take it again.
 		 *
-		 * No need of atomic instructions here, head.next
+		 * No need of atomic instructions here, rb_root.rb_node
 		 * can't change from under us thanks to the
 		 * anon_vma->root->rwsem.
 		 */
@@ -2124,14 +2124,14 @@  static void vm_unlock_anon_vma(struct anon_vma *anon_vma)
 {
 	if (test_bit(0, (unsigned long *) &anon_vma->root->rb_root.rb_root.rb_node)) {
 		/*
-		 * The LSB of head.next can't change to 0 from under
+		 * The LSB of rb_root.rb_node can't change to 0 from under
 		 * us because we hold the mm_all_locks_mutex.
 		 *
 		 * We must however clear the bitflag before unlocking
 		 * the vma so the users using the anon_vma->rb_root will
 		 * never see our bitflag.
 		 *
-		 * No need of atomic instructions here, head.next
+		 * No need of atomic instructions here, rb_root.rb_node
 		 * can't change from under us until we release the
 		 * anon_vma->root->rwsem.
 		 */