[v3,1/3] mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()

Message ID 20231009064230.2952396-2-surenb@google.com (mailing list archive)
State New
Series userfaultfd move option

Commit Message

Suren Baghdasaryan Oct. 9, 2023, 6:42 a.m. UTC
From: Andrea Arcangeli <aarcange@redhat.com>

So far, folio_move_anon_rmap() has only been used to move a folio to a
different anon_vma after fork(), where the root anon_vma stayed
unchanged. For that, it was sufficient to hold the folio lock when
calling folio_move_anon_rmap().

However, we want to make use of folio_move_anon_rmap() to move folios
between VMAs that have a different root anon_vma. As folio_referenced()
performs an RMAP walk without holding the folio lock but only holding the
anon_vma in read mode, holding the folio lock is insufficient.

When moving to an anon_vma with a different root anon_vma, we'll have to
hold both the folio lock and the anon_vma lock in write mode.
Consequently, whenever folio_lock_anon_vma_read() succeeds in
read-locking the anon_vma, we have to re-check whether the mapping
changed in the meantime. If it did, we have to retry.

Note that folio_move_anon_rmap() must only be called if the anon page is
exclusive to a process, and must not be called on KSM folios.

This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
the anon_vma lock in write mode, and the mmap_lock in read mode.
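
For illustration, such a caller is expected to take the locks roughly as
follows (a simplified sketch only, not part of this patch: error handling
is omitted and mm, folio, dst_vma and src_anon_vma are assumed from the
surrounding context):

	mmap_read_lock(mm);
	folio_lock(folio);
	/* only an exclusive, non-KSM anon folio may be moved */
	VM_WARN_ON_ONCE(!folio_test_anon(folio) || folio_test_ksm(folio));
	VM_WARN_ON_ONCE(!PageAnonExclusive(&folio->page));
	src_anon_vma = folio_get_anon_vma(folio);
	/* write-locks the root anon_vma, excluding concurrent rmap walks */
	anon_vma_lock_write(src_anon_vma);
	/* rewrites folio->mapping to dst_vma's anon_vma */
	folio_move_anon_rmap(folio, dst_vma);
	anon_vma_unlock_write(src_anon_vma);
	put_anon_vma(src_anon_vma);
	folio_unlock(folio);
	mmap_read_unlock(mm);

An rmap walker may have sampled folio->mapping just before such a move,
which is why folio_lock_anon_vma_read() below re-checks it after taking
the lock.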

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/rmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

Comments

Peter Xu Oct. 12, 2023, 10:01 p.m. UTC | #1
On Sun, Oct 08, 2023 at 11:42:26PM -0700, Suren Baghdasaryan wrote:
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> So far, folio_move_anon_rmap() has only been used to move a folio to a
> different anon_vma after fork(), where the root anon_vma stayed
> unchanged. For that, it was sufficient to hold the folio lock when
> calling folio_move_anon_rmap().
> 
> However, we want to make use of folio_move_anon_rmap() to move folios
> between VMAs that have a different root anon_vma. As folio_referenced()
> performs an RMAP walk without holding the folio lock but only holding the
> anon_vma in read mode, holding the folio lock is insufficient.
> 
> When moving to an anon_vma with a different root anon_vma, we'll have to
> hold both the folio lock and the anon_vma lock in write mode.
> Consequently, whenever folio_lock_anon_vma_read() succeeds in
> read-locking the anon_vma, we have to re-check whether the mapping
> changed in the meantime. If it did, we have to retry.
> 
> Note that folio_move_anon_rmap() must only be called if the anon page is
> exclusive to a process, and must not be called on KSM folios.
> 
> This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
> the anon_vma lock in write mode, and the mmap_lock in read mode.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  mm/rmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index c1f11c9dbe61..f9ddc50269d2 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
>  	struct anon_vma *root_anon_vma;
>  	unsigned long anon_mapping;
>  
> +retry:
>  	rcu_read_lock();
> +retry_under_rcu:
>  	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
>  	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
>  		goto out;
> @@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
>  	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
>  	root_anon_vma = READ_ONCE(anon_vma->root);
>  	if (down_read_trylock(&root_anon_vma->rwsem)) {
> +		/*
> +		 * folio_move_anon_rmap() might have changed the anon_vma as we
> +		 * might not hold the folio lock here.
> +		 */
> +		if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
> +			     anon_mapping)) {
> +			up_read(&root_anon_vma->rwsem);
> +			goto retry_under_rcu;

Is adding this specific label worthwhile?  How about rcu unlock and goto
retry (then it'll also be clear that we won't hold rcu read lock for
unpredictable time)?
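
I.e., something along these lines (untested sketch of the suggested
variant):

		if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
			     anon_mapping)) {
			up_read(&root_anon_vma->rwsem);
			rcu_read_unlock();
			goto retry;
		}

with the "retry:" label kept before rcu_read_lock() and the extra
"retry_under_rcu:" label dropped.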

> +		}
> +
>  		/*
>  		 * If the folio is still mapped, then this anon_vma is still
>  		 * its anon_vma, and holding the mutex ensures that it will

David Hildenbrand Oct. 13, 2023, 8:04 a.m. UTC | #2
On 13.10.23 00:01, Peter Xu wrote:
> On Sun, Oct 08, 2023 at 11:42:26PM -0700, Suren Baghdasaryan wrote:
>> From: Andrea Arcangeli <aarcange@redhat.com>
>>
>> So far, folio_move_anon_rmap() has only been used to move a folio to a
>> different anon_vma after fork(), where the root anon_vma stayed
>> unchanged. For that, it was sufficient to hold the folio lock when
>> calling folio_move_anon_rmap().
>>
>> However, we want to make use of folio_move_anon_rmap() to move folios
>> between VMAs that have a different root anon_vma. As folio_referenced()
>> performs an RMAP walk without holding the folio lock but only holding the
>> anon_vma in read mode, holding the folio lock is insufficient.
>>
>> When moving to an anon_vma with a different root anon_vma, we'll have to
>> hold both the folio lock and the anon_vma lock in write mode.
>> Consequently, whenever folio_lock_anon_vma_read() succeeds in
>> read-locking the anon_vma, we have to re-check whether the mapping
>> changed in the meantime. If it did, we have to retry.
>>
>> Note that folio_move_anon_rmap() must only be called if the anon page is
>> exclusive to a process, and must not be called on KSM folios.
>>
>> This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
>> the anon_vma lock in write mode, and the mmap_lock in read mode.
>>
>> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
>> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>> ---
>>   mm/rmap.c | 24 ++++++++++++++++++++++++
>>   1 file changed, 24 insertions(+)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index c1f11c9dbe61..f9ddc50269d2 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
>>   	struct anon_vma *root_anon_vma;
>>   	unsigned long anon_mapping;
>>   
>> +retry:
>>   	rcu_read_lock();
>> +retry_under_rcu:
>>   	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
>>   	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
>>   		goto out;
>> @@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
>>   	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
>>   	root_anon_vma = READ_ONCE(anon_vma->root);
>>   	if (down_read_trylock(&root_anon_vma->rwsem)) {
>> +		/*
>> +		 * folio_move_anon_rmap() might have changed the anon_vma as we
>> +		 * might not hold the folio lock here.
>> +		 */
>> +		if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
>> +			     anon_mapping)) {
>> +			up_read(&root_anon_vma->rwsem);
>> +			goto retry_under_rcu;
> 
> Is adding this specific label worthwhile?  How about rcu unlock and goto
> retry (then it'll also be clear that we won't hold rcu read lock for
> unpredictable time)?

+1, sounds good to me

Suren Baghdasaryan Oct. 19, 2023, 3:19 p.m. UTC | #3
On Fri, Oct 13, 2023 at 1:04 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 13.10.23 00:01, Peter Xu wrote:
> > On Sun, Oct 08, 2023 at 11:42:26PM -0700, Suren Baghdasaryan wrote:
> >> From: Andrea Arcangeli <aarcange@redhat.com>
> >>
> >> So far, folio_move_anon_rmap() has only been used to move a folio to a
> >> different anon_vma after fork(), where the root anon_vma stayed
> >> unchanged. For that, it was sufficient to hold the folio lock when
> >> calling folio_move_anon_rmap().
> >>
> >> However, we want to make use of folio_move_anon_rmap() to move folios
> >> between VMAs that have a different root anon_vma. As folio_referenced()
> >> performs an RMAP walk without holding the folio lock but only holding the
> >> anon_vma in read mode, holding the folio lock is insufficient.
> >>
> >> When moving to an anon_vma with a different root anon_vma, we'll have to
> >> hold both the folio lock and the anon_vma lock in write mode.
> >> Consequently, whenever folio_lock_anon_vma_read() succeeds in
> >> read-locking the anon_vma, we have to re-check whether the mapping
> >> changed in the meantime. If it did, we have to retry.
> >>
> >> Note that folio_move_anon_rmap() must only be called if the anon page is
> >> exclusive to a process, and must not be called on KSM folios.
> >>
> >> This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
> >> the anon_vma lock in write mode, and the mmap_lock in read mode.
> >>
> >> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> >> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >> ---
> >>   mm/rmap.c | 24 ++++++++++++++++++++++++
> >>   1 file changed, 24 insertions(+)
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index c1f11c9dbe61..f9ddc50269d2 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
> >>      struct anon_vma *root_anon_vma;
> >>      unsigned long anon_mapping;
> >>
> >> +retry:
> >>      rcu_read_lock();
> >> +retry_under_rcu:
> >>      anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
> >>      if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> >>              goto out;
> >> @@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
> >>      anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> >>      root_anon_vma = READ_ONCE(anon_vma->root);
> >>      if (down_read_trylock(&root_anon_vma->rwsem)) {
> >> +            /*
> >> +             * folio_move_anon_rmap() might have changed the anon_vma as we
> >> +             * might not hold the folio lock here.
> >> +             */
> >> +            if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
> >> +                         anon_mapping)) {
> >> +                    up_read(&root_anon_vma->rwsem);
> >> +                    goto retry_under_rcu;
> >
> > Is adding this specific label worthwhile?  How about rcu unlock and goto
> > retry (then it'll also be clear that we won't hold rcu read lock for
> > unpredictable time)?
>
> +1, sounds good to me

Sorry for the delay, I was travelling for a week.

I was hesitant about RCU unlocking and then immediately re-locking but
your point about holding it for unpredictable time makes sense. Will
change. Thanks!

>
> --
> Cheers,
>
> David / dhildenb
>

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index c1f11c9dbe61..f9ddc50269d2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,7 +542,9 @@  struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	struct anon_vma *root_anon_vma;
 	unsigned long anon_mapping;
 
+retry:
 	rcu_read_lock();
+retry_under_rcu:
 	anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		goto out;
@@ -552,6 +554,16 @@  struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	root_anon_vma = READ_ONCE(anon_vma->root);
 	if (down_read_trylock(&root_anon_vma->rwsem)) {
+		/*
+		 * folio_move_anon_rmap() might have changed the anon_vma as we
+		 * might not hold the folio lock here.
+		 */
+		if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+			     anon_mapping)) {
+			up_read(&root_anon_vma->rwsem);
+			goto retry_under_rcu;
+		}
+
 		/*
 		 * If the folio is still mapped, then this anon_vma is still
 		 * its anon_vma, and holding the mutex ensures that it will
@@ -586,6 +598,18 @@  struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 	rcu_read_unlock();
 	anon_vma_lock_read(anon_vma);
 
+	/*
+	 * folio_move_anon_rmap() might have changed the anon_vma as we might
+	 * not hold the folio lock here.
+	 */
+	if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+		     anon_mapping)) {
+		anon_vma_unlock_read(anon_vma);
+		put_anon_vma(anon_vma);
+		anon_vma = NULL;
+		goto retry;
+	}
+
 	if (atomic_dec_and_test(&anon_vma->refcount)) {
 		/*
 		 * Oops, we held the last refcount, release the lock