
[v3,5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio()

Message ID cf3c5615d98f4e690dad46b074933024b8469d37.1672043615.git.lstoakes@gmail.com (mailing list archive)
State New
Series update mlock to use folios

Commit Message

Lorenzo Stoakes Dec. 26, 2022, 8:44 a.m. UTC
We now pass folios to these functions, so update the documentation
accordingly.

Additionally, correct the outdated reference to __pagevec_lru_add_fn(); the
referenced action now occurs directly in __munlock_folio().

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 Documentation/mm/unevictable-lru.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

Comments

Vlastimil Babka Jan. 12, 2023, 11:04 a.m. UTC | #1
On 12/26/22 09:44, Lorenzo Stoakes wrote:
> We now pass folios to these functions, so update the documentation
> accordingly.
> 
> Additionally, correct the outdated reference to __pagevec_lru_add_fn(); the
> referenced action now occurs directly in __munlock_folio().
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

With:

> ---
>  Documentation/mm/unevictable-lru.rst | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
> index 4a0e158aa9ce..153629e0c100 100644
> --- a/Documentation/mm/unevictable-lru.rst
> +++ b/Documentation/mm/unevictable-lru.rst
> @@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
>  fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
>  
>  For each PTE (or PMD) being faulted into a VMA, the page add rmap function
> -calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
> +calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
>  (unless it is a PTE mapping of a part of a transparent huge page).  Or when
>  it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()

Think it would be more appropriate now:    ^ folio_add_lru_vma()

> -calls mlock_new_page() instead: similar to mlock_page(), but can make better
> +calls mlock_new_folio() instead: similar to mlock_folio(), but can make better
>  judgments, since this page is held exclusively and known not to be on LRU yet.
>  
> -mlock_page() sets PageMlocked immediately, then places the page on the CPU's

		     PG_mlocked?

> -mlock pagevec, to batch up the rest of the work to be done under lru_lock by
> -__mlock_page().  __mlock_page() sets PageUnevictable, initializes mlock_count

					PG_unevictable

ditto below

> +mlock_folio() sets PageMlocked immediately, then places the page on the CPU's
> +mlock folio batch, to batch up the rest of the work to be done under lru_lock by
> +__mlock_folio().  __mlock_folio() sets PageUnevictable, initializes mlock_count
>  and moves the page to unevictable state ("the unevictable LRU", but with
>  mlock_count in place of LRU threading).  Or if the page was already PageLRU
>  and PageUnevictable and PageMlocked, it simply increments the mlock_count.
>  
>  But in practice that may not work ideally: the page may not yet be on an LRU, or
>  it may have been temporarily isolated from LRU.  In such cases the mlock_count
> -field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
> +field cannot be touched, but will be set to 0 later when __munlock_folio()
>  returns the page to "LRU".  Races prohibit mlock_count from being set to 1 then:
>  rather than risk stranding a page indefinitely as unevictable, always err with
>  mlock_count on the low side, so that when munlocked the page will be rescued to

Patch

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 4a0e158aa9ce..153629e0c100 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
+calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page).  Or when
 it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()
-calls mlock_new_page() instead: similar to mlock_page(), but can make better
+calls mlock_new_folio() instead: similar to mlock_folio(), but can make better
 judgments, since this page is held exclusively and known not to be on LRU yet.
 
-mlock_page() sets PageMlocked immediately, then places the page on the CPU's
-mlock pagevec, to batch up the rest of the work to be done under lru_lock by
-__mlock_page().  __mlock_page() sets PageUnevictable, initializes mlock_count
+mlock_folio() sets PageMlocked immediately, then places the page on the CPU's
+mlock folio batch, to batch up the rest of the work to be done under lru_lock by
+__mlock_folio().  __mlock_folio() sets PageUnevictable, initializes mlock_count
 and moves the page to unevictable state ("the unevictable LRU", but with
 mlock_count in place of LRU threading).  Or if the page was already PageLRU
 and PageUnevictable and PageMlocked, it simply increments the mlock_count.
 
 But in practice that may not work ideally: the page may not yet be on an LRU, or
 it may have been temporarily isolated from LRU.  In such cases the mlock_count
-field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
+field cannot be touched, but will be set to 0 later when __munlock_folio()
 returns the page to "LRU".  Races prohibit mlock_count from being set to 1 then:
 rather than risk stranding a page indefinitely as unevictable, always err with
 mlock_count on the low side, so that when munlocked the page will be rescued to
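
The dispatch described in the first hunk can be sketched in plain C. The
following is a minimal userspace model, not kernel code: fault_path() and
its boolean parameters are hypothetical stand-ins invented for
illustration, and only the names mlock_folio(), mlock_new_folio(),
mlock_vma_page(), folio_add_lru_vma() and VM_LOCKED come from the patch
text and review above.

#include <stdbool.h>
#include <stdio.h>

#define VM_LOCKED (1UL << 0)	/* stand-in bit; the real flag lives in mm.h */

struct folio { int id; };
struct vm_area_struct { unsigned long vm_flags; };

static void mlock_folio(struct folio *folio)
{
	printf("mlock_folio(%d): existing folio faulted into a VM_LOCKED VMA\n",
	       folio->id);
}

static void mlock_new_folio(struct folio *folio)
{
	printf("mlock_new_folio(%d): new anon folio, known not to be on an LRU\n",
	       folio->id);
}

/*
 * fault_path() is a hypothetical stand-in for the two real call sites:
 * the page add rmap function -> mlock_vma_page() -> mlock_folio(), and
 * folio_add_lru_vma() -> mlock_new_folio() for newly allocated
 * anonymous folios.
 */
static void fault_path(struct vm_area_struct *vma, struct folio *folio,
		       bool pte_maps_part_of_thp, bool new_anon_folio)
{
	if (!(vma->vm_flags & VM_LOCKED) || pte_maps_part_of_thp)
		return;				/* nothing to mlock here */
	if (new_anon_folio)
		mlock_new_folio(folio);		/* can make better judgments */
	else
		mlock_folio(folio);
}

int main(void)
{
	struct vm_area_struct vma = { .vm_flags = VM_LOCKED };
	struct folio f1 = { .id = 1 }, f2 = { .id = 2 };

	fault_path(&vma, &f1, false, false);	/* rmap path */
	fault_path(&vma, &f2, false, true);	/* new anon path */
	return 0;
}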
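
The batching in the remaining hunks can be modelled the same way. Again a
hedged sketch under assumptions: the batch size, the drain trigger and the
flag fields on struct folio are illustrative stand-ins for the kernel's
per-CPU folio_batch machinery; only mlock_folio(), __mlock_folio(),
lru_lock, mlock_count and the PG_mlocked/PG_unevictable bits Vlastimil
refers to come from the text above.

#include <stdbool.h>
#include <stdio.h>

#define MLOCK_BATCH 15	/* illustrative batch size, not the kernel's */

struct folio {
	bool mlocked;		/* models PG_mlocked */
	bool unevictable;	/* models PG_unevictable */
	bool on_lru;		/* models PG_lru */
	int  mlock_count;	/* meaningful only while unevictable */
};

static struct folio *mlock_batch[MLOCK_BATCH];
static int batch_len;

/* Models __mlock_folio(): the deferred work done under lru_lock. */
static void __mlock_folio(struct folio *folio)
{
	if (!folio->on_lru) {
		/*
		 * Not yet on an LRU, or temporarily isolated from it:
		 * mlock_count cannot be touched, and starts at 0 when
		 * the folio is later returned to "LRU" -- erring low.
		 */
		return;
	}
	if (folio->unevictable && folio->mlocked) {
		folio->mlock_count++;	/* already unevictable: just count */
		return;
	}
	folio->unevictable = true;	/* move to the unevictable state */
	folio->mlock_count = 1;
}

static void drain_mlock_batch(void)
{
	/* In the kernel this loop runs with lru_lock held. */
	for (int i = 0; i < batch_len; i++)
		__mlock_folio(mlock_batch[i]);
	batch_len = 0;
}

/* Models mlock_folio(): set the flag now, batch the rest of the work. */
static void mlock_folio(struct folio *folio)
{
	folio->mlocked = true;		/* PG_mlocked is set immediately */
	mlock_batch[batch_len++] = folio;
	if (batch_len == MLOCK_BATCH)
		drain_mlock_batch();
}

int main(void)
{
	struct folio f = { .on_lru = true };

	mlock_folio(&f);		/* first mlock: count becomes 1 */
	mlock_folio(&f);		/* second mlock: count becomes 2 */
	drain_mlock_batch();
	printf("mlocked=%d unevictable=%d mlock_count=%d\n",
	       f.mlocked, f.unevictable, f.mlock_count);
	return 0;
}

The model mirrors the design choice the text describes: a folio that is
off the LRU at drain time keeps an untouched (eventually 0) mlock_count,
so at worst it is rescued back to an evictable LRU when munlocked rather
than stranded indefinitely as unevictable with too high a count.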