
[v2,1/8] mm/swap: remember PG_anon_exclusive via a swp pte bit

Message ID 20220329164329.208407-2-david@redhat.com (mailing list archive)
State New
Series mm: COW fixes part 3: reliable GUP R/W FOLL_GET of anonymous pages

Commit Message

David Hildenbrand March 29, 2022, 4:43 p.m. UTC
Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
it. We do this to keep fork() logic on swap entries easy and efficient:
for example, if we didn't clear it when unmapping, we'd have to look up
the page in the swapcache for each and every swap entry during fork() and
clear PG_anon_exclusive if set.

Instead, we want to store that information directly in the swap pte,
protected by the page table lock, similarly to how we handle
SWP_MIGRATION_READ_EXCLUSIVE for migration entries. However, for actual
swap entries, we don't want to mess with the swap type (e.g., still one
bit) because it overcomplicates swap code.

In try_to_unmap(), we already refuse to unmap if the page might be
pinned, because we must never lose PG_anon_exclusive on pinned pages.
Reliably checking for other unexpected references *before* completely
unmapping a page is unfortunately not really possible: THPs heavily
overcomplicate the situation. Once fully unmapped it's easier --
we, for example, make sure that there are no unexpected references
*after* unmapping a page before starting writeback on that page.

So, we currently might end up unmapping a page and clearing
PG_anon_exclusive if that page has additional references, for example,
due to a FOLL_GET.

do_swap_page() has to re-determine if a page is exclusive, which will
easily fail if there are other references on a page, most prominently
GUP references via FOLL_GET. This can currently result in memory
corruptions when taking a FOLL_GET | FOLL_WRITE reference on a page even
when fork() is never involved: try_to_unmap() will succeed, and when
refaulting the page, it cannot be marked exclusive and will get replaced
by a copy in the page tables on the next write access, resulting in writes
via the GUP reference to the page being lost.

In an ideal world, everybody that uses GUP and wants to modify page
content, such as O_DIRECT, would properly use FOLL_PIN. However, that
conversion will take a while. It's easier to fix what used to work in the
past (FOLL_GET | FOLL_WRITE) remembering PG_anon_exclusive. In addition,
by remembering PG_anon_exclusive we can further reduce unnecessary COW
in some cases, so it's the natural thing to do.

So let's transfer the PG_anon_exclusive information to the swap pte and
store it via an architecture-dependent pte bit; use that information when
restoring the swap pte in do_swap_page() and unuse_pte(). During fork(), we
simply have to clear the pte bit and are done.

Of course, there is one corner case to handle: swap backends that don't
support concurrent page modifications while the page is under writeback.
Special case these, and drop the exclusive marker. Add a comment why that
is just fine (also, reuse_swap_page() would have done the same in the
past).

In the future, we'll hopefully have all architectures support
__HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty
stubs and the define completely. Then, we can also convert
SWP_MIGRATION_READ_EXCLUSIVE. It's fairly easy for architectures to
support: either simply use an as-yet-unused pte bit that can be used for
swap entries, steal one from the arch type bits if they exceed 5, or steal
one from the offset bits.
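
For illustration, an architecture-side implementation could look roughly
like the following sketch. This is hypothetical: it assumes an x86-style
spare software bit (called _PAGE_SWP_EXCLUSIVE here) and the pte_flags()/
pte_set_flags()/pte_clear_flags() helpers; other architectures will pick a
different bit and use their own helpers.

#define __HAVE_ARCH_PTE_SWP_EXCLUSIVE

static inline pte_t pte_swp_mkexclusive(pte_t pte)
{
	/* Remember "anon exclusive" in a spare software bit of the swap pte. */
	return pte_set_flags(pte, _PAGE_SWP_EXCLUSIVE);
}

static inline int pte_swp_exclusive(pte_t pte)
{
	return pte_flags(pte) & _PAGE_SWP_EXCLUSIVE;
}

static inline pte_t pte_swp_clear_exclusive(pte_t pte)
{
	/* Used by fork() to mark the swap entry as possibly shared. */
	return pte_clear_flags(pte, _PAGE_SWP_EXCLUSIVE);
}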

Note: R/O FOLL_GET references were never really reliable, especially
when taking one on a shared page and then writing to the page (e.g., GUP
after fork()). FOLL_GET, including R/W references, were never really
reliable once fork was involved (e.g., GUP before fork(),
GUP during fork()). KSM steps back in case it stumbles over unexpected
references and is, therefore, fine.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/pgtable.h | 29 ++++++++++++++++++++++
 include/linux/swapops.h |  2 ++
 mm/memory.c             | 55 ++++++++++++++++++++++++++++++++++++++---
 mm/rmap.c               | 19 ++++++++------
 mm/swapfile.c           | 13 +++++++++-
 5 files changed, 105 insertions(+), 13 deletions(-)

Comments

Miaohe Lin April 13, 2022, 8:58 a.m. UTC | #1
On 2022/3/30 0:43, David Hildenbrand wrote:
> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
...
> diff --git a/mm/memory.c b/mm/memory.c
> index 14618f446139..9060cc7f2123 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  						&src_mm->mmlist);
>  			spin_unlock(&mmlist_lock);
>  		}
> +		/* Mark the swap entry as shared. */
> +		if (pte_swp_exclusive(*src_pte)) {
> +			pte = pte_swp_clear_exclusive(*src_pte);
> +			set_pte_at(src_mm, addr, src_pte, pte);
> +		}
>  		rss[MM_SWAPENTS]++;
>  	} else if (is_migration_entry(entry)) {
>  		page = pfn_swap_entry_to_page(entry);
> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	struct page *page = NULL, *swapcache;
>  	struct swap_info_struct *si = NULL;
>  	rmap_t rmap_flags = RMAP_NONE;
> +	bool exclusive = false;
>  	swp_entry_t entry;
>  	pte_t pte;
>  	int locked;
> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>  
> +	/*
> +	 * Check under PT lock (to protect against concurrent fork() sharing
> +	 * the swap entry concurrently) for certainly exclusive pages.
> +	 */
> +	if (!PageKsm(page)) {
> +		/*
> +		 * Note that pte_swp_exclusive() == false for architectures
> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
> +		 */
> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
> +		if (page != swapcache) {
> +			/*
> +			 * We have a fresh page that is not exposed to the
> +			 * swapcache -> certainly exclusive.
> +			 */
> +			exclusive = true;
> +		} else if (exclusive && PageWriteback(page) &&
> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {

Really sorry for the late response, and a newbie question. IIUC, if SWP_STABLE_WRITES is set,
it means concurrent page modifications while the page is under writeback are not supported.
For these problematic swap backends, the exclusive marker is dropped. So the above if statement
is meant to filter out these problematic swap backends which have SWP_STABLE_WRITES set. If so,
the above check should be && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES), i.e. no "!".
Or am I missing something?

It's very kind of you if you can answer this question. Many thanks!

> +			/*
> +			 * This is tricky: not all swap backends support
> +			 * concurrent page modifications while under writeback.
> +			 *
> +			 * So if we stumble over such a page in the swapcache
> +			 * we must not set the page exclusive, otherwise we can
> +			 * map it writable without further checks and modify it
> +			 * while still under writeback.
> +			 *
> +			 * For these problematic swap backends, simply drop the
> +			 * exclusive marker: this is perfectly fine as we start
> +			 * writeback only if we fully unmapped the page and
> +			 * there are no unexpected references on the page after
> +			 * unmapping succeeded. After fully unmapped, no
> +			 * further GUP references (FOLL_GET and FOLL_PIN) can
> +			 * appear, so dropping the exclusive marker and mapping
> +			 * it only R/O is fine.
> +			 */
> +			exclusive = false;
> +		}
> +	}
> +
>  	/*
>  	 * Remove the swap entry and conditionally try to free up the swapcache.
>  	 * We're already holding a reference on the page but haven't mapped it
> @@ -3738,11 +3784,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	pte = mk_pte(page, vma->vm_page_prot);
>  
>  	/*
> -	 * Same logic as in do_wp_page(); however, optimize for fresh pages
> -	 * that are certainly not shared because we just allocated them without
> -	 * exposing them to the swapcache.
> +	 * Same logic as in do_wp_page(); however, optimize for pages that are
> +	 * certainly not shared either because we just allocated them without
> +	 * exposing them to the swapcache or because the swap entry indicates
> +	 * exclusivity.
>  	 */
> -	if (!PageKsm(page) && (page != swapcache || page_count(page) == 1)) {
> +	if (!PageKsm(page) && (exclusive || page_count(page) == 1)) {
>  		if (vmf->flags & FAULT_FLAG_WRITE) {
>  			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>  			vmf->flags &= ~FAULT_FLAG_WRITE;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 4de07234cbcf..c8c257d94962 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1656,14 +1656,15 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  				break;
>  			}
>  			/*
> -			 * Note: We *don't* remember yet if the page was mapped
> -			 * exclusively in the swap entry, so swapin code has
> -			 * to re-determine that manually and might detect the
> -			 * page as possibly shared, for example, if there are
> -			 * other references on the page or if the page is under
> -			 * writeback. We made sure that there are no GUP pins
> -			 * on the page that would rely on it, so for GUP pins
> -			 * this is fine.
> +			 * Note: We *don't* remember if the page was mapped
> +			 * exclusively in the swap pte if the architecture
> +			 * doesn't support __HAVE_ARCH_PTE_SWP_EXCLUSIVE. In
> +			 * that case, swapin code has to re-determine that
> +			 * manually and might detect the page as possibly
> +			 * shared, for example, if there are other references on
> +			 * the page or if the page is under writeback. We made
> +			 * sure that there are no GUP pins on the page that
> +			 * would rely on it, so for GUP pins this is fine.
>  			 */
>  			if (list_empty(&mm->mmlist)) {
>  				spin_lock(&mmlist_lock);
> @@ -1674,6 +1675,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			dec_mm_counter(mm, MM_ANONPAGES);
>  			inc_mm_counter(mm, MM_SWAPENTS);
>  			swp_pte = swp_entry_to_pte(entry);
> +			if (anon_exclusive)
> +				swp_pte = pte_swp_mkexclusive(swp_pte);
>  			if (pte_soft_dirty(pteval))
>  				swp_pte = pte_swp_mksoft_dirty(swp_pte);
>  			if (pte_uffd_wp(pteval))
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index a7847324d476..7279b2d2d71d 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1804,7 +1804,18 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>  	get_page(page);
>  	if (page == swapcache) {
> -		page_add_anon_rmap(page, vma, addr, RMAP_NONE);
> +		rmap_t rmap_flags = RMAP_NONE;
> +
> +		/*
> +		 * See do_swap_page(): PageWriteback() would be problematic.
> +		 * However, we do a wait_on_page_writeback() just before this
> +		 * call and have the page locked.
> +		 */
> +		VM_BUG_ON_PAGE(PageWriteback(page), page);
> +		if (pte_swp_exclusive(*pte))
> +			rmap_flags |= RMAP_EXCLUSIVE;
> +
> +		page_add_anon_rmap(page, vma, addr, rmap_flags);
>  	} else { /* ksm created a completely new copy */
>  		page_add_new_anon_rmap(page, vma, addr);
>  		lru_cache_add_inactive_or_unevictable(page, vma);
>
David Hildenbrand April 13, 2022, 9:30 a.m. UTC | #2
On 13.04.22 10:58, Miaohe Lin wrote:
> On 2022/3/30 0:43, David Hildenbrand wrote:
>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
> ...
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 14618f446139..9060cc7f2123 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>  						&src_mm->mmlist);
>>  			spin_unlock(&mmlist_lock);
>>  		}
>> +		/* Mark the swap entry as shared. */
>> +		if (pte_swp_exclusive(*src_pte)) {
>> +			pte = pte_swp_clear_exclusive(*src_pte);
>> +			set_pte_at(src_mm, addr, src_pte, pte);
>> +		}
>>  		rss[MM_SWAPENTS]++;
>>  	} else if (is_migration_entry(entry)) {
>>  		page = pfn_swap_entry_to_page(entry);
>> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>  	struct page *page = NULL, *swapcache;
>>  	struct swap_info_struct *si = NULL;
>>  	rmap_t rmap_flags = RMAP_NONE;
>> +	bool exclusive = false;
>>  	swp_entry_t entry;
>>  	pte_t pte;
>>  	int locked;
>> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>  
>> +	/*
>> +	 * Check under PT lock (to protect against concurrent fork() sharing
>> +	 * the swap entry concurrently) for certainly exclusive pages.
>> +	 */
>> +	if (!PageKsm(page)) {
>> +		/*
>> +		 * Note that pte_swp_exclusive() == false for architectures
>> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>> +		 */
>> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
>> +		if (page != swapcache) {
>> +			/*
>> +			 * We have a fresh page that is not exposed to the
>> +			 * swapcache -> certainly exclusive.
>> +			 */
>> +			exclusive = true;
>> +		} else if (exclusive && PageWriteback(page) &&
>> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
> 
> Really sorry for late respond and a newbie question. IIUC, if SWP_STABLE_WRITES is set,
> it means concurrent page modifications while under writeback is not supported. For these
> problematic swap backends, exclusive marker is dropped. So the above if statement is to
> filter out these problematic swap backends which have SWP_STABLE_WRITES set. If so, the
> above check should be && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)), i.e. no "!".
> Or am I miss something?

Oh, thanks for your careful eyes!

Indeed, SWP_STABLE_WRITES indicates that the backend *requires* stable
writes, meaning, we must not modify the page while writeback is active.

So if and only if that is set, we must drop the exclusive marker.

This essentially corresponds to previous reuse_swap_page() logic:

bool reuse_swap_page(struct page *page)
{
...
	if (!PageWriteback(page)) {
		...
	} else {
		...
		if (p->flags & SWP_STABLE_WRITES) {
			spin_unlock(&p->lock);
			return false;
		}
...
}

Fortunately, this only affects such backends. For backends without
SWP_STABLE_WRITES, the current code is simply sub-optimal.


So yes, this has to be

} else if (exclusive && PageWriteback(page) &&
	   (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {


Let me try finding a way to test this; the tests I was running so far
were apparently not using a backend with SWP_STABLE_WRITES.
Miaohe Lin April 13, 2022, 9:38 a.m. UTC | #3
On 2022/4/13 17:30, David Hildenbrand wrote:
> On 13.04.22 10:58, Miaohe Lin wrote:
>> On 2022/3/30 0:43, David Hildenbrand wrote:
>>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>> ...
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 14618f446139..9060cc7f2123 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>>  						&src_mm->mmlist);
>>>  			spin_unlock(&mmlist_lock);
>>>  		}
>>> +		/* Mark the swap entry as shared. */
>>> +		if (pte_swp_exclusive(*src_pte)) {
>>> +			pte = pte_swp_clear_exclusive(*src_pte);
>>> +			set_pte_at(src_mm, addr, src_pte, pte);
>>> +		}
>>>  		rss[MM_SWAPENTS]++;
>>>  	} else if (is_migration_entry(entry)) {
>>>  		page = pfn_swap_entry_to_page(entry);
>>> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	struct page *page = NULL, *swapcache;
>>>  	struct swap_info_struct *si = NULL;
>>>  	rmap_t rmap_flags = RMAP_NONE;
>>> +	bool exclusive = false;
>>>  	swp_entry_t entry;
>>>  	pte_t pte;
>>>  	int locked;
>>> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>>  
>>> +	/*
>>> +	 * Check under PT lock (to protect against concurrent fork() sharing
>>> +	 * the swap entry concurrently) for certainly exclusive pages.
>>> +	 */
>>> +	if (!PageKsm(page)) {
>>> +		/*
>>> +		 * Note that pte_swp_exclusive() == false for architectures
>>> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>>> +		 */
>>> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
>>> +		if (page != swapcache) {
>>> +			/*
>>> +			 * We have a fresh page that is not exposed to the
>>> +			 * swapcache -> certainly exclusive.
>>> +			 */
>>> +			exclusive = true;
>>> +		} else if (exclusive && PageWriteback(page) &&
>>> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>
>> Really sorry for late respond and a newbie question. IIUC, if SWP_STABLE_WRITES is set,
>> it means concurrent page modifications while under writeback is not supported. For these
>> problematic swap backends, exclusive marker is dropped. So the above if statement is to
>> filter out these problematic swap backends which have SWP_STABLE_WRITES set. If so, the
>> above check should be && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)), i.e. no "!".
>> Or am I miss something?
> 
> Oh, thanks for your careful eyes!
> 
> Indeed, SWP_STABLE_WRITES indicates that the backend *requires* stable
> writes, meaning, we must not modify the page while writeback is active.
> 
> So if and only if that is set, we must drop the exclusive marker.
> 
> This essentially corresponds to previous reuse_swap_page() logic:
> 
> bool reuse_swap_page(struct page *page)
> {
> ...
> 	if (!PageWriteback(page)) {
> 		...
> 	} else {
> 		...
> 		if (p->flags & SWP_STABLE_WRITES) {
> 			spin_unlock(&p->lock);
> 			return false;
> 		}
> ...
> }
> 
> Fortunately, this only affects such backends. For backends without
> SWP_STABLE_WRITES, the current code is simply sub-optimal.
> 
> 
> So yes, this has to be
> 
> } else if (exclusive && PageWriteback(page) &&
> 	   (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
> 

I am glad that my question helps. :)

> 
> Let me try finding a way to test this, the tests I was running so far
> were apparently not using a backend with SWP_STABLE_WRITES.
> 

That will be really helpful. Many thanks for your hard work!
David Hildenbrand April 13, 2022, 10:46 a.m. UTC | #4
On 13.04.22 11:38, Miaohe Lin wrote:
> On 2022/4/13 17:30, David Hildenbrand wrote:
>> On 13.04.22 10:58, Miaohe Lin wrote:
>>> On 2022/3/30 0:43, David Hildenbrand wrote:
>>>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>>> ...
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 14618f446139..9060cc7f2123 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>>>  						&src_mm->mmlist);
>>>>  			spin_unlock(&mmlist_lock);
>>>>  		}
>>>> +		/* Mark the swap entry as shared. */
>>>> +		if (pte_swp_exclusive(*src_pte)) {
>>>> +			pte = pte_swp_clear_exclusive(*src_pte);
>>>> +			set_pte_at(src_mm, addr, src_pte, pte);
>>>> +		}
>>>>  		rss[MM_SWAPENTS]++;
>>>>  	} else if (is_migration_entry(entry)) {
>>>>  		page = pfn_swap_entry_to_page(entry);
>>>> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  	struct page *page = NULL, *swapcache;
>>>>  	struct swap_info_struct *si = NULL;
>>>>  	rmap_t rmap_flags = RMAP_NONE;
>>>> +	bool exclusive = false;
>>>>  	swp_entry_t entry;
>>>>  	pte_t pte;
>>>>  	int locked;
>>>> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>>>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>>>  
>>>> +	/*
>>>> +	 * Check under PT lock (to protect against concurrent fork() sharing
>>>> +	 * the swap entry concurrently) for certainly exclusive pages.
>>>> +	 */
>>>> +	if (!PageKsm(page)) {
>>>> +		/*
>>>> +		 * Note that pte_swp_exclusive() == false for architectures
>>>> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>>>> +		 */
>>>> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
>>>> +		if (page != swapcache) {
>>>> +			/*
>>>> +			 * We have a fresh page that is not exposed to the
>>>> +			 * swapcache -> certainly exclusive.
>>>> +			 */
>>>> +			exclusive = true;
>>>> +		} else if (exclusive && PageWriteback(page) &&
>>>> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>>
>>> Really sorry for late respond and a newbie question. IIUC, if SWP_STABLE_WRITES is set,
>>> it means concurrent page modifications while under writeback is not supported. For these
>>> problematic swap backends, exclusive marker is dropped. So the above if statement is to
>>> filter out these problematic swap backends which have SWP_STABLE_WRITES set. If so, the
>>> above check should be && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)), i.e. no "!".
>>> Or am I miss something?
>>
>> Oh, thanks for your careful eyes!
>>
>> Indeed, SWP_STABLE_WRITES indicates that the backend *requires* stable
>> writes, meaning, we must not modify the page while writeback is active.
>>
>> So if and only if that is set, we must drop the exclusive marker.
>>
>> This essentially corresponds to previous reuse_swap_page() logic:
>>
>> bool reuse_swap_page(struct page *page)
>> {
>> ...
>> 	if (!PageWriteback(page)) {
>> 		...
>> 	} else {
>> 		...
>> 		if (p->flags & SWP_STABLE_WRITES) {
>> 			spin_unlock(&p->lock);
>> 			return false;
>> 		}
>> ...
>> }
>>
>> Fortunately, this only affects such backends. For backends without
>> SWP_STABLE_WRITES, the current code is simply sub-optimal.
>>
>>
>> So yes, this has to be
>>
>> } else if (exclusive && PageWriteback(page) &&
>> 	   (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>
> 
> I am glad that my question helps. :)
> 

This is the kind of review I was hoping for :)


@Andrew, the following change is necessary:

diff --git a/mm/memory.c b/mm/memory.c
index 3ad39bd66203..8b3cb73f5e44 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3747,7 +3747,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                         */
                        exclusive = true;
                } else if (exclusive && PageWriteback(page) &&
-                          !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
+                          (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
                        /*
                         * This is tricky: not all swap backends support
                         * concurrent page modifications while under writeback.


Do you:

a) Want to squash it
b) Want me to resend a new version of this patch only
c) Want me to resend a new version of the patch set

In the meantime, I'll try testing with a suitable backend. IIRC, zram should do the trick.
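
For context, a rough sketch (from memory of the 5.18-era sources, so treat
the exact lines as an assumption rather than a quote) of why zram should
qualify: zram marks its request queue as requiring stable writes, and the
swapon path translates that into SWP_STABLE_WRITES:

	/* drivers/block/zram/zram_drv.c, zram_add(), roughly: */
	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);

	/* mm/swapfile.c, swapon path, roughly: */
	if (p->bdev && blk_queue_stable_writes(p->bdev->bd_disk->queue))
		p->flags |= SWP_STABLE_WRITES;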
David Hildenbrand April 13, 2022, 12:31 p.m. UTC | #5
On 13.04.22 11:38, Miaohe Lin wrote:
> On 2022/4/13 17:30, David Hildenbrand wrote:
>> On 13.04.22 10:58, Miaohe Lin wrote:
>>> On 2022/3/30 0:43, David Hildenbrand wrote:
>>>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>>> ...
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 14618f446139..9060cc7f2123 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>>>  						&src_mm->mmlist);
>>>>  			spin_unlock(&mmlist_lock);
>>>>  		}
>>>> +		/* Mark the swap entry as shared. */
>>>> +		if (pte_swp_exclusive(*src_pte)) {
>>>> +			pte = pte_swp_clear_exclusive(*src_pte);
>>>> +			set_pte_at(src_mm, addr, src_pte, pte);
>>>> +		}
>>>>  		rss[MM_SWAPENTS]++;
>>>>  	} else if (is_migration_entry(entry)) {
>>>>  		page = pfn_swap_entry_to_page(entry);
>>>> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  	struct page *page = NULL, *swapcache;
>>>>  	struct swap_info_struct *si = NULL;
>>>>  	rmap_t rmap_flags = RMAP_NONE;
>>>> +	bool exclusive = false;
>>>>  	swp_entry_t entry;
>>>>  	pte_t pte;
>>>>  	int locked;
>>>> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>>>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>>>  
>>>> +	/*
>>>> +	 * Check under PT lock (to protect against concurrent fork() sharing
>>>> +	 * the swap entry concurrently) for certainly exclusive pages.
>>>> +	 */
>>>> +	if (!PageKsm(page)) {
>>>> +		/*
>>>> +		 * Note that pte_swp_exclusive() == false for architectures
>>>> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>>>> +		 */
>>>> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
>>>> +		if (page != swapcache) {
>>>> +			/*
>>>> +			 * We have a fresh page that is not exposed to the
>>>> +			 * swapcache -> certainly exclusive.
>>>> +			 */
>>>> +			exclusive = true;
>>>> +		} else if (exclusive && PageWriteback(page) &&
>>>> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>>
>>> Really sorry for late respond and a newbie question. IIUC, if SWP_STABLE_WRITES is set,
>>> it means concurrent page modifications while under writeback is not supported. For these
>>> problematic swap backends, exclusive marker is dropped. So the above if statement is to
>>> filter out these problematic swap backends which have SWP_STABLE_WRITES set. If so, the
>>> above check should be && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)), i.e. no "!".
>>> Or am I miss something?
>>
>> Oh, thanks for your careful eyes!
>>
>> Indeed, SWP_STABLE_WRITES indicates that the backend *requires* stable
>> writes, meaning, we must not modify the page while writeback is active.
>>
>> So if and only if that is set, we must drop the exclusive marker.
>>
>> This essentially corresponds to previous reuse_swap_page() logic:
>>
>> bool reuse_swap_page(struct page *page)
>> {
>> ...
>> 	if (!PageWriteback(page)) {
>> 		...
>> 	} else {
>> 		...
>> 		if (p->flags & SWP_STABLE_WRITES) {
>> 			spin_unlock(&p->lock);
>> 			return false;
>> 		}
>> ...
>> }
>>
>> Fortunately, this only affects such backends. For backends without
>> SWP_STABLE_WRITES, the current code is simply sub-optimal.
>>
>>
>> So yes, this has to be
>>
>> } else if (exclusive && PageWriteback(page) &&
>> 	   (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>
> 
> I am glad that my question helps. :)
> 
>>
>> Let me try finding a way to test this, the tests I was running so far
>> were apparently not using a backend with SWP_STABLE_WRITES.
>>
> 
> That will be really helpful. Many thanks for your hard work!
> 

FWIW, I tried with zram, which sets SWP_STABLE_WRITES ... but it seems
to always do synchronous writeback, so it cannot really trigger this
code path.

commit f05714293a591038304ddae7cb0dd747bb3786cc
Author: Minchan Kim <minchan@kernel.org>
Date:   Tue Jan 10 16:58:15 2017 -0800

    mm: support anonymous stable page


mentions "During developemnt for zram-swap asynchronous writeback,";
maybe that can be activated somehow? Putting Minchan on CC.
Miaohe Lin April 14, 2022, 2:40 a.m. UTC | #6
On 2022/4/13 20:31, David Hildenbrand wrote:
> On 13.04.22 11:38, Miaohe Lin wrote:
>> On 2022/4/13 17:30, David Hildenbrand wrote:
>>> On 13.04.22 10:58, Miaohe Lin wrote:
>>>> On 2022/3/30 0:43, David Hildenbrand wrote:
>>>>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>>>> ...
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index 14618f446139..9060cc7f2123 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>>>>  						&src_mm->mmlist);
>>>>>  			spin_unlock(&mmlist_lock);
>>>>>  		}
>>>>> +		/* Mark the swap entry as shared. */
>>>>> +		if (pte_swp_exclusive(*src_pte)) {
>>>>> +			pte = pte_swp_clear_exclusive(*src_pte);
>>>>> +			set_pte_at(src_mm, addr, src_pte, pte);
>>>>> +		}
>>>>>  		rss[MM_SWAPENTS]++;
>>>>>  	} else if (is_migration_entry(entry)) {
>>>>>  		page = pfn_swap_entry_to_page(entry);
>>>>> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>  	struct page *page = NULL, *swapcache;
>>>>>  	struct swap_info_struct *si = NULL;
>>>>>  	rmap_t rmap_flags = RMAP_NONE;
>>>>> +	bool exclusive = false;
>>>>>  	swp_entry_t entry;
>>>>>  	pte_t pte;
>>>>>  	int locked;
>>>>> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>>>>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>>>>  
>>>>> +	/*
>>>>> +	 * Check under PT lock (to protect against concurrent fork() sharing
>>>>> +	 * the swap entry concurrently) for certainly exclusive pages.
>>>>> +	 */
>>>>> +	if (!PageKsm(page)) {
>>>>> +		/*
>>>>> +		 * Note that pte_swp_exclusive() == false for architectures
>>>>> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>>>>> +		 */
>>>>> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
>>>>> +		if (page != swapcache) {
>>>>> +			/*
>>>>> +			 * We have a fresh page that is not exposed to the
>>>>> +			 * swapcache -> certainly exclusive.
>>>>> +			 */
>>>>> +			exclusive = true;
>>>>> +		} else if (exclusive && PageWriteback(page) &&
>>>>> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>>>
>>>> Really sorry for late respond and a newbie question. IIUC, if SWP_STABLE_WRITES is set,
>>>> it means concurrent page modifications while under writeback is not supported. For these
>>>> problematic swap backends, exclusive marker is dropped. So the above if statement is to
>>>> filter out these problematic swap backends which have SWP_STABLE_WRITES set. If so, the
>>>> above check should be && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)), i.e. no "!".
>>>> Or am I miss something?
>>>
>>> Oh, thanks for your careful eyes!
>>>
>>> Indeed, SWP_STABLE_WRITES indicates that the backend *requires* stable
>>> writes, meaning, we must not modify the page while writeback is active.
>>>
>>> So if and only if that is set, we must drop the exclusive marker.
>>>
>>> This essentially corresponds to previous reuse_swap_page() logic:
>>>
>>> bool reuse_swap_page(struct page *page)
>>> {
>>> ...
>>> 	if (!PageWriteback(page)) {
>>> 		...
>>> 	} else {
>>> 		...
>>> 		if (p->flags & SWP_STABLE_WRITES) {
>>> 			spin_unlock(&p->lock);
>>> 			return false;
>>> 		}
>>> ...
>>> }
>>>
>>> Fortunately, this only affects such backends. For backends without
>>> SWP_STABLE_WRITES, the current code is simply sub-optimal.
>>>
>>>
>>> So yes, this has to be
>>>
>>> } else if (exclusive && PageWriteback(page) &&
>>> 	   (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>>
>>
>> I am glad that my question helps. :)
>>
>>>
>>> Let me try finding a way to test this, the tests I was running so far
>>> were apparently not using a backend with SWP_STABLE_WRITES.
>>>
>>
>> That will be really helpful. Many thanks for your hard work!
>>
> 
> FWIW, I tried with zram, which sets SWP_STABLE_WRITES ... but, it seems
> to always do a synchronous writeback, so it cannot really trigger this
> code path.

That's a pity. We really need asynchronous writeback to trigger this code path.

> 
> commit f05714293a591038304ddae7cb0dd747bb3786cc
> Author: Minchan Kim <minchan@kernel.org>
> Date:   Tue Jan 10 16:58:15 2017 -0800
> 
>     mm: support anonymous stable page
> 
> 
> mentions "During developemnt for zram-swap asynchronous writeback,";
> maybe that can be activated somehow? Putting Minchan on CC.
> 

ZRAM_WRITEBACK might need to be configured to enable asynchronous IO:

+
+config ZRAM_WRITEBACK
+       bool "Write back incompressible page to backing device"
+       depends on ZRAM
+       default n
+       help
+        With incompressible page, there is no memory saving to keep it
+        in memory. Instead, write it out to backing device.
+        For this feature, admin should set up backing device via
+        /sys/block/zramX/backing_dev.
+
+        See zram.txt for more infomration.

It seems there is only asynchronous IO for swapin ops. I browsed the source code
and could only find read_from_bdev_async. But I'm not familiar with the zram code.
Minchan might kindly help us solve this question.

Thanks!
Vlastimil Babka April 20, 2022, 5:10 p.m. UTC | #7
On 3/29/22 18:43, David Hildenbrand wrote:
> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
> it. We do this, to keep fork() logic on swap entries easy and efficient:
> for example, if we wouldn't clear it when unmapping, we'd have to lookup
> the page in the swapcache for each and every swap entry during fork() and
> clear PG_anon_exclusive if set.
> 
> Instead, we want to store that information directly in the swap pte,
> protected by the page table lock, similarly to how we handle
> SWP_MIGRATION_READ_EXCLUSIVE for migration entries. However, for actual
> swap entries, we don't want to mess with the swap type (e.g., still one
> bit) because it overcomplicates swap code.
> 
> In try_to_unmap(), we already reject to unmap in case the page might be
> pinned, because we must not lose PG_anon_exclusive on pinned pages ever.
> Checking if there are other unexpected references reliably *before*
> completely unmapping a page is unfortunately not really possible: THP
> heavily overcomplicate the situation. Once fully unmapped it's easier --
> we, for example, make sure that there are no unexpected references
> *after* unmapping a page before starting writeback on that page.
> 
> So, we currently might end up unmapping a page and clearing
> PG_anon_exclusive if that page has additional references, for example,
> due to a FOLL_GET.
> 
> do_swap_page() has to re-determine if a page is exclusive, which will
> easily fail if there are other references on a page, most prominently
> GUP references via FOLL_GET. This can currently result in memory
> corruptions when taking a FOLL_GET | FOLL_WRITE reference on a page even
> when fork() is never involved: try_to_unmap() will succeed, and when
> refaulting the page, it cannot be marked exclusive and will get replaced
> by a copy in the page tables on the next write access, resulting in writes
> via the GUP reference to the page being lost.
> 
> In an ideal world, everybody that uses GUP and wants to modify page
> content, such as O_DIRECT, would properly use FOLL_PIN. However, that
> conversion will take a while. It's easier to fix what used to work in the
> past (FOLL_GET | FOLL_WRITE) remembering PG_anon_exclusive. In addition,
> by remembering PG_anon_exclusive we can further reduce unnecessary COW
> in some cases, so it's the natural thing to do.
> 
> So let's transfer the PG_anon_exclusive information to the swap pte and
> store it via an architecture-dependant pte bit; use that information when
> restoring the swap pte in do_swap_page() and unuse_pte(). During fork(), we
> simply have to clear the pte bit and are done.
> 
> Of course, there is one corner case to handle: swap backends that don't
> support concurrent page modifications while the page is under writeback.
> Special case these, and drop the exclusive marker. Add a comment why that
> is just fine (also, reuse_swap_page() would have done the same in the
> past).
> 
> In the future, we'll hopefully have all architectures support
> __HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty
> stubs and the define completely. Then, we can also convert
> SWP_MIGRATION_READ_EXCLUSIVE. For architectures it's fairly easy to
> support: either simply use a yet unused pte bit that can be used for swap
> entries, steal one from the arch type bits if they exceed 5, or steal one
> from the offset bits.
> 
> Note: R/O FOLL_GET references were never really reliable, especially
> when taking one on a shared page and then writing to the page (e.g., GUP
> after fork()). FOLL_GET, including R/W references, were never really
> reliable once fork was involved (e.g., GUP before fork(),
> GUP during fork()). KSM steps back in case it stumbles over unexpected
> references and is, therefore, fine.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

With the fixup as reported by Miaohe Lin

Acked-by: Vlastimil Babka <vbabka@suse.cz>

(sent a separate mm-commits mail to inquire about the fix going missing from
mmotm)

https://lore.kernel.org/mm-commits/c3195d8a-2931-0749-973a-1d04e4baec94@suse.cz/T/#m4e98ccae6f747e11f45e4d0726427ba2fef740eb
David Hildenbrand April 20, 2022, 5:13 p.m. UTC | #8
On 20.04.22 19:10, Vlastimil Babka wrote:
> On 3/29/22 18:43, David Hildenbrand wrote:
>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>> it. We do this, to keep fork() logic on swap entries easy and efficient:
>> for example, if we wouldn't clear it when unmapping, we'd have to lookup
>> the page in the swapcache for each and every swap entry during fork() and
>> clear PG_anon_exclusive if set.
>>
>> Instead, we want to store that information directly in the swap pte,
>> protected by the page table lock, similarly to how we handle
>> SWP_MIGRATION_READ_EXCLUSIVE for migration entries. However, for actual
>> swap entries, we don't want to mess with the swap type (e.g., still one
>> bit) because it overcomplicates swap code.
>>
>> In try_to_unmap(), we already reject to unmap in case the page might be
>> pinned, because we must not lose PG_anon_exclusive on pinned pages ever.
>> Checking if there are other unexpected references reliably *before*
>> completely unmapping a page is unfortunately not really possible: THP
>> heavily overcomplicate the situation. Once fully unmapped it's easier --
>> we, for example, make sure that there are no unexpected references
>> *after* unmapping a page before starting writeback on that page.
>>
>> So, we currently might end up unmapping a page and clearing
>> PG_anon_exclusive if that page has additional references, for example,
>> due to a FOLL_GET.
>>
>> do_swap_page() has to re-determine if a page is exclusive, which will
>> easily fail if there are other references on a page, most prominently
>> GUP references via FOLL_GET. This can currently result in memory
>> corruptions when taking a FOLL_GET | FOLL_WRITE reference on a page even
>> when fork() is never involved: try_to_unmap() will succeed, and when
>> refaulting the page, it cannot be marked exclusive and will get replaced
>> by a copy in the page tables on the next write access, resulting in writes
>> via the GUP reference to the page being lost.
>>
>> In an ideal world, everybody that uses GUP and wants to modify page
>> content, such as O_DIRECT, would properly use FOLL_PIN. However, that
>> conversion will take a while. It's easier to fix what used to work in the
>> past (FOLL_GET | FOLL_WRITE) remembering PG_anon_exclusive. In addition,
>> by remembering PG_anon_exclusive we can further reduce unnecessary COW
>> in some cases, so it's the natural thing to do.
>>
>> So let's transfer the PG_anon_exclusive information to the swap pte and
>> store it via an architecture-dependant pte bit; use that information when
>> restoring the swap pte in do_swap_page() and unuse_pte(). During fork(), we
>> simply have to clear the pte bit and are done.
>>
>> Of course, there is one corner case to handle: swap backends that don't
>> support concurrent page modifications while the page is under writeback.
>> Special case these, and drop the exclusive marker. Add a comment why that
>> is just fine (also, reuse_swap_page() would have done the same in the
>> past).
>>
>> In the future, we'll hopefully have all architectures support
>> __HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty
>> stubs and the define completely. Then, we can also convert
>> SWP_MIGRATION_READ_EXCLUSIVE. For architectures it's fairly easy to
>> support: either simply use a yet unused pte bit that can be used for swap
>> entries, steal one from the arch type bits if they exceed 5, or steal one
>> from the offset bits.
>>
>> Note: R/O FOLL_GET references were never really reliable, especially
>> when taking one on a shared page and then writing to the page (e.g., GUP
>> after fork()). FOLL_GET, including R/W references, were never really
>> reliable once fork was involved (e.g., GUP before fork(),
>> GUP during fork()). KSM steps back in case it stumbles over unexpected
>> references and is, therefore, fine.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> With the fixup as reportedy by Miaohe Lin
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> (sent a separate mm-commits mail to inquire about the fix going missing from
> mmotm)
> 
> https://lore.kernel.org/mm-commits/c3195d8a-2931-0749-973a-1d04e4baec94@suse.cz/T/#m4e98ccae6f747e11f45e4d0726427ba2fef740eb

Yes I saw that, thanks for catching that!

Patch

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f4f4077b97aa..53750224e176 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1003,6 +1003,35 @@  static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 #define arch_start_context_switch(prev)	do {} while (0)
 #endif
 
+/*
+ * When replacing an anonymous page by a real (!non) swap entry, we clear
+ * PG_anon_exclusive from the page and instead remember whether the flag was
+ * set in the swp pte. During fork(), we have to mark the entry as !exclusive
+ * (possibly shared). On swapin, we use that information to restore
+ * PG_anon_exclusive, which is very helpful in cases where we might have
+ * additional (e.g., FOLL_GET) references on a page and wouldn't be able to
+ * detect exclusivity.
+ *
+ * These functions don't apply to non-swap entries (e.g., migration, hwpoison,
+ * ...).
+ */
+#ifndef __HAVE_ARCH_PTE_SWP_EXCLUSIVE
+static inline pte_t pte_swp_mkexclusive(pte_t pte)
+{
+	return pte;
+}
+
+static inline int pte_swp_exclusive(pte_t pte)
+{
+	return false;
+}
+
+static inline pte_t pte_swp_clear_exclusive(pte_t pte)
+{
+	return pte;
+}
+#endif
+
 #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
 #ifndef CONFIG_ARCH_ENABLE_THP_MIGRATION
 static inline pmd_t pmd_swp_mksoft_dirty(pmd_t pmd)
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 06280fc1c99b..32d517a28969 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -26,6 +26,8 @@ 
 /* Clear all flags but only keep swp_entry_t related information */
 static inline pte_t pte_swp_clear_flags(pte_t pte)
 {
+	if (pte_swp_exclusive(pte))
+		pte = pte_swp_clear_exclusive(pte);
 	if (pte_swp_soft_dirty(pte))
 		pte = pte_swp_clear_soft_dirty(pte);
 	if (pte_swp_uffd_wp(pte))
diff --git a/mm/memory.c b/mm/memory.c
index 14618f446139..9060cc7f2123 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -792,6 +792,11 @@  copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 						&src_mm->mmlist);
 			spin_unlock(&mmlist_lock);
 		}
+		/* Mark the swap entry as shared. */
+		if (pte_swp_exclusive(*src_pte)) {
+			pte = pte_swp_clear_exclusive(*src_pte);
+			set_pte_at(src_mm, addr, src_pte, pte);
+		}
 		rss[MM_SWAPENTS]++;
 	} else if (is_migration_entry(entry)) {
 		page = pfn_swap_entry_to_page(entry);
@@ -3559,6 +3564,7 @@  vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page = NULL, *swapcache;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
+	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
 	int locked;
@@ -3724,6 +3730,46 @@  vm_fault_t do_swap_page(struct vm_fault *vmf)
 	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
 	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
 
+	/*
+	 * Check under PT lock (to protect against concurrent fork() sharing
+	 * the swap entry concurrently) for certainly exclusive pages.
+	 */
+	if (!PageKsm(page)) {
+		/*
+		 * Note that pte_swp_exclusive() == false for architectures
+		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
+		 */
+		exclusive = pte_swp_exclusive(vmf->orig_pte);
+		if (page != swapcache) {
+			/*
+			 * We have a fresh page that is not exposed to the
+			 * swapcache -> certainly exclusive.
+			 */
+			exclusive = true;
+		} else if (exclusive && PageWriteback(page) &&
+			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
+			/*
+			 * This is tricky: not all swap backends support
+			 * concurrent page modifications while under writeback.
+			 *
+			 * So if we stumble over such a page in the swapcache
+			 * we must not set the page exclusive, otherwise we can
+			 * map it writable without further checks and modify it
+			 * while still under writeback.
+			 *
+			 * For these problematic swap backends, simply drop the
+			 * exclusive marker: this is perfectly fine as we start
+			 * writeback only if we fully unmapped the page and
+			 * there are no unexpected references on the page after
+			 * unmapping succeeded. After fully unmapped, no
+			 * further GUP references (FOLL_GET and FOLL_PIN) can
+			 * appear, so dropping the exclusive marker and mapping
+			 * it only R/O is fine.
+			 */
+			exclusive = false;
+		}
+	}
+
 	/*
 	 * Remove the swap entry and conditionally try to free up the swapcache.
 	 * We're already holding a reference on the page but haven't mapped it
@@ -3738,11 +3784,12 @@  vm_fault_t do_swap_page(struct vm_fault *vmf)
 	pte = mk_pte(page, vma->vm_page_prot);
 
 	/*
-	 * Same logic as in do_wp_page(); however, optimize for fresh pages
-	 * that are certainly not shared because we just allocated them without
-	 * exposing them to the swapcache.
+	 * Same logic as in do_wp_page(); however, optimize for pages that are
+	 * certainly not shared either because we just allocated them without
+	 * exposing them to the swapcache or because the swap entry indicates
+	 * exclusivity.
 	 */
-	if (!PageKsm(page) && (page != swapcache || page_count(page) == 1)) {
+	if (!PageKsm(page) && (exclusive || page_count(page) == 1)) {
 		if (vmf->flags & FAULT_FLAG_WRITE) {
 			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 			vmf->flags &= ~FAULT_FLAG_WRITE;
diff --git a/mm/rmap.c b/mm/rmap.c
index 4de07234cbcf..c8c257d94962 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1656,14 +1656,15 @@  static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				break;
 			}
 			/*
-			 * Note: We *don't* remember yet if the page was mapped
-			 * exclusively in the swap entry, so swapin code has
-			 * to re-determine that manually and might detect the
-			 * page as possibly shared, for example, if there are
-			 * other references on the page or if the page is under
-			 * writeback. We made sure that there are no GUP pins
-			 * on the page that would rely on it, so for GUP pins
-			 * this is fine.
+			 * Note: We *don't* remember if the page was mapped
+			 * exclusively in the swap pte if the architecture
+			 * doesn't support __HAVE_ARCH_PTE_SWP_EXCLUSIVE. In
+			 * that case, swapin code has to re-determine that
+			 * manually and might detect the page as possibly
+			 * shared, for example, if there are other references on
+			 * the page or if the page is under writeback. We made
+			 * sure that there are no GUP pins on the page that
+			 * would rely on it, so for GUP pins this is fine.
 			 */
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
@@ -1674,6 +1675,8 @@  static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			dec_mm_counter(mm, MM_ANONPAGES);
 			inc_mm_counter(mm, MM_SWAPENTS);
 			swp_pte = swp_entry_to_pte(entry);
+			if (anon_exclusive)
+				swp_pte = pte_swp_mkexclusive(swp_pte);
 			if (pte_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
 			if (pte_uffd_wp(pteval))
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a7847324d476..7279b2d2d71d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1804,7 +1804,18 @@  static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
 	get_page(page);
 	if (page == swapcache) {
-		page_add_anon_rmap(page, vma, addr, RMAP_NONE);
+		rmap_t rmap_flags = RMAP_NONE;
+
+		/*
+		 * See do_swap_page(): PageWriteback() would be problematic.
+		 * However, we do a wait_on_page_writeback() just before this
+		 * call and have the page locked.
+		 */
+		VM_BUG_ON_PAGE(PageWriteback(page), page);
+		if (pte_swp_exclusive(*pte))
+			rmap_flags |= RMAP_EXCLUSIVE;
+
+		page_add_anon_rmap(page, vma, addr, rmap_flags);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr);
 		lru_cache_add_inactive_or_unevictable(page, vma);