[2/4] mm: Add arch hooks for saving/restoring tags

Message ID 20200422142530.32619-3-steven.price@arm.com
State New
Series
  • arm64: MTE swap and hibernation support

Commit Message

Steven Price April 22, 2020, 2:25 p.m. UTC
Arm's Memory Tagging Extension (MTE) adds some metadata (tags) to
every physical page. When swapping pages out to disk it is necessary to
save these tags, and later restore them when reading the pages back.

Add some hooks along with dummy implementations to enable the
arch code to handle this.

Three new hooks are added to the swap code:
 * arch_prepare_to_swap() and
 * arch_swap_invalidate_page() / arch_swap_invalidate_area().
One new hook is added to shmem:
 * arch_swap_restore_tags()

Signed-off-by: Steven Price <steven.price@arm.com>
---
 include/asm-generic/pgtable.h | 23 +++++++++++++++++++++++
 mm/page_io.c                  |  6 ++++++
 mm/shmem.c                    |  6 ++++++
 mm/swapfile.c                 |  2 ++
 4 files changed, 37 insertions(+)

Comments

Dave Hansen April 22, 2020, 6:08 p.m. UTC | #1
On 4/22/20 7:25 AM, Steven Price wrote:
> Three new hooks are added to the swap code:
>  * arch_prepare_to_swap() and
>  * arch_swap_invalidate_page() / arch_swap_invalidate_area().
> One new hook is added to shmem:
>  * arch_swap_restore_tags()

How do the tags get restored outside of the shmem path?  I was expecting
to see more arch_swap_restore_tags() sites.
Catalin Marinas April 23, 2020, 9:09 a.m. UTC | #2
On Wed, Apr 22, 2020 at 11:08:10AM -0700, Dave Hansen wrote:
> On 4/22/20 7:25 AM, Steven Price wrote:
> > Three new hooks are added to the swap code:
> >  * arch_prepare_to_swap() and
> >  * arch_swap_invalidate_page() / arch_swap_invalidate_area().
> > One new hook is added to shmem:
> >  * arch_swap_restore_tags()
> 
> How do the tags get restored outside of the shmem path?  I was expecting
> to see more arch_swap_restore_tags() sites.

The restoring is done via set_pte_at() -> mte_sync_tags() ->
mte_restore_tags() in the arch code (see patch 3).
arch_swap_restore_tags() just calls mte_restore_tags() directly.

shmem is slightly problematic as it moves the page from the swap cache
to the shmem one and I think arch_swap_invalidate_page() would have
already been called by the time we get to set_pte_at() (Steven can
correct me if I got this wrong).
Steven Price April 23, 2020, 12:37 p.m. UTC | #3
On 23/04/2020 10:09, Catalin Marinas wrote:
> On Wed, Apr 22, 2020 at 11:08:10AM -0700, Dave Hansen wrote:
>> On 4/22/20 7:25 AM, Steven Price wrote:
>>> Three new hooks are added to the swap code:
>>>   * arch_prepare_to_swap() and
>>>   * arch_swap_invalidate_page() / arch_swap_invalidate_area().
>>> One new hook is added to shmem:
>>>   * arch_swap_restore_tags()
>>
>> How do the tags get restored outside of the shmem path?  I was expecting
>> to see more arch_swap_restore_tags() sites.
> 
> The restoring is done via set_pte_at() -> mte_sync_tags() ->
> mte_restore_tags() in the arch code (see patch 3).
> arch_swap_restore_tags() just calls mte_restore_tags() directly.
> 
> shmem is slightly problematic as it moves the page from the swap cache
> to the shmem one and I think arch_swap_invalidate_page() would have
> already been called by the time we get to set_pte_at() (Steven can
> correct me if I got this wrong).

That's correct - shmem can pull in pages (into its own cache) and 
invalidate the swap entries without any process having a PTE restored. 
So we need to hook shmem to restore the tags even though no PTE has 
been restored yet.

The set_pte_at() 'trick' enables delaying the restoring of the tags (in 
the usual case) until the I/O for the page has completed, which might be 
necessary in some cases if the I/O can clobber the tags in memory. I 
couldn't find a better way of hooking this.

Steve

Patch

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 329b8c8ca703..306cee75b9ec 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -475,6 +475,29 @@  static inline int arch_unmap_one(struct mm_struct *mm,
 }
 #endif
 
+#ifndef __HAVE_ARCH_PREPARE_TO_SWAP
+static inline int arch_prepare_to_swap(struct page *page)
+{
+	return 0;
+}
+#endif
+
+#ifndef __HAVE_ARCH_SWAP_INVALIDATE
+static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
+{
+}
+
+static inline void arch_swap_invalidate_area(int type)
+{
+}
+#endif
+
+#ifndef __HAVE_ARCH_SWAP_RESTORE_TAGS
+static inline void arch_swap_restore_tags(swp_entry_t entry, struct page *page)
+{
+}
+#endif
+
 #ifndef __HAVE_ARCH_PGD_OFFSET_GATE
 #define pgd_offset_gate(mm, addr)	pgd_offset(mm, addr)
 #endif
diff --git a/mm/page_io.c b/mm/page_io.c
index 76965be1d40e..7baee316ac99 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -253,6 +253,12 @@  int swap_writepage(struct page *page, struct writeback_control *wbc)
 		unlock_page(page);
 		goto out;
 	}
+	/* Arch code may have to preserve more data
+	 * than just the page contents, e.g. memory tags
+	 */
+	ret = arch_prepare_to_swap(page);
+	if (ret)
+		goto out;
 	if (frontswap_store(page) == 0) {
 		set_page_writeback(page);
 		unlock_page(page);
diff --git a/mm/shmem.c b/mm/shmem.c
index 73754ed7af69..1010b91f267e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1658,6 +1658,12 @@  static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	}
 	wait_on_page_writeback(page);
 
+	/*
+	 * Some architectures may have to restore extra metadata to the
+	 * physical page after reading from swap
+	 */
+	arch_swap_restore_tags(swap, page);
+
 	if (shmem_should_replace_page(page, gfp)) {
 		error = shmem_replace_page(&page, gfp, info, index);
 		if (error)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..b39c6520b0cf 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -722,6 +722,7 @@  static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	else
 		swap_slot_free_notify = NULL;
 	while (offset <= end) {
+		arch_swap_invalidate_page(si->type, offset);
 		frontswap_invalidate_page(si->type, offset);
 		if (swap_slot_free_notify)
 			swap_slot_free_notify(si->bdev, offset);
@@ -2645,6 +2646,7 @@  SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	frontswap_map = frontswap_map_get(p);
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
+	arch_swap_invalidate_area(p->type);
 	frontswap_invalidate_area(p->type);
 	frontswap_map_set(p, NULL);
 	mutex_unlock(&swapon_mutex);