
[V2] mm: fix use-after free of page_ext after race with memory-offline

Message ID 1658931303-17024-1-git-send-email-quic_charante@quicinc.com (mailing list archive)
State New
Series [V2] mm: fix use-after free of page_ext after race with memory-offline

Commit Message

Charan Teja Kalla July 27, 2022, 2:15 p.m. UTC
The below is one path where a race between page_ext access and offline
of the respective memory block will cause a use-after-free on the access
of the page_ext structure.

process1		              process2
---------                             ---------
a) doing /proc/page_owner          doing memory offline
			           through offline_pages.

b) PageBuddy check fails,
thus proceed to get the
page_owner information
through page_ext access.
page_ext = lookup_page_ext(page);

				    migrate_pages();
				    .................
				Since all pages are successfully
				migrated as part of the offline
				operation,send MEM_OFFLINE notification
				where for page_ext it calls:
				offline_page_ext()-->
				__free_page_ext()-->
				   free_page_ext()-->
				     vfree(ms->page_ext)
			           mem_section->page_ext = NULL

c) Check of the PAGE_EXT flags
through the page_ext->flags access
results in a use-after-free (leading
to translation faults).

As mentioned above, there is really no synchronization between the
page_ext access and its freeing in the memory offline path.

The memory offline steps (roughly) on a memory block are as below:
1) Isolate all the pages.
2) while(1)
  try to free the pages to buddy (->free_list[MIGRATE_ISOLATE]).
3) Delete the pages from this buddy list.
4) Then free page_ext. (Note: the struct page is still alive as it is
freed only during hot remove of the memory, which frees the memmap, a
step the user might not perform.)

This design leads to a state where the struct page is alive but the
struct page_ext is freed, where the latter is ideally part of the former
and just represents extended page flags.
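
For reference, the pre-patch __free_page_ext() (the exact code is visible
in the mm/page_ext.c hunk of the diff below) frees a section's page_ext
with no synchronization at all against such readers, roughly:

  static void __free_page_ext(unsigned long pfn)
  {
          struct mem_section *ms = __pfn_to_section(pfn);
          struct page_ext *base;

          if (!ms || !ms->page_ext)
                  return;
          base = get_entry(ms->page_ext, pfn);
          free_page_ext(base);    /* vfree()s the per-section page_ext array */
          ms->page_ext = NULL;    /* a racing reader may still hold a pointer into it */
  }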

The above mentioned race is just one example __but the problem persists
in the other paths too involving page_ext->flags access (eg:
page_is_idle())__. Note that memory offline waits until the last
reference on the page is dropped, i.e. any path that takes a refcount on
the page makes the memory offline operation wait. E.g.: in the
migrate_pages() operation, we take an extra refcount on the pages that
are under migration and then copy page_owner by accessing page_ext, so
such paths cannot race with the freeing of page_ext.

Fix those paths where offline races with page_ext access by
synchronizing them with the rcu lock. This is achieved in 3 steps
(sketched in code below):
1) Invalidate all the page_ext's of the sections of a memory block by
storing a flag in the LSB of mem_section->page_ext.

2) Wait until all the existing readers finish working with the
->page_ext's, using synchronize_rcu(). Any parallel process that starts
after this call will not get a page_ext, through lookup_page_ext(), for
the block on which the parallel offline operation is being performed.

3) Now safely free all sections' ->page_ext's of the block on which the
offline operation is being performed.
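
In code, the MEM_OFFLINE notifier side then becomes (this is what
offline_page_ext() does in the patch below):

  /* 1) invalidate: tag each section's page_ext pointer */
  for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
          __invalidate_page_ext(pfn);     /* sets PAGE_EXT_INVALID in the LSB */

  /* 2) wait for all pre-existing readers (page_ext_get() holders) to finish */
  synchronize_rcu();

  /* 3) no new reader can look these page_ext's up any more; free them */
  for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
          __free_page_ext(pfn);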

Thanks to David Hildenbrand for his views/suggestions on the initial
discussion[1] and to Pavan Kondeti for various inputs on this patch.

FAQs:
Q) Does page_ext_[get|put]() need to be used for every page_ext
access?
A) NO, the synchronization is really not needed in all the paths of
accessing page_ext. One case is where an extra refcount is taken on a
page whose memory block is undergoing the offline operation. This extra
refcount prevents the offline operation from succeeding, and hence the
freeing of page_ext. Another case is where the page is already being
freed and we just reset its page_owner.

Some examples where the rcu lock is not taken while accessing the
page_ext are:
1) In migration (where we also migrate the page_owner information), we
take the extra refcount on the source and destination pages and then
start the migration. This extra refcount makes the test_pages_isolated()
check fail, thus the offline operation is retried.

2) In free_pages_prepare(), we do reset the page_owner (through
page_ext), which again doesn't need protection because the page is
already being freed (through only one path).

So, users need not use page_ext_[get|put]() when they are sure that an
extra refcount is taken on the page, preventing the offline operation.
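
For the paths that do need it, the access pattern introduced by this
patch looks as below (folio_test_young() from the page_idle hunk in the
diff):

  page_ext = page_ext_get(&folio->page);  /* rcu_read_lock() + lookup_page_ext() */
  if (unlikely(!page_ext))
          return false;

  page_young = test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
  page_ext_put();                          /* rcu_read_unlock() */

  return page_young;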

Q) Why can't the page_ext be freed in the hot remove path, where the
memmap is also freed?

A) As per David's answers, there are many reasons and a few are:
1) Discussions had happened in the past to eventually also use rcu
protection for handling pfn_to_online_page(). So doing it cleanly here
is certainly an improvement.

2) It's not good having to scatter section online checks all over the
place in page ext code. Once there is a difference between active vs.
stale page ext data things get a bit messy and error prone. This is
already ugly enough in our generic memmap handling code.

3) Having on-demand allocations, such as KASAN or page ext, from the
memory online notifier is at least currently cleaner, because we don't
have to handle each and every subsystem that hooks into that during the
core memory hotadd/remove phase, which primarily only sets up the
vmemmap, direct map and memory block devices.

[1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/

Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
---
Changes in V2:
   o Use only page_ext_[get|put]() to get the page_ext in the
     required paths. Add proper comments for them.
   o Use synchronize_rcu() only once instead of calling it for
     every mem_section::page_ext of a memory block.
   o Free page_ext in 3 steps: invalidate, wait until all the
     users are finished, and then finally free page_ext.

Changes in V1:
   o Used the RCU lock while accessing the page_ext in the paths that
     can race with the memory offline operation.
   o Introduced (get|put)_page_ext() function to get the page_ext of page.
   o https://lore.kernel.org/all/1657810063-28938-1-git-send-email-quic_charante@quicinc.com/

 include/linux/page_ext.h  | 53 +++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/page_idle.h | 40 ++++++++++++++++++++++++++---------
 mm/page_ext.c             | 41 ++++++++++++++++++++++++++++++++----
 mm/page_owner.c           | 35 +++++++++++++++++++++++--------
 mm/page_table_check.c     | 10 ++++++---
 5 files changed, 153 insertions(+), 26 deletions(-)

Comments

Charan Teja Kalla July 27, 2022, 2:19 p.m. UTC | #1
Adding Michal Hocko. Sorry for the spam.

On 7/27/2022 7:45 PM, Charan Teja Kalla wrote:
> The below is one path where race between page_ext and  offline of the
> respective memory blocks will cause use-after-free on the access of
> page_ext structure.
> 
> process1		              process2
> ---------                             ---------
> a)doing /proc/page_owner           doing memory offline
> 			           through offline_pages.
> 
> b)PageBuddy check is failed
> thus proceed to get the
> page_owner information
> through page_ext access.
> page_ext = lookup_page_ext(page);
> 
> 				    migrate_pages();
> 				    .................
> 				Since all pages are successfully
> 				migrated as part of the offline
> 				operation,send MEM_OFFLINE notification
> 				where for page_ext it calls:
> 				offline_page_ext()-->
> 				__free_page_ext()-->
> 				   free_page_ext()-->
> 				     vfree(ms->page_ext)
> 			           mem_section->page_ext = NULL
> 
> c) Check for the PAGE_EXT flags
> in the page_ext->flags access
> results into the use-after-free(leading
> to the translation faults).
> 
> As mentioned above, there is really no synchronization between page_ext
> access and its freeing in the memory_offline.
> 
> The memory offline steps(roughly) on a memory block is as below:
> 1) Isolate all the pages
> 2) while(1)
>   try free the pages to buddy.(->free_list[MIGRATE_ISOLATE])
> 3) delete the pages from this buddy list.
> 4) Then free page_ext.(Note: The struct page is still alive as it is
> freed only during hot remove of the memory which frees the memmap, which
> steps the user might not perform).
> 
> This design leads to the state where struct page is alive but the struct
> page_ext is freed, where the later is ideally part of the former which
> just representing the page_flags.
> 
> The above mentioned race is just one example __but the problem persists
> in the other paths too involving page_ext->flags access(eg:
> page_is_idle())__. Since offline waits till the last reference on the
> page goes down i.e. any path that took the refcount on the page can make
> the memory offline operation to wait. Eg: In the migrate_pages()
> operation, we do take the extra refcount on the pages that are under
> migration and then we do copy page_owner by accessing page_ext. For
> 
> Fix those paths where offline races with page_ext access by maintaining
> synchronization with rcu lock and is achieved in 3 steps:
> 1) Invalidate all the page_ext's of the sections of a memory block by
> storing a flag in the LSB of mem_section->page_ext.
> 
> 2) Wait till all the existing readers to finish working with the
> ->page_ext's with synchronize_rcu(). Any parallel process that starts
> after this call will not get page_ext, through lookup_page_ext(), for
> the block parallel offline operation is being performed.
> 
> 3) Now safely free all sections ->page_ext's of the block on which
> offline operation is being performed.
> 
> Thanks to David Hildenbrand for his views/suggestions on the initial
> discussion[1] and Pavan kondeti for various inputs on this patch.
> 
> FAQ's:
> Q) Should page_ext_[get|put]() needs to be used for every page_ext
> access?
> A) NO, the synchronization is really not needed in all the paths of
> accessing page_ext. One case is where extra refcount is taken on a
> page for which memory block, this pages falls into, offline operation is
> being performed. This extra refcount makes the offline operation not to
> succeed hence the freeing of page_ext.  Another case is where the page
> is already being freed and we do reset its page_owner.
> 
> Some examples where the rcu_lock is not taken while accessing the
> page_ext are:
> 1) In migration (where we also migrate the page_owner information), we
> take the extra refcount on the source and destination pages and then
> start the migration. This extra refcount makes the test_pages_isolated()
> function to fail thus retry the offline operation.
> 
> 2) In free_pages_prepare(), we do reset the page_owner(through page_ext)
> which again doesn't need the protection to access because the page is
> already freeing (through only one path).
> 
> So, users need not to use page_ext_[get|put]() when they are sure that
> extra refcount is taken on a page preventing the offline operation.
> 
> Q) Why can't the page_ext is freed in the hot_remove path, where memmap
> is also freed ?
> 
> A) As per David's answers, there are many reasons and a few are:
> 1) Discussions had happened in the past to eventually also use rcu
> protection for handling pfn_to_online_page(). So doing it cleanly here
> is certainly an improvement.
> 
> 2) It's not good having to scatter section online checks all over the
> place in page ext code. Once there is a difference between active vs.
> stale page ext data things get a bit messy and error prone. This is
> already ugly enough in our generic memmap handling code.
> 
> 3) Having on-demand allocations, such as KASAN or page ext from the
> memory online notifier is at least currently cleaner, because we don't
> have to handle each and every subsystem that hooks into that during the
> core memory hotadd/remove phase, which primarily only setups the
> vmemmap, direct map and memory block devices.
> 
> [1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
> ---
> Changes in V2:
>    o Use only get/put_page_ext() to get the page_ext in the 
>      required paths. Add proper comments for them.
>    o Use synchronize_rcu() only once instead of calling it for
>      every mem_section::page_ext of a memory block.
>    o Free'd page_ext in 3 steps of invalidate, wait till all the
>      users are finished using and then finally free page_ext.
> 
> Changes in V1:
>    o Used the RCU lock while accessing the page_ext in the paths that
>      can race with the memory offline operation.
>    o Introduced (get|put)_page_ext() function to get the page_ext of page.
>    o https://lore.kernel.org/all/1657810063-28938-1-git-send-email-quic_charante@quicinc.com/
> 
>  include/linux/page_ext.h  | 53 +++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/page_idle.h | 40 ++++++++++++++++++++++++++---------
>  mm/page_ext.c             | 41 ++++++++++++++++++++++++++++++++----
>  mm/page_owner.c           | 35 +++++++++++++++++++++++--------
>  mm/page_table_check.c     | 10 ++++++---
>  5 files changed, 153 insertions(+), 26 deletions(-)
> 
> diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
> index fabb2e1..3a35c95 100644
> --- a/include/linux/page_ext.h
> +++ b/include/linux/page_ext.h
> @@ -5,6 +5,7 @@
>  #include <linux/types.h>
>  #include <linux/stacktrace.h>
>  #include <linux/stackdepot.h>
> +#include <linux/rcupdate.h>
>  
>  struct pglist_data;
>  struct page_ext_operations {
> @@ -36,6 +37,8 @@ struct page_ext {
>  	unsigned long flags;
>  };
>  
> +#define PAGE_EXT_INVALID       (0x1)
> +
>  extern unsigned long page_ext_size;
>  extern void pgdat_page_ext_init(struct pglist_data *pgdat);
>  
> @@ -57,6 +60,11 @@ static inline void page_ext_init(void)
>  
>  struct page_ext *lookup_page_ext(const struct page *page);
>  
> +static inline bool page_ext_invalid(struct page_ext *page_ext)
> +{
> +	return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == 1);
> +}
> +
>  static inline struct page_ext *page_ext_next(struct page_ext *curr)
>  {
>  	void *next = curr;
> @@ -64,6 +72,37 @@ static inline struct page_ext *page_ext_next(struct page_ext *curr)
>  	return next;
>  }
>  
> +/*
> + * This function gives proper page_ext of a memory section
> + * during race with the offline operation on a memory block
> + * this section falls into. Not using this function to get
> + * page_ext of a page, in code paths where extra refcount
> + * is not taken on that page eg: pfn walking, can lead to
> + * use-after-free access of page_ext.
> + */
> +static inline struct page_ext *page_ext_get(struct page *page)
> +{
> +	struct page_ext *page_ext;
> +
> +	rcu_read_lock();
> +	page_ext = lookup_page_ext(page);
> +	if (!page_ext) {
> +		rcu_read_unlock();
> +		return NULL;
> +	}
> +
> +	return page_ext;
> +}
> +
> +/*
> + * Must be called after work is done with the page_ext received
> + * with page_ext_get().
> + */
> +static inline void page_ext_put(void)
> +{
> +	rcu_read_unlock();
> +}
> +
>  #else /* !CONFIG_PAGE_EXTENSION */
>  struct page_ext;
>  
> @@ -87,5 +126,19 @@ static inline void page_ext_init_flatmem_late(void)
>  static inline void page_ext_init_flatmem(void)
>  {
>  }
> +
> +static inline struct page_ext *page_ext_get(struct page *page)
> +{
> +	return NULL;
> +}
> +
> +static inline bool page_ext_invalid(struct page_ext *page_ext)
> +{
> +	return true;
> +}
> +
> +static inline void page_ext_put(void)
> +{
> +}
>  #endif /* CONFIG_PAGE_EXTENSION */
>  #endif /* __LINUX_PAGE_EXT_H */
> diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
> index 4663dfe..3dd3718 100644
> --- a/include/linux/page_idle.h
> +++ b/include/linux/page_idle.h
> @@ -13,65 +13,85 @@
>   * If there is not enough space to store Idle and Young bits in page flags, use
>   * page ext flags instead.
>   */
> -
>  static inline bool folio_test_young(struct folio *folio)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(&folio->page);
> +	struct page_ext *page_ext;
> +	bool page_young;
>  
> +	page_ext = page_ext_get(&folio->page);
>  	if (unlikely(!page_ext))
>  		return false;
>  
> -	return test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
> +	page_young = test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
> +	page_ext_put();
> +
> +	return page_young;
>  }
>  
>  static inline void folio_set_young(struct folio *folio)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(&folio->page);
> +	struct page_ext *page_ext;
>  
> +	page_ext = page_ext_get(&folio->page);
>  	if (unlikely(!page_ext))
>  		return;
>  
>  	set_bit(PAGE_EXT_YOUNG, &page_ext->flags);
> +	page_ext_put();
>  }
>  
>  static inline bool folio_test_clear_young(struct folio *folio)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(&folio->page);
> +	struct page_ext *page_ext;
> +	bool page_young;
>  
> +	page_ext = page_ext_get(&folio->page);
>  	if (unlikely(!page_ext))
>  		return false;
>  
> -	return test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
> +	page_young = test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
> +	page_ext_put();
> +
> +	return page_young;
>  }
>  
>  static inline bool folio_test_idle(struct folio *folio)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(&folio->page);
> +	struct page_ext *page_ext;
> +	bool page_idle;
>  
> +	page_ext = page_ext_get(&folio->page);
>  	if (unlikely(!page_ext))
>  		return false;
>  
> -	return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
> +	page_idle =  test_bit(PAGE_EXT_IDLE, &page_ext->flags);
> +	page_ext_put();
> +
> +	return page_idle;
>  }
>  
>  static inline void folio_set_idle(struct folio *folio)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(&folio->page);
> +	struct page_ext *page_ext;
>  
> +	page_ext = page_ext_get(&folio->page);
>  	if (unlikely(!page_ext))
>  		return;
>  
>  	set_bit(PAGE_EXT_IDLE, &page_ext->flags);
> +	page_ext_put();
>  }
>  
>  static inline void folio_clear_idle(struct folio *folio)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(&folio->page);
> +	struct page_ext *page_ext;
>  
> +	page_ext = page_ext_get(&folio->page);
>  	if (unlikely(!page_ext))
>  		return;
>  
>  	clear_bit(PAGE_EXT_IDLE, &page_ext->flags);
> +	page_ext_put();
>  }
>  #endif /* !CONFIG_64BIT */
>  
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index 3dc715d..404a2eb 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -211,15 +211,17 @@ struct page_ext *lookup_page_ext(const struct page *page)
>  {
>  	unsigned long pfn = page_to_pfn(page);
>  	struct mem_section *section = __pfn_to_section(pfn);
> +	struct page_ext *page_ext = READ_ONCE(section->page_ext);
> +
>  	/*
>  	 * The sanity checks the page allocator does upon freeing a
>  	 * page can reach here before the page_ext arrays are
>  	 * allocated when feeding a range of pages to the allocator
>  	 * for the first time during bootup or memory hotplug.
>  	 */
> -	if (!section->page_ext)
> +	if (page_ext_invalid(page_ext))
>  		return NULL;
> -	return get_entry(section->page_ext, pfn);
> +	return get_entry(page_ext, pfn);
>  }
>  
>  static void *__meminit alloc_page_ext(size_t size, int nid)
> @@ -298,9 +300,26 @@ static void __free_page_ext(unsigned long pfn)
>  	ms = __pfn_to_section(pfn);
>  	if (!ms || !ms->page_ext)
>  		return;
> -	base = get_entry(ms->page_ext, pfn);
> +
> +	base = READ_ONCE(ms->page_ext);
> +	if (page_ext_invalid(base))
> +		base = (void *)base - PAGE_EXT_INVALID;
> +	WRITE_ONCE(ms->page_ext, NULL);
> +
> +	base = get_entry(base, pfn);
>  	free_page_ext(base);
> -	ms->page_ext = NULL;
> +}
> +
> +static void __invalidate_page_ext(unsigned long pfn)
> +{
> +	struct mem_section *ms;
> +	void *val;
> +
> +	ms = __pfn_to_section(pfn);
> +	if (!ms || !ms->page_ext)
> +		return;
> +	val = (void *)ms->page_ext + PAGE_EXT_INVALID;
> +	WRITE_ONCE(ms->page_ext, val);
>  }
>  
>  static int __meminit online_page_ext(unsigned long start_pfn,
> @@ -343,6 +362,20 @@ static int __meminit offline_page_ext(unsigned long start_pfn,
>  	start = SECTION_ALIGN_DOWN(start_pfn);
>  	end = SECTION_ALIGN_UP(start_pfn + nr_pages);
>  
> +	/*
> +	 * Freeing of page_ext is done in 3 steps to avoid
> +	 * use-after-free of it:
> +	 * 1) Traverse all the sections and mark their page_ext
> +	 *    as invalid.
> +	 * 2) Wait for all the existing users of page_ext who
> +	 *    started before invalidation to finish.
> +	 * 3) Free the page_ext.
> +	 */
> +	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
> +		__invalidate_page_ext(pfn);
> +
> +	synchronize_rcu();
> +
>  	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
>  		__free_page_ext(pfn);
>  	return 0;
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index e4c6f3f..0520dda 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -195,14 +195,16 @@ noinline void __set_page_owner(struct page *page, unsigned short order,
>  
>  void __set_page_owner_migrate_reason(struct page *page, int reason)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(page);
> +	struct page_ext *page_ext;
>  	struct page_owner *page_owner;
>  
> +	page_ext = page_ext_get(page);
>  	if (unlikely(!page_ext))
>  		return;
>  
>  	page_owner = get_page_owner(page_ext);
>  	page_owner->last_migrate_reason = reason;
> +	page_ext_put();
>  }
>  
>  void __split_page_owner(struct page *page, unsigned int nr)
> @@ -307,12 +309,12 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>  			if (PageReserved(page))
>  				continue;
>  
> -			page_ext = lookup_page_ext(page);
> +			page_ext = page_ext_get(page);
>  			if (unlikely(!page_ext))
>  				continue;
>  
>  			if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
> -				continue;
> +				goto loop;
>  
>  			page_owner = get_page_owner(page_ext);
>  			page_mt = gfp_migratetype(page_owner->gfp_mask);
> @@ -323,9 +325,12 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>  					count[pageblock_mt]++;
>  
>  				pfn = block_end_pfn;
> +				page_ext_put();
>  				break;
>  			}
>  			pfn += (1UL << page_owner->order) - 1;
> +loop:
> +			page_ext_put();
>  		}
>  	}
>  
> @@ -508,6 +513,14 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
>  	/* Find an allocated page */
>  	for (; pfn < max_pfn; pfn++) {
>  		/*
> +		 * This temporary page_owner is required so
> +		 * that we can avoid the context switches while holding
> +		 * the rcu lock and copying the page owner information to
> +		 * user through copy_to_user() or GFP_KERNEL allocations.
> +		 */
> +		struct page_owner page_owner_tmp = {0};
> +
> +		/*
>  		 * If the new page is in a new MAX_ORDER_NR_PAGES area,
>  		 * validate the area as existing, skip it if not
>  		 */
> @@ -525,7 +538,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
>  			continue;
>  		}
>  
> -		page_ext = lookup_page_ext(page);
> +		page_ext = page_ext_get(page);
>  		if (unlikely(!page_ext))
>  			continue;
>  
> @@ -534,14 +547,14 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
>  		 * because we don't hold the zone lock.
>  		 */
>  		if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
> -			continue;
> +			goto loop;
>  
>  		/*
>  		 * Although we do have the info about past allocation of free
>  		 * pages, it's not relevant for current memory usage.
>  		 */
>  		if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
> -			continue;
> +			goto loop;
>  
>  		page_owner = get_page_owner(page_ext);
>  
> @@ -550,7 +563,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
>  		 * would inflate the stats.
>  		 */
>  		if (!IS_ALIGNED(pfn, 1 << page_owner->order))
> -			continue;
> +			goto loop;
>  
>  		/*
>  		 * Access to page_ext->handle isn't synchronous so we should
> @@ -558,13 +571,17 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
>  		 */
>  		handle = READ_ONCE(page_owner->handle);
>  		if (!handle)
> -			continue;
> +			goto loop;
>  
>  		/* Record the next PFN to read in the file offset */
>  		*ppos = (pfn - min_low_pfn) + 1;
>  
> +		memcpy(&page_owner_tmp, page_owner, sizeof(struct page_owner));
> +		page_ext_put();
>  		return print_page_owner(buf, count, pfn, page,
> -				page_owner, handle);
> +				&page_owner_tmp, handle);
> +loop:
> +		page_ext_put();
>  	}
>  
>  	return 0;
> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
> index e206274..ec371b9 100644
> --- a/mm/page_table_check.c
> +++ b/mm/page_table_check.c
> @@ -68,7 +68,7 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
>  		return;
>  
>  	page = pfn_to_page(pfn);
> -	page_ext = lookup_page_ext(page);
> +	page_ext = page_ext_get(page);
>  	anon = PageAnon(page);
>  
>  	for (i = 0; i < pgcnt; i++) {
> @@ -83,6 +83,7 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
>  		}
>  		page_ext = page_ext_next(page_ext);
>  	}
> +	page_ext_put();
>  }
>  
>  /*
> @@ -103,7 +104,7 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
>  		return;
>  
>  	page = pfn_to_page(pfn);
> -	page_ext = lookup_page_ext(page);
> +	page_ext = page_ext_get(page);
>  	anon = PageAnon(page);
>  
>  	for (i = 0; i < pgcnt; i++) {
> @@ -118,6 +119,7 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
>  		}
>  		page_ext = page_ext_next(page_ext);
>  	}
> +	page_ext_put();
>  }
>  
>  /*
> @@ -126,9 +128,10 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
>   */
>  void __page_table_check_zero(struct page *page, unsigned int order)
>  {
> -	struct page_ext *page_ext = lookup_page_ext(page);
> +	struct page_ext *page_ext;
>  	unsigned long i;
>  
> +	page_ext = page_ext_get(page);
>  	BUG_ON(!page_ext);
>  	for (i = 0; i < (1ul << order); i++) {
>  		struct page_table_check *ptc = get_page_table_check(page_ext);
> @@ -137,6 +140,7 @@ void __page_table_check_zero(struct page *page, unsigned int order)
>  		BUG_ON(atomic_read(&ptc->file_map_count));
>  		page_ext = page_ext_next(page_ext);
>  	}
> +	page_ext_put();
>  }
>  
>  void __page_table_check_pte_clear(struct mm_struct *mm, unsigned long addr,
David Hildenbrand July 27, 2022, 5:29 p.m. UTC | #2
On 27.07.22 16:15, Charan Teja Kalla wrote:
> The below is one path where race between page_ext and  offline of the
> respective memory blocks will cause use-after-free on the access of
> page_ext structure.
> 
> process1		              process2
> ---------                             ---------
> a)doing /proc/page_owner           doing memory offline
> 			           through offline_pages.
> 
> b)PageBuddy check is failed
> thus proceed to get the
> page_owner information
> through page_ext access.
> page_ext = lookup_page_ext(page);
> 
> 				    migrate_pages();
> 				    .................
> 				Since all pages are successfully
> 				migrated as part of the offline
> 				operation,send MEM_OFFLINE notification
> 				where for page_ext it calls:
> 				offline_page_ext()-->
> 				__free_page_ext()-->
> 				   free_page_ext()-->
> 				     vfree(ms->page_ext)
> 			           mem_section->page_ext = NULL
> 
> c) Check for the PAGE_EXT flags
> in the page_ext->flags access
> results into the use-after-free(leading
> to the translation faults).
> 
> As mentioned above, there is really no synchronization between page_ext
> access and its freeing in the memory_offline.
> 
> The memory offline steps(roughly) on a memory block is as below:
> 1) Isolate all the pages
> 2) while(1)
>   try free the pages to buddy.(->free_list[MIGRATE_ISOLATE])
> 3) delete the pages from this buddy list.
> 4) Then free page_ext.(Note: The struct page is still alive as it is
> freed only during hot remove of the memory which frees the memmap, which
> steps the user might not perform).
> 
> This design leads to the state where struct page is alive but the struct
> page_ext is freed, where the later is ideally part of the former which
> just representing the page_flags.
> 
> The above mentioned race is just one example __but the problem persists
> in the other paths too involving page_ext->flags access(eg:
> page_is_idle())__. Since offline waits till the last reference on the
> page goes down i.e. any path that took the refcount on the page can make
> the memory offline operation to wait. Eg: In the migrate_pages()
> operation, we do take the extra refcount on the pages that are under
> migration and then we do copy page_owner by accessing page_ext. For
> 
> Fix those paths where offline races with page_ext access by maintaining
> synchronization with rcu lock and is achieved in 3 steps:
> 1) Invalidate all the page_ext's of the sections of a memory block by
> storing a flag in the LSB of mem_section->page_ext.
> 
> 2) Wait till all the existing readers to finish working with the
> ->page_ext's with synchronize_rcu(). Any parallel process that starts
> after this call will not get page_ext, through lookup_page_ext(), for
> the block parallel offline operation is being performed.
> 
> 3) Now safely free all sections ->page_ext's of the block on which
> offline operation is being performed.
> 
> Thanks to David Hildenbrand for his views/suggestions on the initial
> discussion[1] and Pavan kondeti for various inputs on this patch.
> 
> FAQ's:
> Q) Should page_ext_[get|put]() needs to be used for every page_ext
> access?
> A) NO, the synchronization is really not needed in all the paths of
> accessing page_ext. One case is where extra refcount is taken on a
> page for which memory block, this pages falls into, offline operation is
> being performed. This extra refcount makes the offline operation not to
> succeed hence the freeing of page_ext.  Another case is where the page
> is already being freed and we do reset its page_owner.
> 
> Some examples where the rcu_lock is not taken while accessing the
> page_ext are:
> 1) In migration (where we also migrate the page_owner information), we
> take the extra refcount on the source and destination pages and then
> start the migration. This extra refcount makes the test_pages_isolated()
> function to fail thus retry the offline operation.
> 
> 2) In free_pages_prepare(), we do reset the page_owner(through page_ext)
> which again doesn't need the protection to access because the page is
> already freeing (through only one path).
> 
> So, users need not to use page_ext_[get|put]() when they are sure that
> extra refcount is taken on a page preventing the offline operation.
> 
> Q) Why can't the page_ext is freed in the hot_remove path, where memmap
> is also freed ?
> 
> A) As per David's answers, there are many reasons and a few are:
> 1) Discussions had happened in the past to eventually also use rcu
> protection for handling pfn_to_online_page(). So doing it cleanly here
> is certainly an improvement.
> 
> 2) It's not good having to scatter section online checks all over the
> place in page ext code. Once there is a difference between active vs.
> stale page ext data things get a bit messy and error prone. This is
> already ugly enough in our generic memmap handling code.
> 
> 3) Having on-demand allocations, such as KASAN or page ext from the
> memory online notifier is at least currently cleaner, because we don't
> have to handle each and every subsystem that hooks into that during the
> core memory hotadd/remove phase, which primarily only setups the
> vmemmap, direct map and memory block devices.
> 
> [1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/
> 

I guess if we care about the synchronize_rcu() we could go crazy with
temporary allocations for data-to-free + call_rcu().
Charan Teja Kalla July 28, 2022, 9:53 a.m. UTC | #3
Thanks David for the inputs!!

On 7/27/2022 10:59 PM, David Hildenbrand wrote:
>> Fix those paths where offline races with page_ext access by maintaining
>> synchronization with rcu lock and is achieved in 3 steps:
>> 1) Invalidate all the page_ext's of the sections of a memory block by
>> storing a flag in the LSB of mem_section->page_ext.
>>
>> 2) Wait till all the existing readers to finish working with the
>> ->page_ext's with synchronize_rcu(). Any parallel process that starts
>> after this call will not get page_ext, through lookup_page_ext(), for
>> the block parallel offline operation is being performed.
>>
>> 3) Now safely free all sections ->page_ext's of the block on which
>> offline operation is being performed.
>>
>> Thanks to David Hildenbrand for his views/suggestions on the initial
>> discussion[1] and Pavan kondeti for various inputs on this patch.
>>
>> FAQ's:
>> Q) Should page_ext_[get|put]() needs to be used for every page_ext
>> access?
>> A) NO, the synchronization is really not needed in all the paths of
>> accessing page_ext. One case is where extra refcount is taken on a
>> page for which memory block, this pages falls into, offline operation is
>> being performed. This extra refcount makes the offline operation not to
>> succeed hence the freeing of page_ext.  Another case is where the page
>> is already being freed and we do reset its page_owner.
>>
>> Some examples where the rcu_lock is not taken while accessing the
>> page_ext are:
>> 1) In migration (where we also migrate the page_owner information), we
>> take the extra refcount on the source and destination pages and then
>> start the migration. This extra refcount makes the test_pages_isolated()
>> function to fail thus retry the offline operation.
>>
>> 2) In free_pages_prepare(), we do reset the page_owner(through page_ext)
>> which again doesn't need the protection to access because the page is
>> already freeing (through only one path).
>>
>> So, users need not to use page_ext_[get|put]() when they are sure that
>> extra refcount is taken on a page preventing the offline operation.
>>
>> Q) Why can't the page_ext is freed in the hot_remove path, where memmap
>> is also freed ?
>>
>> A) As per David's answers, there are many reasons and a few are:
>> 1) Discussions had happened in the past to eventually also use rcu
>> protection for handling pfn_to_online_page(). So doing it cleanly here
>> is certainly an improvement.
>>
>> 2) It's not good having to scatter section online checks all over the
>> place in page ext code. Once there is a difference between active vs.
>> stale page ext data things get a bit messy and error prone. This is
>> already ugly enough in our generic memmap handling code.
>>
>> 3) Having on-demand allocations, such as KASAN or page ext from the
>> memory online notifier is at least currently cleaner, because we don't
>> have to handle each and every subsystem that hooks into that during the
>> core memory hotadd/remove phase, which primarily only setups the
>> vmemmap, direct map and memory block devices.
>>
>> [1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/
>>
> I guess if we care about the synchronize_rcu() we could go crazy with
> temporary allocations for data-to-free + call_rcu().

IMO, the overhead of a single synchronize_rcu() call shouldn't be a
concern, especially since the memory offline operation itself is
expected to complete in seconds. On the Snapdragon system, the fastest I
have seen it complete is 3-4 secs for a complete memory block of size
512M. And agreed that this time depends on a lot of other factors too,
but I wanted to raise the point that this is really not a path where
tiny optimizations should be strictly considered. __Please help in
correcting me if I am really downplaying the scenario here__.

But then I moved to a single synchronize_rcu() just to avoid any visible
effects that could be caused by multiple synchronize_rcu() calls for a
single memory block with a lot of sections.

Having said that, I am open to going for call_rcu(), and in fact it will
be a much simpler change where I can do the freeing of page_ext in
__free_page_ext() itself, which is called for every section, thereby
avoiding the extra tracking flag PAGE_EXT_INVALID.
      ...........
        WRITE_ONCE(ms->page_ext, NULL);
	call_rcu(rcu_head, fun); // Free in fun()
       .............
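
Expanded slightly, and only as a rough sketch (the wrapper struct, the
callback name and the allocation-failure fallback are made up here), that
per-section variant could look like:

    struct page_ext_free_rcu {
            struct rcu_head rcu;
            void *addr;                     /* value to pass to free_page_ext() */
    };

    static void page_ext_free_rcu_fn(struct rcu_head *rcu)
    {
            struct page_ext_free_rcu *r =
                    container_of(rcu, struct page_ext_free_rcu, rcu);

            free_page_ext(r->addr);
            kfree(r);
    }

    /* in __free_page_ext(), per section: */
    base = get_entry(ms->page_ext, pfn);
    WRITE_ONCE(ms->page_ext, NULL);

    r = kmalloc(sizeof(*r), GFP_KERNEL);
    if (r) {
            r->addr = base;
            call_rcu(&r->rcu, page_ext_free_rcu_fn);
    } else {
            /* allocation failed: fall back to waiting synchronously */
            synchronize_rcu();
            free_page_ext(base);
    }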

Or is your opinion to use call_rcu() only once, in place of
synchronize_rcu(), after invalidating all the page_ext's of the memory
block?

Thanks,
Charan
Michal Hocko July 28, 2022, 2:37 p.m. UTC | #4
On Wed 27-07-22 19:45:03, Charan Teja Kalla wrote:
[...]

Thanks for looking into this and improving the changelog. It is much
easier to follow and also much better to understand.

> FAQ's:
> Q) Should page_ext_[get|put]() needs to be used for every page_ext
> access?
> A) NO, the synchronization is really not needed in all the paths of
> accessing page_ext. One case is where extra refcount is taken on a
> page for which memory block, this pages falls into, offline operation is
> being performed. This extra refcount makes the offline operation not to
> succeed hence the freeing of page_ext.  Another case is where the page
> is already being freed and we do reset its page_owner.

This is just subtlety and something that can get misunderstood over
time. Moreover there is no documentation explaining the difference.
What is the reason to have these two different APIs in the first place.
RCU read side is almost zero cost. So what is the point?

[...]

> Q) Why can't the page_ext is freed in the hot_remove path, where memmap
> is also freed ?
> 
> A) As per David's answers, there are many reasons and a few are:
> 1) Discussions had happened in the past to eventually also use rcu
> protection for handling pfn_to_online_page(). So doing it cleanly here
> is certainly an improvement.
> 
> 2) It's not good having to scatter section online checks all over the
> place in page ext code. Once there is a difference between active vs.
> stale page ext data things get a bit messy and error prone. This is
> already ugly enough in our generic memmap handling code.
> 
> 3) Having on-demand allocations, such as KASAN or page ext from the
> memory online notifier is at least currently cleaner, because we don't
> have to handle each and every subsystem that hooks into that during the
> core memory hotadd/remove phase, which primarily only setups the
> vmemmap, direct map and memory block devices.

I cannot say I agree with this reasoning but whatever.

Few more notes below

> [1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
> ---
> Changes in V2:
>    o Use only get/put_page_ext() to get the page_ext in the 
>      required paths. Add proper comments for them.
>    o Use synchronize_rcu() only once instead of calling it for
>      every mem_section::page_ext of a memory block.
>    o Free'd page_ext in 3 steps of invalidate, wait till all the
>      users are finished using and then finally free page_ext.
> 
> Changes in V1:
>    o Used the RCU lock while accessing the page_ext in the paths that
>      can race with the memory offline operation.
>    o Introduced (get|put)_page_ext() function to get the page_ext of page.
>    o https://lore.kernel.org/all/1657810063-28938-1-git-send-email-quic_charante@quicinc.com/
> 
>  include/linux/page_ext.h  | 53 +++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/page_idle.h | 40 ++++++++++++++++++++++++++---------
>  mm/page_ext.c             | 41 ++++++++++++++++++++++++++++++++----
>  mm/page_owner.c           | 35 +++++++++++++++++++++++--------
>  mm/page_table_check.c     | 10 ++++++---
>  5 files changed, 153 insertions(+), 26 deletions(-)
> 
> diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
> index fabb2e1..3a35c95 100644
> --- a/include/linux/page_ext.h
> +++ b/include/linux/page_ext.h
> @@ -5,6 +5,7 @@
>  #include <linux/types.h>
>  #include <linux/stacktrace.h>
>  #include <linux/stackdepot.h>
> +#include <linux/rcupdate.h>
>  
>  struct pglist_data;
>  struct page_ext_operations {
> @@ -36,6 +37,8 @@ struct page_ext {
>  	unsigned long flags;
>  };
>  
> +#define PAGE_EXT_INVALID       (0x1)
> +
>  extern unsigned long page_ext_size;
>  extern void pgdat_page_ext_init(struct pglist_data *pgdat);
>  
> @@ -57,6 +60,11 @@ static inline void page_ext_init(void)
>  
>  struct page_ext *lookup_page_ext(const struct page *page);
>  
> +static inline bool page_ext_invalid(struct page_ext *page_ext)
> +{
> +	return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == 1);
> +}
> +

No real reason to expose this into a header file. Nothing but page_ext.c
should know and care about this.

>  static inline struct page_ext *page_ext_next(struct page_ext *curr)
>  {
>  	void *next = curr;
> @@ -64,6 +72,37 @@ static inline struct page_ext *page_ext_next(struct page_ext *curr)
>  	return next;
>  }
>  
> +/*
> + * This function gives proper page_ext of a memory section
> + * during race with the offline operation on a memory block
> + * this section falls into. Not using this function to get
> + * page_ext of a page, in code paths where extra refcount
> + * is not taken on that page eg: pfn walking, can lead to
> + * use-after-free access of page_ext.
> + */
> +static inline struct page_ext *page_ext_get(struct page *page)
> +{
> +	struct page_ext *page_ext;
> +
> +	rcu_read_lock();
> +	page_ext = lookup_page_ext(page);
> +	if (!page_ext) {
> +		rcu_read_unlock();
> +		return NULL;
> +	}
> +
> +	return page_ext;

If you make this an extern you can actually hide lookup_page_ext and
prevent from future bugs where people are using non serialized API
without realizing that.

[...]
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index 3dc715d..404a2eb 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -211,15 +211,17 @@ struct page_ext *lookup_page_ext(const struct page *page)
>  {
>  	unsigned long pfn = page_to_pfn(page);
>  	struct mem_section *section = __pfn_to_section(pfn);
> +	struct page_ext *page_ext = READ_ONCE(section->page_ext);
> +

	WARN_ON_ONCE(!rcu_read_lock_held());

>  	/*
>  	 * The sanity checks the page allocator does upon freeing a
>  	 * page can reach here before the page_ext arrays are
>  	 * allocated when feeding a range of pages to the allocator
>  	 * for the first time during bootup or memory hotplug.
>  	 */
> -	if (!section->page_ext)
> +	if (page_ext_invalid(page_ext))
>  		return NULL;
> -	return get_entry(section->page_ext, pfn);
> +	return get_entry(page_ext, pfn);
>  }
>  
>  static void *__meminit alloc_page_ext(size_t size, int nid)
> @@ -298,9 +300,26 @@ static void __free_page_ext(unsigned long pfn)
>  	ms = __pfn_to_section(pfn);
>  	if (!ms || !ms->page_ext)
>  		return;
> -	base = get_entry(ms->page_ext, pfn);
> +
> +	base = READ_ONCE(ms->page_ext);
> +	if (page_ext_invalid(base))
> +		base = (void *)base - PAGE_EXT_INVALID;

All page_ext accesses should use the same fetched pointer including the
ms->page_ext check. Also page_ext_invalid _must_ be true here otherwise
something bad is going on so I would go with
	if (WARN_ON_ONCE(!page_ext_invalid(base)))
		return;
	base = (void *)base - PAGE_EXT_INVALID;

> +	WRITE_ONCE(ms->page_ext, NULL);
> +
> +	base = get_entry(base, pfn);
>  	free_page_ext(base);
> -	ms->page_ext = NULL;
> +}
> +
Charan Teja Kalla July 29, 2022, 3:47 p.m. UTC | #5
Thanks Michal for the reviews!!

On 7/28/2022 8:07 PM, Michal Hocko wrote:
>> FAQ's:
>> Q) Should page_ext_[get|put]() needs to be used for every page_ext
>> access?
>> A) NO, the synchronization is really not needed in all the paths of
>> accessing page_ext. One case is where extra refcount is taken on a
>> page for which memory block, this pages falls into, offline operation is
>> being performed. This extra refcount makes the offline operation not to
>> succeed hence the freeing of page_ext.  Another case is where the page
>> is already being freed and we do reset its page_owner.
> This is just subtlety and something that can get misunderstood over
> time. Moreover there is no documentation explaining the difference.
> What is the reason to have these two different APIs in the first place.
> RCU read side is almost zero cost. So what is the point?
Currently, not all the places where page_ext is being used are put under
the rcu lock. I just used the rcu lock in the places where it is possible
to have a use-after-free of page_ext. Do you recommend using the rcu lock
while working with page_ext in all the places?

My only point here is that there may be a non-atomic context across
page_ext_get/put(), and if users are sure that this page's page_ext will
not be freed by a parallel offline operation, they need not take the rcu
lock.

I agree that this can be misunderstood over time; let me check if I can
use page_ext_get/put in all the places.

>> @@ -57,6 +60,11 @@ static inline void page_ext_init(void)
>>  
>>  struct page_ext *lookup_page_ext(const struct page *page);
>>  
>> +static inline bool page_ext_invalid(struct page_ext *page_ext)
>> +{
>> +	return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == 1);
>> +}
>> +
> No real reason to expose this into a header file. Nothing but page_ext.c
> should know and care about this.
Agree. Will move it accordingly.

> 
>> +static inline struct page_ext *page_ext_get(struct page *page)
>> +{
>> +	struct page_ext *page_ext;
>> +
>> +	rcu_read_lock();
>> +	page_ext = lookup_page_ext(page);
>> +	if (!page_ext) {
>> +		rcu_read_unlock();
>> +		return NULL;
>> +	}
>> +
>> +	return page_ext;
> If you make this an extern you can actually hide lookup_page_ext and
> prevent from future bugs where people are using non serialized API
> without realizing that.

This design looks good. Let me check the feasibility in its implementation.

>> diff --git a/mm/page_ext.c b/mm/page_ext.c
>> index 3dc715d..404a2eb 100644
>> --- a/mm/page_ext.c
>> +++ b/mm/page_ext.c
>> @@ -211,15 +211,17 @@ struct page_ext *lookup_page_ext(const struct page *page)
>>  {
>>  	unsigned long pfn = page_to_pfn(page);
>>  	struct mem_section *section = __pfn_to_section(pfn);
>> +	struct page_ext *page_ext = READ_ONCE(section->page_ext);
>> +
> 	WARN_ON_ONCE(!rcu_read_lock_held());

Again, this requires that page_ext usage is always done under the rcu
lock by the user.

> 
>>  static void *__meminit alloc_page_ext(size_t size, int nid)
>> @@ -298,9 +300,26 @@ static void __free_page_ext(unsigned long pfn)
>>  	ms = __pfn_to_section(pfn);
>>  	if (!ms || !ms->page_ext)
>>  		return;
>> -	base = get_entry(ms->page_ext, pfn);
>> +
>> +	base = READ_ONCE(ms->page_ext);
>> +	if (page_ext_invalid(base))
>> +		base = (void *)base - PAGE_EXT_INVALID;
> All page_ext accesses should use the same fetched pointer including the
> ms->page_ext check. Also page_ext_invalid _must_ be true here otherwise
> something bad is going on so I would go with
> 	if (WARN_ON_ONCE(!page_ext_invalid(base)))
> 		return;
> 	base = (void *)base - PAGE_EXT_INVALID;

The roll back operation in the online_page_ext(), where we free the
allocated page_ext's, will not have the PAGE_EXT_INVALID flag thus
WARN() may not work here. no?
> 

Thanks,
Charan
Michal Hocko Aug. 1, 2022, 8:27 a.m. UTC | #6
On Fri 29-07-22 21:17:44, Charan Teja Kalla wrote:
> Thanks Michal for the reviews!!
> 
> On 7/28/2022 8:07 PM, Michal Hocko wrote:
> >> FAQ's:
> >> Q) Should page_ext_[get|put]() needs to be used for every page_ext
> >> access?
> >> A) NO, the synchronization is really not needed in all the paths of
> >> accessing page_ext. One case is where extra refcount is taken on a
> >> page for which memory block, this pages falls into, offline operation is
> >> being performed. This extra refcount makes the offline operation not to
> >> succeed hence the freeing of page_ext.  Another case is where the page
> >> is already being freed and we do reset its page_owner.
> > This is just subtlety and something that can get misunderstood over
> > time. Moreover there is no documentation explaining the difference.
> > What is the reason to have these two different APIs in the first place.
> > RCU read side is almost zero cost. So what is the point?
> Currently not all the places where page_ext is being used is put under
> the rcu_lock. I just used rcu lock in the places where it is possible to
> have the use-after-free of page_ext. You recommend to use rcu lock while
> using with page_ext in all the places?

Yes. Using locking inconsistently just begs for future problems. There
should be a very good reason to use lockless approach in some paths and
that would be where the locking overhead is not really acceptable or
when the locking cannot be used for other reasons.

RCU read lock is essentially zero overhead so the only reason would be
that the critical section would require to sleep. Is any of that the
case?

If there is a real need to have a lockless variant then I would propose
to add __page_ext_get/put which would be lockless and clearly documented
under which contexts it can be used and enfore those condictions (e.g.
reference count assumption).

> My only point here is since there may be a non-atomic context exist
> across page_ext_get/put() and If users are sure that this page's
> page_ext will not be freed by parallel offline operation, they need not
> get the rcu lock.

Existing users are probably easy to check but think about the future.
Most developers (even a large part of the MM community) is not deeply
familiar with the memory hotplug. Not to mention people do not tend to
follow development in that area and assumptions might change.

[...]
> >> @@ -298,9 +300,26 @@ static void __free_page_ext(unsigned long pfn)
> >>  	ms = __pfn_to_section(pfn);
> >>  	if (!ms || !ms->page_ext)
> >>  		return;
> >> -	base = get_entry(ms->page_ext, pfn);
> >> +
> >> +	base = READ_ONCE(ms->page_ext);
> >> +	if (page_ext_invalid(base))
> >> +		base = (void *)base - PAGE_EXT_INVALID;
> > All page_ext accesses should use the same fetched pointer including the
> > ms->page_ext check. Also page_ext_invalid _must_ be true here otherwise
> > something bad is going on so I would go with
> > 	if (WARN_ON_ONCE(!page_ext_invalid(base)))
> > 		return;
> > 	base = (void *)base - PAGE_EXT_INVALID;
> 
> The roll back operation in the online_page_ext(), where we free the
> allocated page_ext's, will not have the PAGE_EXT_INVALID flag thus
> WARN() may not work here. no?

Wouldn't ms->page_ext be NULL in that case?
David Hildenbrand Aug. 1, 2022, 8:30 a.m. UTC | #7
On 28.07.22 11:53, Charan Teja Kalla wrote:
> Thanks David for the inputs!!
> 
> On 7/27/2022 10:59 PM, David Hildenbrand wrote:
>>> Fix those paths where offline races with page_ext access by maintaining
>>> synchronization with rcu lock and is achieved in 3 steps:
>>> 1) Invalidate all the page_ext's of the sections of a memory block by
>>> storing a flag in the LSB of mem_section->page_ext.
>>>
>>> 2) Wait till all the existing readers to finish working with the
>>> ->page_ext's with synchronize_rcu(). Any parallel process that starts
>>> after this call will not get page_ext, through lookup_page_ext(), for
>>> the block parallel offline operation is being performed.
>>>
>>> 3) Now safely free all sections ->page_ext's of the block on which
>>> offline operation is being performed.
>>>
>>> Thanks to David Hildenbrand for his views/suggestions on the initial
>>> discussion[1] and Pavan kondeti for various inputs on this patch.
>>>
>>> FAQ's:
>>> Q) Should page_ext_[get|put]() needs to be used for every page_ext
>>> access?
>>> A) NO, the synchronization is really not needed in all the paths of
>>> accessing page_ext. One case is where extra refcount is taken on a
>>> page for which memory block, this pages falls into, offline operation is
>>> being performed. This extra refcount makes the offline operation not to
>>> succeed hence the freeing of page_ext.  Another case is where the page
>>> is already being freed and we do reset its page_owner.
>>>
>>> Some examples where the rcu_lock is not taken while accessing the
>>> page_ext are:
>>> 1) In migration (where we also migrate the page_owner information), we
>>> take the extra refcount on the source and destination pages and then
>>> start the migration. This extra refcount makes the test_pages_isolated()
>>> function to fail thus retry the offline operation.
>>>
>>> 2) In free_pages_prepare(), we do reset the page_owner(through page_ext)
>>> which again doesn't need the protection to access because the page is
>>> already freeing (through only one path).
>>>
>>> So, users need not to use page_ext_[get|put]() when they are sure that
>>> extra refcount is taken on a page preventing the offline operation.
>>>
>>> Q) Why can't the page_ext is freed in the hot_remove path, where memmap
>>> is also freed ?
>>>
>>> A) As per David's answers, there are many reasons and a few are:
>>> 1) Discussions had happened in the past to eventually also use rcu
>>> protection for handling pfn_to_online_page(). So doing it cleanly here
>>> is certainly an improvement.
>>>
>>> 2) It's not good having to scatter section online checks all over the
>>> place in page ext code. Once there is a difference between active vs.
>>> stale page ext data things get a bit messy and error prone. This is
>>> already ugly enough in our generic memmap handling code.
>>>
>>> 3) Having on-demand allocations, such as KASAN or page ext from the
>>> memory online notifier is at least currently cleaner, because we don't
>>> have to handle each and every subsystem that hooks into that during the
>>> core memory hotadd/remove phase, which primarily only setups the
>>> vmemmap, direct map and memory block devices.
>>>
>>> [1] https://lore.kernel.org/linux-mm/59edde13-4167-8550-86f0-11fc67882107@quicinc.com/
>>>
>> I guess if we care about the synchronize_rcu() we could go crazy with
>> temporary allocations for data-to-free + call_rcu().
> 
> IMO, single synchronize_rcu() call overhead shouldn't be cared
> especially if the memory offline operation it self is expected to
> complete in seconds. On the Snapdragon system, I can see the lowest it
> can complete in 3-4secs for a complete memory block of size 512M. And
> agree that this time depends on lot of other factors too but wanted to
> raise a point that it is really not a path where tiny optimizations
> should be strictly considered. __Please help in correcting me If I am
> really downplaying the scenario here__.

I agree that we should optimize only if we find this to be an issue.

> 
> But then I moved to single synchronize_rcu() just to avoid any visible
> effects that can cause by multiple synchronize_rcu() for a single memory
> block with lot of sections.

Makes sense.

> 
> Having said that, I am open to go for call_rcu() and infact it will be a
> much simple change where I can do the freeing of page_ext in the
> __free_page_ext() itself which is called for every section there by
> avoid the extra tracking flag PAGE_EXT_INVALID.
>       ...........
>         WRITE_ONCE(ms->page_ext, NULL);
> 	call_rcu(rcu_head, fun); // Free in fun()
>        .............
> 
> Or your opinion is to use call_rcu () only once in place of
> synchronize_rcu() after invalidating all the page_ext's of memory block?


Yeah, that would be an option. And if you fail to allocate a temporary
buffer to hold the data-to-free (structure containing rcu_head), the
slower fallback path would be synchronize_rcu().

But again, I'm also not sure if we have to optimize here right now.
Charan Teja Kalla Aug. 1, 2022, 11:50 a.m. UTC | #8
Thanks David!!

On 8/1/2022 2:00 PM, David Hildenbrand wrote:
>> Having said that, I am open to go for call_rcu() and infact it will be a
>> much simple change where I can do the freeing of page_ext in the
>> __free_page_ext() itself which is called for every section there by
>> avoid the extra tracking flag PAGE_EXT_INVALID.
>>       ...........
>>         WRITE_ONCE(ms->page_ext, NULL);
>> 	call_rcu(rcu_head, fun); // Free in fun()
>>        .............
>>
>> Or your opinion is to use call_rcu () only once in place of
>> synchronize_rcu() after invalidating all the page_ext's of memory block?
> 
> Yeah, that would be an option. And if you fail to allocate a temporary
> buffer to hold the data-to-free (structure containing rcu_head), the
> slower fallback path would be synchronize_rcu().
> 

I will add a note in the code that, if in the future some optimization
needs to be done in this path, this option can be considered. Hope this
will be fine for now?

> But again, I'm also not sure if we have to optimize here right now.

Thanks,
Charan
David Hildenbrand Aug. 1, 2022, 12:04 p.m. UTC | #9
On 01.08.22 13:50, Charan Teja Kalla wrote:
> Thanks David!!
> 
> On 8/1/2022 2:00 PM, David Hildenbrand wrote:
>>> Having said that, I am open to going for call_rcu(), and in fact it will
>>> be a much simpler change where I can do the freeing of page_ext in
>>> __free_page_ext() itself, which is called for every section, thereby
>>> avoiding the extra tracking flag PAGE_EXT_INVALID.
>>>       ...........
>>>         WRITE_ONCE(ms->page_ext, NULL);
>>> 	call_rcu(rcu_head, fun); // Free in fun()
>>>        .............
>>>
>>> Or is your opinion to use call_rcu() only once, in place of
>>> synchronize_rcu(), after invalidating all the page_ext's of the memory block?
>>
>> Yeah, that would be an option. And if you fail to allocate a temporary
>> buffer to hold the data-to-free (structure containing rcu_head), the
>> slower fallback path would be synchronize_rcu().
>>
> 
> I will add a note in the code that if some optimization needs to be done
> in this path in the future, this option can be considered. Hope this
> will be fine for now?

IMHO yes. But no need to add all these details to the patch description
(try to keep it short and precise). You can always just link to the
discussion, e.g., via

https://lkml.kernel.org/r/a26ce299-aed1-b8ad-711e-a49e82bdd180@quicinc.com
Charan Teja Kalla Aug. 1, 2022, 1:01 p.m. UTC | #10
Thanks Michal !!

On 8/1/2022 1:57 PM, Michal Hocko wrote:
>> Currently, not all the places where page_ext is used are put under the
>> RCU lock. I only used the RCU lock in the places where a use-after-free
>> of page_ext is possible. Do you recommend using the RCU lock everywhere
>> page_ext is used?
> Yes. Using locking inconsistently just begs for future problems. There
> should be a very good reason to use a lockless approach in some paths,
> and that would be where the locking overhead is really not acceptable or
> where locking cannot be used for other reasons.
> 
> The RCU read lock is essentially zero overhead, so the only reason would
> be that the critical section needs to sleep. Is that the case here?
> 
> If there is a real need for a lockless variant then I would propose
> adding __page_ext_get/put, which would be lockless and clearly
> documented as to which contexts it can be used in, and enforce those
> conditions (e.g. the reference count assumption).
> 

Let me try to use a single interface here.
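
For illustration, a rough sketch of what such a documented lockless
variant could look like (the name __page_ext_get() and the exact check
are assumptions for the sake of the example, not part of this patch):

    /*
     * Sketch only: lockless lookup for callers that already prevent the
     * memory block from being offlined, e.g. by holding an extra
     * reference on the page.  Everyone else must use page_ext_get().
     */
    static inline struct page_ext *__page_ext_get(struct page *page)
    {
            /* enforce the documented precondition */
            VM_WARN_ON_ONCE(!page_count(page));
            return lookup_page_ext(page);
    }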

>> The rollback operation in online_page_ext(), where we free the
>> allocated page_ext's, will not have the PAGE_EXT_INVALID flag set,
>> thus WARN() may not work here, no?
> Wouldn't ms->page_ext be NULL in that case?
I don't think that ms->page_ext would be NULL here.
online_page_ext():
  (a) for (pfn = start; !fail && pfn < end; pfn += PAGES_PER_SECTION)
     fail = init_section_page_ext():
	   ms->page_ext = (void *)base - page_ext_size * pfn;

  //If fail = -ERROR in the middle, roll back operation.
  (b) for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
       __free_page_ext();

   Here (b) can be called on the sections without PAGE_EXT_INVALID with
ms->page_ext != NULL.

Thanks,
Charan
Michal Hocko Aug. 1, 2022, 1:08 p.m. UTC | #11
On Mon 01-08-22 18:31:45, Charan Teja Kalla wrote:
[...]
> >> The rollback operation in online_page_ext(), where we free the
> >> allocated page_ext's, will not have the PAGE_EXT_INVALID flag set,
> >> thus WARN() may not work here, no?
> > Wouldn't ms->page_ext be NULL in that case?
> I don't think that ms->page_ext would be NULL here.
> online_page_ext():
>   (a) for (pfn = start; !fail && pfn < end; pfn += PAGES_PER_SECTION)
>      fail = init_section_page_ext():
> 	   ms->page_ext = (void *)base - page_ext_size * pfn;
> 
>   //If fail = -ERROR in the middle, roll back operation.
>   (b) for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
>        __free_page_ext();
> 
>    Here (b) can be called on the sections without PAGE_EXT_INVALID with
> ms->page_ext != NULL.
> 
You are right, my sloppy code reading. A tiny comment would be nice,
because this shouldn't really happen for normal calls.
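
Something along the lines of the following comment in __free_page_ext()
would capture that (the wording here is only a suggestion layered on top
of the code already in this patch):

    base = READ_ONCE(ms->page_ext);
    /*
     * base may legitimately still be valid (PAGE_EXT_INVALID not set)
     * here: the error-rollback path in online_page_ext() calls
     * __free_page_ext() on sections that were never invalidated.  Only
     * the offline path runs __invalidate_page_ext() first.
     */
    if (page_ext_invalid(base))
            base = (void *)base - PAGE_EXT_INVALID;
    WRITE_ONCE(ms->page_ext, NULL);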
diff mbox series

Patch

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index fabb2e1..3a35c95 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -5,6 +5,7 @@ 
 #include <linux/types.h>
 #include <linux/stacktrace.h>
 #include <linux/stackdepot.h>
+#include <linux/rcupdate.h>
 
 struct pglist_data;
 struct page_ext_operations {
@@ -36,6 +37,8 @@  struct page_ext {
 	unsigned long flags;
 };
 
+#define PAGE_EXT_INVALID       (0x1)
+
 extern unsigned long page_ext_size;
 extern void pgdat_page_ext_init(struct pglist_data *pgdat);
 
@@ -57,6 +60,11 @@  static inline void page_ext_init(void)
 
 struct page_ext *lookup_page_ext(const struct page *page);
 
+static inline bool page_ext_invalid(struct page_ext *page_ext)
+{
+	return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == 1);
+}
+
 static inline struct page_ext *page_ext_next(struct page_ext *curr)
 {
 	void *next = curr;
@@ -64,6 +72,37 @@  static inline struct page_ext *page_ext_next(struct page_ext *curr)
 	return next;
 }
 
+/*
+ * This function gives proper page_ext of a memory section
+ * during race with the offline operation on a memory block
+ * this section falls into. Not using this function to get
+ * page_ext of a page, in code paths where extra refcount
+ * is not taken on that page eg: pfn walking, can lead to
+ * use-after-free access of page_ext.
+ */
+static inline struct page_ext *page_ext_get(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	rcu_read_lock();
+	page_ext = lookup_page_ext(page);
+	if (!page_ext) {
+		rcu_read_unlock();
+		return NULL;
+	}
+
+	return page_ext;
+}
+
+/*
+ * Must be called after work is done with the page_ext received
+ * with page_ext_get().
+ */
+static inline void page_ext_put(void)
+{
+	rcu_read_unlock();
+}
+
 #else /* !CONFIG_PAGE_EXTENSION */
 struct page_ext;
 
@@ -87,5 +126,19 @@  static inline void page_ext_init_flatmem_late(void)
 static inline void page_ext_init_flatmem(void)
 {
 }
+
+static inline struct page_ext *page_ext_get(struct page *page)
+{
+	return NULL;
+}
+
+static inline bool page_ext_invalid(struct page_ext *page_ext)
+{
+	return true;
+}
+
+static inline void page_ext_put(void)
+{
+}
 #endif /* CONFIG_PAGE_EXTENSION */
 #endif /* __LINUX_PAGE_EXT_H */
diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
index 4663dfe..3dd3718 100644
--- a/include/linux/page_idle.h
+++ b/include/linux/page_idle.h
@@ -13,65 +13,85 @@ 
  * If there is not enough space to store Idle and Young bits in page flags, use
  * page ext flags instead.
  */
-
 static inline bool folio_test_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(&folio->page);
+	struct page_ext *page_ext;
+	bool page_young;
 
+	page_ext = page_ext_get(&folio->page);
 	if (unlikely(!page_ext))
 		return false;
 
-	return test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+	page_young = test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+	page_ext_put();
+
+	return page_young;
 }
 
 static inline void folio_set_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(&folio->page);
+	struct page_ext *page_ext;
 
+	page_ext = page_ext_get(&folio->page);
 	if (unlikely(!page_ext))
 		return;
 
 	set_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+	page_ext_put();
 }
 
 static inline bool folio_test_clear_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(&folio->page);
+	struct page_ext *page_ext;
+	bool page_young;
 
+	page_ext = page_ext_get(&folio->page);
 	if (unlikely(!page_ext))
 		return false;
 
-	return test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+	page_young = test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+	page_ext_put();
+
+	return page_young;
 }
 
 static inline bool folio_test_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(&folio->page);
+	struct page_ext *page_ext;
+	bool page_idle;
 
+	page_ext = page_ext_get(&folio->page);
 	if (unlikely(!page_ext))
 		return false;
 
-	return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
+	page_idle =  test_bit(PAGE_EXT_IDLE, &page_ext->flags);
+	page_ext_put();
+
+	return page_idle;
 }
 
 static inline void folio_set_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(&folio->page);
+	struct page_ext *page_ext;
 
+	page_ext = page_ext_get(&folio->page);
 	if (unlikely(!page_ext))
 		return;
 
 	set_bit(PAGE_EXT_IDLE, &page_ext->flags);
+	page_ext_put();
 }
 
 static inline void folio_clear_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(&folio->page);
+	struct page_ext *page_ext;
 
+	page_ext = page_ext_get(&folio->page);
 	if (unlikely(!page_ext))
 		return;
 
 	clear_bit(PAGE_EXT_IDLE, &page_ext->flags);
+	page_ext_put();
 }
 #endif /* !CONFIG_64BIT */
 
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3dc715d..404a2eb 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -211,15 +211,17 @@  struct page_ext *lookup_page_ext(const struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	struct mem_section *section = __pfn_to_section(pfn);
+	struct page_ext *page_ext = READ_ONCE(section->page_ext);
+
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
 	 */
-	if (!section->page_ext)
+	if (page_ext_invalid(page_ext))
 		return NULL;
-	return get_entry(section->page_ext, pfn);
+	return get_entry(page_ext, pfn);
 }
 
 static void *__meminit alloc_page_ext(size_t size, int nid)
@@ -298,9 +300,26 @@  static void __free_page_ext(unsigned long pfn)
 	ms = __pfn_to_section(pfn);
 	if (!ms || !ms->page_ext)
 		return;
-	base = get_entry(ms->page_ext, pfn);
+
+	base = READ_ONCE(ms->page_ext);
+	if (page_ext_invalid(base))
+		base = (void *)base - PAGE_EXT_INVALID;
+	WRITE_ONCE(ms->page_ext, NULL);
+
+	base = get_entry(base, pfn);
 	free_page_ext(base);
-	ms->page_ext = NULL;
+}
+
+static void __invalidate_page_ext(unsigned long pfn)
+{
+	struct mem_section *ms;
+	void *val;
+
+	ms = __pfn_to_section(pfn);
+	if (!ms || !ms->page_ext)
+		return;
+	val = (void *)ms->page_ext + PAGE_EXT_INVALID;
+	WRITE_ONCE(ms->page_ext, val);
 }
 
 static int __meminit online_page_ext(unsigned long start_pfn,
@@ -343,6 +362,20 @@  static int __meminit offline_page_ext(unsigned long start_pfn,
 	start = SECTION_ALIGN_DOWN(start_pfn);
 	end = SECTION_ALIGN_UP(start_pfn + nr_pages);
 
+	/*
+	 * Freeing of page_ext is done in 3 steps to avoid
+	 * use-after-free of it:
+	 * 1) Traverse all the sections and mark their page_ext
+	 *    as invalid.
+	 * 2) Wait for all the existing users of page_ext who
+	 *    started before invalidation to finish.
+	 * 3) Free the page_ext.
+	 */
+	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
+		__invalidate_page_ext(pfn);
+
+	synchronize_rcu();
+
 	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
 		__free_page_ext(pfn);
 	return 0;
diff --git a/mm/page_owner.c b/mm/page_owner.c
index e4c6f3f..0520dda 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -195,14 +195,16 @@  noinline void __set_page_owner(struct page *page, unsigned short order,
 
 void __set_page_owner_migrate_reason(struct page *page, int reason)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext;
 	struct page_owner *page_owner;
 
+	page_ext = page_ext_get(page);
 	if (unlikely(!page_ext))
 		return;
 
 	page_owner = get_page_owner(page_ext);
 	page_owner->last_migrate_reason = reason;
+	page_ext_put();
 }
 
 void __split_page_owner(struct page *page, unsigned int nr)
@@ -307,12 +309,12 @@  void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 			if (PageReserved(page))
 				continue;
 
-			page_ext = lookup_page_ext(page);
+			page_ext = page_ext_get(page);
 			if (unlikely(!page_ext))
 				continue;
 
 			if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
-				continue;
+				goto loop;
 
 			page_owner = get_page_owner(page_ext);
 			page_mt = gfp_migratetype(page_owner->gfp_mask);
@@ -323,9 +325,12 @@  void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 					count[pageblock_mt]++;
 
 				pfn = block_end_pfn;
+				page_ext_put();
 				break;
 			}
 			pfn += (1UL << page_owner->order) - 1;
+loop:
+			page_ext_put();
 		}
 	}
 
@@ -508,6 +513,14 @@  read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 	/* Find an allocated page */
 	for (; pfn < max_pfn; pfn++) {
 		/*
+		 * This temporary page_owner is required so
+		 * that we can avoid the context switches while holding
+		 * the rcu lock and copying the page owner information to
+		 * user through copy_to_user() or GFP_KERNEL allocations.
+		 */
+		struct page_owner page_owner_tmp = {0};
+
+		/*
 		 * If the new page is in a new MAX_ORDER_NR_PAGES area,
 		 * validate the area as existing, skip it if not
 		 */
@@ -525,7 +538,7 @@  read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 			continue;
 		}
 
-		page_ext = lookup_page_ext(page);
+		page_ext = page_ext_get(page);
 		if (unlikely(!page_ext))
 			continue;
 
@@ -534,14 +547,14 @@  read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		 * because we don't hold the zone lock.
 		 */
 		if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
-			continue;
+			goto loop;
 
 		/*
 		 * Although we do have the info about past allocation of free
 		 * pages, it's not relevant for current memory usage.
 		 */
 		if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
-			continue;
+			goto loop;
 
 		page_owner = get_page_owner(page_ext);
 
@@ -550,7 +563,7 @@  read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		 * would inflate the stats.
 		 */
 		if (!IS_ALIGNED(pfn, 1 << page_owner->order))
-			continue;
+			goto loop;
 
 		/*
 		 * Access to page_ext->handle isn't synchronous so we should
@@ -558,13 +571,17 @@  read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		 */
 		handle = READ_ONCE(page_owner->handle);
 		if (!handle)
-			continue;
+			goto loop;
 
 		/* Record the next PFN to read in the file offset */
 		*ppos = (pfn - min_low_pfn) + 1;
 
+		memcpy(&page_owner_tmp, page_owner, sizeof(struct page_owner));
+		page_ext_put();
 		return print_page_owner(buf, count, pfn, page,
-				page_owner, handle);
+				&page_owner_tmp, handle);
+loop:
+		page_ext_put();
 	}
 
 	return 0;
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index e206274..ec371b9 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -68,7 +68,7 @@  static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
 		return;
 
 	page = pfn_to_page(pfn);
-	page_ext = lookup_page_ext(page);
+	page_ext = page_ext_get(page);
 	anon = PageAnon(page);
 
 	for (i = 0; i < pgcnt; i++) {
@@ -83,6 +83,7 @@  static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
 		}
 		page_ext = page_ext_next(page_ext);
 	}
+	page_ext_put();
 }
 
 /*
@@ -103,7 +104,7 @@  static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
 		return;
 
 	page = pfn_to_page(pfn);
-	page_ext = lookup_page_ext(page);
+	page_ext = page_ext_get(page);
 	anon = PageAnon(page);
 
 	for (i = 0; i < pgcnt; i++) {
@@ -118,6 +119,7 @@  static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
 		}
 		page_ext = page_ext_next(page_ext);
 	}
+	page_ext_put();
 }
 
 /*
@@ -126,9 +128,10 @@  static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
  */
 void __page_table_check_zero(struct page *page, unsigned int order)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext;
 	unsigned long i;
 
+	page_ext = page_ext_get(page);
 	BUG_ON(!page_ext);
 	for (i = 0; i < (1ul << order); i++) {
 		struct page_table_check *ptc = get_page_table_check(page_ext);
@@ -137,6 +140,7 @@  void __page_table_check_zero(struct page *page, unsigned int order)
 		BUG_ON(atomic_read(&ptc->file_map_count));
 		page_ext = page_ext_next(page_ext);
 	}
+	page_ext_put();
 }
 
 void __page_table_check_pte_clear(struct mm_struct *mm, unsigned long addr,