
[v5,1/4] mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()

Message ID: 20210601195049.2695657-2-pcc@google.com (mailing list archive)
State: New, archived
Series: arm64: improve efficiency of setting tags for user pages

Commit Message

Peter Collingbourne June 1, 2021, 7:50 p.m. UTC
In an upcoming change we would like to add a flag to
GFP_HIGHUSER_MOVABLE so that it would no longer be an OR
of GFP_HIGHUSER and __GFP_MOVABLE. This poses a problem for
alloc_zeroed_user_highpage_movable() which passes __GFP_MOVABLE
into an arch-specific __alloc_zeroed_user_highpage() hook which ORs
in GFP_HIGHUSER.

Since __alloc_zeroed_user_highpage() is only ever called from
alloc_zeroed_user_highpage_movable(), we can remove one level
of indirection here. Remove __alloc_zeroed_user_highpage(),
make alloc_zeroed_user_highpage_movable() the hook, and use
GFP_HIGHUSER_MOVABLE in the hook implementations so that they will
pick up the new flag that we are going to add.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ic6361c657b2cdcd896adbe0cf7cb5a7fbb1ed7bf
---
v5:
- fix s390 build error

 arch/alpha/include/asm/page.h   |  6 +++---
 arch/arm64/include/asm/page.h   |  6 +++---
 arch/ia64/include/asm/page.h    |  6 +++---
 arch/m68k/include/asm/page_no.h |  6 +++---
 arch/s390/include/asm/page.h    |  6 +++---
 arch/x86/include/asm/page.h     |  6 +++---
 include/linux/highmem.h         | 35 ++++++++-------------------------
 7 files changed, 26 insertions(+), 45 deletions(-)
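
In outline, the patch collapses the two levels into one. A minimal
editorial sketch of the before/after shapes (abbreviated from the
diff below, not verbatim kernel code):

    /* Before: the generic wrapper ORs in __GFP_MOVABLE and calls an
     * arch-overridable hook, which in turn ORs in GFP_HIGHUSER. */
    #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
            alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)

    static inline struct page *
    alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
                                       unsigned long vaddr)
    {
            return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr);
    }

    /* After: the movable variant is itself the arch hook and uses
     * GFP_HIGHUSER_MOVABLE directly, so any flag later added to
     * GFP_HIGHUSER_MOVABLE is picked up automatically. */
    #define alloc_zeroed_user_highpage_movable(vma, vaddr) \
            alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)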

Comments

kernel test robot June 2, 2021, 3:25 a.m. UTC | #1
Hi Peter,

Thank you for the patch! There is still something to improve:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on m68knommu/for-next s390/features tip/x86/core tip/perf/core linus/master v5.13-rc4 next-20210601]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Peter-Collingbourne/arm64-improve-efficiency-of-setting-tags-for-user-pages/20210602-035317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
config: arm64-allyesconfig (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/1344809b8a7ee8c81147702ffae35c577aab33ba
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Peter-Collingbourne/arm64-improve-efficiency-of-setting-tags-for-user-pages/20210602-035317
        git checkout 1344809b8a7ee8c81147702ffae35c577aab33ba
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=arm64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

Note: the linux-review/Peter-Collingbourne/arm64-improve-efficiency-of-setting-tags-for-user-pages/20210602-035317 HEAD ead2e307c4f44ebc1cfe727a2bfc28ceec0bc4e9 builds fine.
      It only hurts bisectability.

All errors (new ones prefixed by >>):

   mm/memory.c: In function 'wp_page_copy':
>> mm/memory.c:2892:26: error: macro "alloc_zeroed_user_highpage_movable" requires 3 arguments, but only 2 given
    2892 |              vmf->address);
         |                          ^
   In file included from include/linux/shm.h:6,
                    from include/linux/sched.h:16,
                    from include/linux/hardirq.h:9,
                    from include/linux/interrupt.h:11,
                    from include/linux/kernel_stat.h:9,
                    from mm/memory.c:42:
   arch/arm64/include/asm/page.h:31: note: macro "alloc_zeroed_user_highpage_movable" defined here
      31 | #define alloc_zeroed_user_highpage_movable(movableflags, vma, vaddr) \
         | 
>> mm/memory.c:2891:14: error: 'alloc_zeroed_user_highpage_movable' undeclared (first use in this function)
    2891 |   new_page = alloc_zeroed_user_highpage_movable(vma,
         |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/memory.c:2891:14: note: each undeclared identifier is reported only once for each function it appears in
   mm/memory.c: In function 'do_anonymous_page':
   mm/memory.c:3589:61: error: macro "alloc_zeroed_user_highpage_movable" requires 3 arguments, but only 2 given
    3589 |  page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
         |                                                             ^
   In file included from include/linux/shm.h:6,
                    from include/linux/sched.h:16,
                    from include/linux/hardirq.h:9,
                    from include/linux/interrupt.h:11,
                    from include/linux/kernel_stat.h:9,
                    from mm/memory.c:42:
   arch/arm64/include/asm/page.h:31: note: macro "alloc_zeroed_user_highpage_movable" defined here
      31 | #define alloc_zeroed_user_highpage_movable(movableflags, vma, vaddr) \
         | 
   mm/memory.c:3589:9: error: 'alloc_zeroed_user_highpage_movable' undeclared (first use in this function)
    3589 |  page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
         |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
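
The failure traces back to the arm64 hunk in this patch: the renamed
macro keeps the old three-parameter list (movableflags, vma, vaddr)
while both call sites in mm/memory.c now pass only two arguments. A
standalone sketch of the same arity mismatch (hypothetical names, not
kernel code):

    /* Definition keeps a stale third parameter... */
    #define alloc_zeroed_user_highpage_movable(movableflags, vma, vaddr) \
            do_alloc(vma, vaddr)

    extern void *do_alloc(void *vma, unsigned long vaddr);

    void *caller(void *vma, unsigned long addr)
    {
            /* ...so a converted two-argument call fails to expand
             * ("requires 3 arguments, but only 2 given"), and the
             * unexpanded name is then reported as undeclared. */
            return alloc_zeroed_user_highpage_movable(vma, addr);
    }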


vim +/alloc_zeroed_user_highpage_movable +2892 mm/memory.c

4e047f89777122 Shachar Raindel    2015-04-14  2860  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2861  /*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2862   * Handle the case of a page which we actually need to copy to a new page.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2863   *
c1e8d7c6a7a682 Michel Lespinasse  2020-06-08  2864   * Called with mmap_lock locked and the old page referenced, but
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2865   * without the ptl held.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2866   *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2867   * High level logic flow:
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2868   *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2869   * - Allocate a page, copy the content of the old page to the new one.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2870   * - Handle book keeping and accounting - cgroups, mmu-notifiers, etc.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2871   * - Take the PTL. If the pte changed, bail out and release the allocated page
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2872   * - If the pte is still the way we remember it, update the page table and all
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2873   *   relevant references. This includes dropping the reference the page-table
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2874   *   held to the old page, as well as updating the rmap.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2875   * - In any case, unlock the PTL and drop the reference we took to the old page.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2876   */
2b7403035459c7 Souptick Joarder   2018-08-23  2877  static vm_fault_t wp_page_copy(struct vm_fault *vmf)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2878  {
82b0f8c39a3869 Jan Kara           2016-12-14  2879  	struct vm_area_struct *vma = vmf->vma;
bae473a423f65e Kirill A. Shutemov 2016-07-26  2880  	struct mm_struct *mm = vma->vm_mm;
a41b70d6dfc28b Jan Kara           2016-12-14  2881  	struct page *old_page = vmf->page;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2882  	struct page *new_page = NULL;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2883  	pte_t entry;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2884  	int page_copied = 0;
ac46d4f3c43241 Jérôme Glisse      2018-12-28  2885  	struct mmu_notifier_range range;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2886  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2887  	if (unlikely(anon_vma_prepare(vma)))
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2888  		goto oom;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2889  
2994302bc8a171 Jan Kara           2016-12-14  2890  	if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
82b0f8c39a3869 Jan Kara           2016-12-14 @2891  		new_page = alloc_zeroed_user_highpage_movable(vma,
82b0f8c39a3869 Jan Kara           2016-12-14 @2892  							      vmf->address);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2893  		if (!new_page)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2894  			goto oom;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2895  	} else {
bae473a423f65e Kirill A. Shutemov 2016-07-26  2896  		new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
82b0f8c39a3869 Jan Kara           2016-12-14  2897  				vmf->address);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2898  		if (!new_page)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2899  			goto oom;
83d116c53058d5 Jia He             2019-10-11  2900  
83d116c53058d5 Jia He             2019-10-11  2901  		if (!cow_user_page(new_page, old_page, vmf)) {
83d116c53058d5 Jia He             2019-10-11  2902  			/*
83d116c53058d5 Jia He             2019-10-11  2903  			 * COW failed, if the fault was solved by other,
83d116c53058d5 Jia He             2019-10-11  2904  			 * it's fine. If not, userspace would re-fault on
83d116c53058d5 Jia He             2019-10-11  2905  			 * the same address and we will handle the fault
83d116c53058d5 Jia He             2019-10-11  2906  			 * from the second attempt.
83d116c53058d5 Jia He             2019-10-11  2907  			 */
83d116c53058d5 Jia He             2019-10-11  2908  			put_page(new_page);
83d116c53058d5 Jia He             2019-10-11  2909  			if (old_page)
83d116c53058d5 Jia He             2019-10-11  2910  				put_page(old_page);
83d116c53058d5 Jia He             2019-10-11  2911  			return 0;
83d116c53058d5 Jia He             2019-10-11  2912  		}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2913  	}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2914  
d9eb1ea2bf8734 Johannes Weiner    2020-06-03  2915  	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2916  		goto oom_free_new;
9d82c69438d0df Johannes Weiner    2020-06-03  2917  	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2918  
eb3c24f305e56c Mel Gorman         2015-06-24  2919  	__SetPageUptodate(new_page);
eb3c24f305e56c Mel Gorman         2015-06-24  2920  
7269f999934b28 Jérôme Glisse      2019-05-13  2921  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
6f4f13e8d9e27c Jérôme Glisse      2019-05-13  2922  				vmf->address & PAGE_MASK,
ac46d4f3c43241 Jérôme Glisse      2018-12-28  2923  				(vmf->address & PAGE_MASK) + PAGE_SIZE);
ac46d4f3c43241 Jérôme Glisse      2018-12-28  2924  	mmu_notifier_invalidate_range_start(&range);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2925  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2926  	/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2927  	 * Re-check the pte - we dropped the lock
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2928  	 */
82b0f8c39a3869 Jan Kara           2016-12-14  2929  	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
2994302bc8a171 Jan Kara           2016-12-14  2930  	if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2931  		if (old_page) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2932  			if (!PageAnon(old_page)) {
eca56ff906bdd0 Jerome Marchand    2016-01-14  2933  				dec_mm_counter_fast(mm,
eca56ff906bdd0 Jerome Marchand    2016-01-14  2934  						mm_counter_file(old_page));
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2935  				inc_mm_counter_fast(mm, MM_ANONPAGES);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2936  			}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2937  		} else {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2938  			inc_mm_counter_fast(mm, MM_ANONPAGES);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2939  		}
2994302bc8a171 Jan Kara           2016-12-14  2940  		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2941  		entry = mk_pte(new_page, vma->vm_page_prot);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2942  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
111fe7186b29d1 Nicholas Piggin    2020-12-29  2943  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2944  		/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2945  		 * Clear the pte entry and flush it first, before updating the
111fe7186b29d1 Nicholas Piggin    2020-12-29  2946  		 * pte with the new entry, to keep TLBs on different CPUs in
111fe7186b29d1 Nicholas Piggin    2020-12-29  2947  		 * sync. This code used to set the new PTE then flush TLBs, but
111fe7186b29d1 Nicholas Piggin    2020-12-29  2948  		 * that left a window where the new PTE could be loaded into
111fe7186b29d1 Nicholas Piggin    2020-12-29  2949  		 * some TLBs while the old PTE remains in others.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2950  		 */
82b0f8c39a3869 Jan Kara           2016-12-14  2951  		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
82b0f8c39a3869 Jan Kara           2016-12-14  2952  		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
b518154e59aab3 Joonsoo Kim        2020-08-11  2953  		lru_cache_add_inactive_or_unevictable(new_page, vma);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2954  		/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2955  		 * We call the notify macro here because, when using secondary
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2956  		 * mmu page tables (such as kvm shadow page tables), we want the
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2957  		 * new page to be mapped directly into the secondary page table.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2958  		 */
82b0f8c39a3869 Jan Kara           2016-12-14  2959  		set_pte_at_notify(mm, vmf->address, vmf->pte, entry);
82b0f8c39a3869 Jan Kara           2016-12-14  2960  		update_mmu_cache(vma, vmf->address, vmf->pte);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2961  		if (old_page) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2962  			/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2963  			 * Only after switching the pte to the new page may
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2964  			 * we remove the mapcount here. Otherwise another
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2965  			 * process may come and find the rmap count decremented
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2966  			 * before the pte is switched to the new page, and
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2967  			 * "reuse" the old page writing into it while our pte
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2968  			 * here still points into it and can be read by other
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2969  			 * threads.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2970  			 *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2971  			 * The critical issue is to order this
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2972  			 * page_remove_rmap with the ptp_clear_flush above.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2973  			 * Those stores are ordered by (if nothing else,)
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2974  			 * the barrier present in the atomic_add_negative
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2975  			 * in page_remove_rmap.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2976  			 *
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2977  			 * Then the TLB flush in ptep_clear_flush ensures that
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2978  			 * no process can access the old page before the
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2979  			 * decremented mapcount is visible. And the old page
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2980  			 * cannot be reused until after the decremented
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2981  			 * mapcount is visible. So transitively, TLBs to
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2982  			 * old page will be flushed before it can be reused.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2983  			 */
d281ee61451835 Kirill A. Shutemov 2016-01-15  2984  			page_remove_rmap(old_page, false);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2985  		}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2986  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2987  		/* Free the old page.. */
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2988  		new_page = old_page;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2989  		page_copied = 1;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2990  	} else {
7df676974359f9 Bibo Mao           2020-05-27  2991  		update_mmu_tlb(vma, vmf->address, vmf->pte);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2992  	}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2993  
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2994  	if (new_page)
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  2995  		put_page(new_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  2996  
82b0f8c39a3869 Jan Kara           2016-12-14  2997  	pte_unmap_unlock(vmf->pte, vmf->ptl);
4645b9fe84bf48 Jérôme Glisse      2017-11-15  2998  	/*
4645b9fe84bf48 Jérôme Glisse      2017-11-15  2999  	 * No need to double call mmu_notifier->invalidate_range() callback as
4645b9fe84bf48 Jérôme Glisse      2017-11-15  3000  	 * the above ptep_clear_flush_notify() did already call it.
4645b9fe84bf48 Jérôme Glisse      2017-11-15  3001  	 */
ac46d4f3c43241 Jérôme Glisse      2018-12-28  3002  	mmu_notifier_invalidate_range_only_end(&range);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3003  	if (old_page) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3004  		/*
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3005  		 * Don't let another task, with possibly unlocked vma,
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3006  		 * keep the mlocked page.
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3007  		 */
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3008  		if (page_copied && (vma->vm_flags & VM_LOCKED)) {
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3009  			lock_page(old_page);	/* LRU manipulation */
e90309c9f7722d Kirill A. Shutemov 2016-01-15  3010  			if (PageMlocked(old_page))
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3011  				munlock_vma_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3012  			unlock_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3013  		}
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  3014  		put_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3015  	}
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3016  	return page_copied ? VM_FAULT_WRITE : 0;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3017  oom_free_new:
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  3018  	put_page(new_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3019  oom:
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3020  	if (old_page)
09cbfeaf1a5a67 Kirill A. Shutemov 2016-04-01  3021  		put_page(old_page);
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3022  	return VM_FAULT_OOM;
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3023  }
2f38ab2c3c7fef Shachar Raindel    2015-04-14  3024  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

Patch
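
As the diff shows, the generic fallback and the per-arch override are
tied together by a guard macro: an architecture that defines
__HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE in its <asm/page.h>
supplies its own definition; otherwise the generic inline in
<linux/highmem.h> is compiled in. A condensed sketch of the
post-patch pattern (not verbatim):

    #ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
    static inline struct page *
    alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
                                       unsigned long vaddr)
    {
            struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);

            if (page)
                    clear_user_highpage(page, vaddr);
            return page;
    }
    #endif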

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index 268f99b4602b..18f48a6f2ff6 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -17,9 +17,9 @@ 
 extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 extern void copy_page(void * _to, void * _from);
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..0cfe4f7e7055 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -28,9 +28,9 @@  void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(movableflags, vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/ia64/include/asm/page.h b/arch/ia64/include/asm/page.h
index f4dc81fa7146..1b990466d540 100644
--- a/arch/ia64/include/asm/page.h
+++ b/arch/ia64/include/asm/page.h
@@ -82,16 +82,16 @@  do {						\
 } while (0)
 
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr)		\
+#define alloc_zeroed_user_highpage_movable(vma, vaddr)			\
 ({									\
 	struct page *page = alloc_page_vma(				\
-		GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr);	\
+		GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr);		\
 	if (page)							\
  		flush_dcache_page(page);				\
 	page;								\
 })
 
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 
diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h
index 8d0f862ee9d7..c9d0d84158a4 100644
--- a/arch/m68k/include/asm/page_no.h
+++ b/arch/m68k/include/asm/page_no.h
@@ -13,9 +13,9 @@  extern unsigned long memory_end;
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define __pa(vaddr)		((unsigned long)(vaddr))
 #define __va(paddr)		((void *)((unsigned long)(paddr)))
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index cc98f9b78fd4..479dc76e0eca 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -68,9 +68,9 @@  static inline void copy_page(void *to, void *from)
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 /*
  * These are used to make use of C type-checking..
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 7555b48803a8..4d5810c8fab7 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -34,9 +34,9 @@  static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 	copy_page(to, from);
 }
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #ifndef __pa
 #define __pa(x)		__phys_addr((unsigned long)(x))
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 832b49b50c7b..54d0643b8fcf 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -152,28 +152,24 @@  static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
 }
 #endif
 
-#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 /**
- * __alloc_zeroed_user_highpage - Allocate a zeroed HIGHMEM page for a VMA with caller-specified movable GFP flags
- * @movableflags: The GFP flags related to the pages future ability to move like __GFP_MOVABLE
+ * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
  * @vma: The VMA the page is to be allocated for
  * @vaddr: The virtual address the page will be inserted into
  *
- * This function will allocate a page for a VMA but the caller is expected
- * to specify via movableflags whether the page will be movable in the
- * future or not
+ * This function will allocate a page for a VMA that the caller knows will
+ * be able to migrate in the future using move_pages() or reclaimed
  *
  * An architecture may override this function by defining
- * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE and providing their own
+ * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE and providing their own
  * implementation.
  */
 static inline struct page *
-__alloc_zeroed_user_highpage(gfp_t movableflags,
-			struct vm_area_struct *vma,
-			unsigned long vaddr)
+alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
+				   unsigned long vaddr)
 {
-	struct page *page = alloc_page_vma(GFP_HIGHUSER | movableflags,
-			vma, vaddr);
+	struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
 
 	if (page)
 		clear_user_highpage(page, vaddr);
@@ -182,21 +178,6 @@  __alloc_zeroed_user_highpage(gfp_t movableflags,
 }
 #endif
 
-/**
- * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
- * @vma: The VMA the page is to be allocated for
- * @vaddr: The virtual address the page will be inserted into
- *
- * This function will allocate a page for a VMA that the caller knows will
- * be able to migrate in the future using move_pages() or reclaimed
- */
-static inline struct page *
-alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
-					unsigned long vaddr)
-{
-	return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr);
-}
-
 static inline void clear_highpage(struct page *page)
 {
 	void *kaddr = kmap_atomic(page);