From patchwork Fri May 28 01:04:12 2021
Date: Thu, 27 May 2021 18:04:12 -0700
In-Reply-To: <20210528010415.1852012-1-pcc@google.com>
Message-Id: <20210528010415.1852012-2-pcc@google.com>
References: <20210528010415.1852012-1-pcc@google.com>
Subject: [PATCH v4 1/4] mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()
From: Peter Collingbourne <pcc@google.com>
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas,
 Vincenzo Frascino, Andrew Morton, Jann Horn
Cc: Peter Collingbourne <pcc@google.com>, Evgenii Stepanov,
 linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

In an upcoming change we would like to add a flag to GFP_HIGHUSER_MOVABLE
so that it would no longer be an OR of GFP_HIGHUSER and __GFP_MOVABLE.
This poses a problem for alloc_zeroed_user_highpage_movable(), which
passes __GFP_MOVABLE into an arch-specific __alloc_zeroed_user_highpage()
hook that ORs in GFP_HIGHUSER.

Since __alloc_zeroed_user_highpage() is only ever called from
alloc_zeroed_user_highpage_movable(), we can remove one level of
indirection here. Remove __alloc_zeroed_user_highpage(), make
alloc_zeroed_user_highpage_movable() the arch hook, and use
GFP_HIGHUSER_MOVABLE in the hook implementations so that they will pick
up the new flag that we are going to add.
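To make the constraint concrete: GFP_HIGHUSER_MOVABLE is currently
defined in include/linux/gfp.h as a plain OR of the two flag sets, and
the upcoming change will fold an additional flag into it. A sketch
follows; the extra flag name below is a placeholder, not the flag a
later patch actually adds:

	/* include/linux/gfp.h, current definition */
	#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE)

	/* After the upcoming change (illustrative placeholder flag) */
	#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE | \
					 __GFP_PLACEHOLDER)

A hook that rebuilds its flags as GFP_HIGHUSER | __GFP_MOVABLE would
silently drop the extra flag, which is why the hook implementations
below use GFP_HIGHUSER_MOVABLE directly.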
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ic6361c657b2cdcd896adbe0cf7cb5a7fbb1ed7bf
Reported-by: kernel test robot <lkp@intel.com>
---
 arch/alpha/include/asm/page.h   |  6 +++---
 arch/arm64/include/asm/page.h   |  6 +++---
 arch/ia64/include/asm/page.h    |  6 +++---
 arch/m68k/include/asm/page_no.h |  6 +++---
 arch/s390/include/asm/page.h    |  6 +++---
 arch/x86/include/asm/page.h     |  6 +++---
 include/linux/highmem.h         | 35 ++++++++-------------------------
 7 files changed, 26 insertions(+), 45 deletions(-)

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index 268f99b4602b..18f48a6f2ff6 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -17,9 +17,9 @@ extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 extern void copy_page(void * _to, void * _from);
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..0cfe4f7e7055 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -28,9 +28,9 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/ia64/include/asm/page.h b/arch/ia64/include/asm/page.h
index f4dc81fa7146..1b990466d540 100644
--- a/arch/ia64/include/asm/page.h
+++ b/arch/ia64/include/asm/page.h
@@ -82,16 +82,16 @@ do {						\
 } while (0)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr)		\
+#define alloc_zeroed_user_highpage_movable(vma, vaddr)			\
 ({									\
 	struct page *page = alloc_page_vma(				\
-		GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr);	\
+		GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr);		\
 	if (page)							\
 		flush_dcache_page(page);				\
 	page;								\
 })
 
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h
index 8d0f862ee9d7..c9d0d84158a4 100644
--- a/arch/m68k/include/asm/page_no.h
+++ b/arch/m68k/include/asm/page_no.h
@@ -13,9 +13,9 @@ extern unsigned long memory_end;
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define __pa(vaddr)		((unsigned long)(vaddr))
 #define __va(paddr)		((void *)((unsigned long)(paddr)))
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index cc98f9b78fd4..346a0cbb6515 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -68,9 +68,9 @@ static inline void copy_page(void *to, void *from)
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 /*
  * These are used to make use of C type-checking..
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 7555b48803a8..4d5810c8fab7 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -34,9 +34,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 	copy_page(to, from);
 }
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #ifndef __pa
 #define __pa(x)		__phys_addr((unsigned long)(x))
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 832b49b50c7b..54d0643b8fcf 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -152,28 +152,24 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
 }
 #endif
 
-#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 /**
- * __alloc_zeroed_user_highpage - Allocate a zeroed HIGHMEM page for a VMA with caller-specified movable GFP flags
- * @movableflags: The GFP flags related to the pages future ability to move like __GFP_MOVABLE
+ * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
  * @vma: The VMA the page is to be allocated for
  * @vaddr: The virtual address the page will be inserted into
  *
- * This function will allocate a page for a VMA but the caller is expected
- * to specify via movableflags whether the page will be movable in the
- * future or not
+ * This function will allocate a page for a VMA that the caller knows will
+ * be able to migrate in the future using move_pages() or reclaimed
  *
  * An architecture may override this function by defining
- * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE and providing their own
+ * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE and providing their own
 * implementation.
 */
 static inline struct page *
-__alloc_zeroed_user_highpage(gfp_t movableflags,
-			struct vm_area_struct *vma,
-			unsigned long vaddr)
+alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
+				   unsigned long vaddr)
 {
-	struct page *page = alloc_page_vma(GFP_HIGHUSER | movableflags,
-			vma, vaddr);
+	struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
 
 	if (page)
 		clear_user_highpage(page, vaddr);
@@ -182,21 +178,6 @@ __alloc_zeroed_user_highpage(gfp_t movableflags,
 }
 #endif
 
-/**
- * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
- * @vma: The VMA the page is to be allocated for
- * @vaddr: The virtual address the page will be inserted into
- *
- * This function will allocate a page for a VMA that the caller knows will
- * be able to migrate in the future using move_pages() or reclaimed
- */
-static inline struct page *
-alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
-				   unsigned long vaddr)
-{
-	return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr);
-}
-
 static inline void clear_highpage(struct page *page)
 {
 	void *kaddr = kmap_atomic(page);
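
For reference, the consumers of this hook are the anonymous-fault paths
in mm/memory.c, such as do_anonymous_page(); after this patch they reach
either the arch macro or the generic inline above with no intermediate
wrapper. A condensed sketch of the existing call site (simplified, not
part of this patch):

	/* mm/memory.c, do_anonymous_page(), simplified */
	page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
	if (!page)
		goto oom;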