From patchwork Fri Mar 17 10:57:58 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13178879
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)", "Yin, Fengwei", Yu Zhao
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 2/6] mm: pass gfp flags and order to vma_alloc_zeroed_movable_folio()
Date: Fri, 17 Mar 2023 10:57:58 +0000
Message-Id: <20230317105802.2634004-3-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230317105802.2634004-1-ryan.roberts@arm.com>
References: <20230317105802.2634004-1-ryan.roberts@arm.com>

Allow allocation of large folios with vma_alloc_zeroed_movable_folio().
This prepares the ground for large anonymous folios. The generic
implementation of vma_alloc_zeroed_movable_folio() now uses
clear_huge_page() to zero the allocated folio since it may now be a
non-0 order.

Currently the function is always called with order 0 and no extra gfp
flags, so no functional change intended.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/alpha/include/asm/page.h   |  5 +++--
 arch/arm64/include/asm/page.h   |  3 ++-
 arch/arm64/mm/fault.c           |  7 ++++---
 arch/ia64/include/asm/page.h    |  5 +++--
 arch/m68k/include/asm/page_no.h |  7 ++++---
 arch/s390/include/asm/page.h    |  5 +++--
 arch/x86/include/asm/page.h     |  5 +++--
 include/linux/highmem.h         | 23 +++++++++++++----------
 mm/memory.c                     |  5 +++--
 9 files changed, 38 insertions(+), 27 deletions(-)

--
2.25.1

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index 4db1ebc0ed99..6fc7fe91b6cb 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -17,8 +17,9 @@
 extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)        clear_page(page)
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
+#define vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order) \
+        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | (gfp), \
+                        order, vma, vaddr, false)
 
 extern void copy_page(void * _to, void * _from);
 #define copy_user_page(to, from, vaddr, pg)     copy_page(to, from)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..47710852f872 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -30,7 +30,8 @@ void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
-                                                unsigned long vaddr);
+                                                unsigned long vaddr,
+                                                gfp_t gfp, int order);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
 
 void tag_clear_highpage(struct page *to);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f4cb0f85ccf4..3b4cc04f7a23 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -926,9 +926,10 @@ NOKPROBE_SYMBOL(do_debug_exception);
  * Used during anonymous page fault handling.
  */
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
-                                                unsigned long vaddr)
+                                                unsigned long vaddr,
+                                                gfp_t gfp, int order)
 {
-        gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
+        gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO | gfp;
 
         /*
          * If the page is mapped with PROT_MTE, initialise the tags at the
@@ -938,7 +939,7 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
         if (vma->vm_flags & VM_MTE)
                 flags |= __GFP_ZEROTAGS;
 
-        return vma_alloc_folio(flags, 0, vma, vaddr, false);
+        return vma_alloc_folio(flags, order, vma, vaddr, false);
 }
 
 void tag_clear_highpage(struct page *page)
diff --git a/arch/ia64/include/asm/page.h b/arch/ia64/include/asm/page.h
index 310b09c3342d..ebdf04274023 100644
--- a/arch/ia64/include/asm/page.h
+++ b/arch/ia64/include/asm/page.h
@@ -82,10 +82,11 @@ do {                                            \
 } while (0)
 
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr)                     \
+#define vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order)         \
 ({                                                                     \
         struct folio *folio = vma_alloc_folio(                         \
-                GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false); \
+                GFP_HIGHUSER_MOVABLE | __GFP_ZERO | (gfp),             \
+                order, vma, vaddr, false);                             \
         if (folio)                                                     \
                 flush_dcache_folio(folio);                             \
         folio;                                                         \
diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h
index 060e4c0e7605..4a2fe57fef5e 100644
--- a/arch/m68k/include/asm/page_no.h
+++ b/arch/m68k/include/asm/page_no.h
@@ -3,7 +3,7 @@
 #define _M68K_PAGE_NO_H
 
 #ifndef __ASSEMBLY__
- 
+
 extern unsigned long memory_start;
 extern unsigned long memory_end;
 
@@ -13,8 +13,9 @@ extern unsigned long memory_end;
 #define clear_user_page(page, vaddr, pg)        clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)     copy_page(to, from)
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
+#define vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order) \
+        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | (gfp), \
+                        order, vma, vaddr, false)
 
 #define __pa(vaddr)             ((unsigned long)(vaddr))
 #define __va(paddr)             ((void *)((unsigned long)(paddr)))
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index 8a2a3b5d1e29..b749564140f1 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -73,8 +73,9 @@ static inline void copy_page(void *to, void *from)
 #define clear_user_page(page, vaddr, pg)        clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)     copy_page(to, from)
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
+#define vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order) \
+        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | (gfp), \
+                        order, vma, vaddr, false)
 
 /*
  * These are used to make use of C type-checking..
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index d18e5c332cb9..34deab1a8dae 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -34,8 +34,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
         copy_page(to, from);
 }
 
-#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
-        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
+#define vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order) \
+        vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | (gfp), \
+                        order, vma, vaddr, false)
 
 #ifndef __pa
 #define __pa(x)         __phys_addr((unsigned long)(x))
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index b06254e76d99..e2127af4997b 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -209,26 +209,29 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
 
 #ifndef vma_alloc_zeroed_movable_folio
 /**
- * vma_alloc_zeroed_movable_folio - Allocate a zeroed page for a VMA.
- * @vma: The VMA the page is to be allocated for.
- * @vaddr: The virtual address the page will be inserted into.
- *
- * This function will allocate a page suitable for inserting into this
- * VMA at this virtual address. It may be allocated from highmem or
+ * vma_alloc_zeroed_movable_folio - Allocate a zeroed folio for a VMA.
+ * @vma: The start VMA the folio is to be allocated for.
+ * @vaddr: The virtual address the folio will be inserted into.
+ * @gfp: Additional gfp flags to mix in or 0.
+ * @order: The order of the folio (2^order pages).
+ *
+ * This function will allocate a folio suitable for inserting into this
+ * VMA starting at this virtual address. It may be allocated from highmem or
  * the movable zone. An architecture may provide its own implementation.
  *
- * Return: A folio containing one allocated and zeroed page or NULL if
+ * Return: A folio containing 2^order allocated and zeroed pages or NULL if
  * we are out of memory.
  */
 static inline
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
-                                   unsigned long vaddr)
+                                   unsigned long vaddr, gfp_t gfp, int order)
 {
         struct folio *folio;
 
-        folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr, false);
+        folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp,
+                                order, vma, vaddr, false);
         if (folio)
-                clear_user_highpage(&folio->page, vaddr);
+                clear_huge_page(&folio->page, vaddr, 1U << order);
 
         return folio;
 }
diff --git a/mm/memory.c b/mm/memory.c
index c08645908ee2..8798da968686 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3061,7 +3061,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
                 goto oom;
 
         if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
-                new_folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
+                new_folio = vma_alloc_zeroed_movable_folio(vma, vmf->address,
+                                                           0, 0);
                 if (!new_folio)
                         goto oom;
         } else {
@@ -4049,7 +4050,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
         /* Allocate our own private page. */
         if (unlikely(anon_vma_prepare(vma)))
                 goto oom;
-        folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
+        folio = vma_alloc_zeroed_movable_folio(vma, vmf->address, 0, 0);
         if (!folio)
                 goto oom;
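
For illustration only, and not part of the patch: a minimal sketch of how a
caller might use the new gfp/order parameters once large anonymous folios are
wired up in a later patch. The function name alloc_anon_folio_sketch, the
order-2 first attempt and the __GFP_NORETRY fallback policy are assumptions
made for this example; the patch itself always passes (0, 0), so behaviour is
unchanged.

/*
 * Illustrative sketch only -- assumes the post-patch prototype
 * vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order). The order-2
 * attempt and __GFP_NORETRY hint are hypothetical choices.
 */
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/mm.h>

static struct folio *alloc_anon_folio_sketch(struct vm_area_struct *vma,
					      unsigned long addr)
{
	struct folio *folio;

	/* Try a larger (order-2) zeroed folio, but do not retry hard. */
	folio = vma_alloc_zeroed_movable_folio(vma, addr, __GFP_NORETRY, 2);
	if (folio)
		return folio;

	/* Fall back to a single zeroed page, matching today's callers. */
	return vma_alloc_zeroed_movable_folio(vma, addr, 0, 0);
}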