From patchwork Thu Jan 25 16:42:32 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531328
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev,
	maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org,
	mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org,
	rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com,
	vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com,
	kcc@google.com, hyesoo.yu@samsung.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 11/35] mm: Allow an arch to hook into folio
 allocation when VMA is known
Date: Thu, 25 Jan 2024 16:42:32 +0000
Message-Id: <20240125164256.4147-12-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>
MIME-Version: 1.0

arm64 uses VM_HIGH_ARCH_0 and VM_HIGH_ARCH_1 to enable MTE for a VMA. When
VM_HIGH_ARCH_0, which arm64 renames to VM_MTE, is set for a VMA, and the
gfp flag __GFP_ZERO is present, the __GFP_ZEROTAGS gfp flag also gets set
in vma_alloc_zeroed_movable_folio().

Expand this to be more generic by adding an arch hook that modifies the
gfp flags for an allocation when the VMA is known.

Note that __GFP_ZEROTAGS is ignored by the page allocator unless
__GFP_ZERO is also set; from that point of view, the current behaviour is
unchanged, even though the arm64 flag is set in more places.

When arm64 gains support for reusing the tag storage for data allocations,
the uses of the __GFP_ZEROTAGS flag will be expanded to instruct the page
allocator to try to reserve the corresponding tag storage for the pages
being allocated.

The flags returned by arch_calc_vma_gfp() are or'ed with the flags set by
the caller; this keeps an architecture from modifying the flags already
set by the core memory management code, similar to how
do_mmap() -> calc_vm_flag_bits() -> arch_calc_vm_flag_bits() is
implemented. This can be revisited in the future if there's a need to do
so.
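To illustrate the or-only composition, here is a standalone userspace
sketch (not part of this patch; the bit values and the main() harness are
made up for illustration, only the hook body mirrors the arm64 change):

	#include <stdio.h>

	typedef unsigned int gfp_t;

	#define __GFP_ZERO	(1u << 0)	/* illustrative bit positions */
	#define __GFP_ZEROTAGS	(1u << 1)
	#define VM_MTE		(1u << 2)

	struct vm_area_struct { unsigned long vm_flags; };

	/* Mirrors the arm64 hook added by this patch. */
	static gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp)
	{
		if (vma->vm_flags & VM_MTE)
			return __GFP_ZEROTAGS;
		return 0;
	}

	int main(void)
	{
		struct vm_area_struct vma = { .vm_flags = VM_MTE };
		gfp_t gfp = __GFP_ZERO;	/* flag chosen by core mm */

		/* The caller pattern used in vma_alloc_folio() and shmem: */
		gfp |= arch_calc_vma_gfp(&vma, gfp);

		/* __GFP_ZERO is still set; the arch could only add bits. */
		printf("gfp = %#x\n", gfp);	/* prints gfp = 0x3 */
		return 0;
	}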
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/include/asm/page.h    |  5 ++---
 arch/arm64/include/asm/pgtable.h |  3 +++
 arch/arm64/mm/fault.c            | 19 ++++++-------------
 include/linux/pgtable.h          |  7 +++++++
 mm/mempolicy.c                   |  1 +
 mm/shmem.c                       |  5 ++++-
 6 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..88bab032a493 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,9 +29,8 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
-						unsigned long vaddr);
-#define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
+#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
+	vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
 
 void tag_clear_highpage(struct page *to);
 #define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 79ce70fbb751..08f0904dbfc2 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1071,6 +1071,9 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 
 #endif /* CONFIG_ARM64_MTE */
 
+#define __HAVE_ARCH_CALC_VMA_GFP
+gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp);
+
 /*
  * On AArch64, the cache coherency is handled via the set_pte_at() function.
  */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 55f6455a8284..4d3f0a870ad8 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -937,22 +937,15 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr,
 NOKPROBE_SYMBOL(do_debug_exception);
 
 /*
- * Used during anonymous page fault handling.
+ * If this is called during anonymous page fault handling, and the page is
+ * mapped with PROT_MTE, initialise the tags at the point of tag zeroing as this
+ * is usually faster than separate DC ZVA and STGM.
  */
-struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
-						unsigned long vaddr)
+gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp)
 {
-	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
-
-	/*
-	 * If the page is mapped with PROT_MTE, initialise the tags at the
-	 * point of allocation and page zeroing as this is usually faster than
-	 * separate DC ZVA and STGM.
-	 */
 	if (vma->vm_flags & VM_MTE)
-		flags |= __GFP_ZEROTAGS;
-
-	return vma_alloc_folio(flags, 0, vma, vaddr, false);
+		return __GFP_ZEROTAGS;
+	return 0;
 }
 
 void tag_clear_highpage(struct page *page)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index c5ddec6b5305..98f81ca08cbe 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -901,6 +901,13 @@ static inline void arch_do_swap_page(struct mm_struct *mm,
 }
 #endif
 
+#ifndef __HAVE_ARCH_CALC_VMA_GFP
+static inline gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp)
+{
+	return 0;
+}
+#endif
+
 #ifndef __HAVE_ARCH_FREE_PAGES_PREPARE
 static inline void arch_free_pages_prepare(struct page *page, int order) { }
 #endif
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 10a590ee1c89..f7ef52760b32 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2168,6 +2168,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 	pgoff_t ilx;
 	struct page *page;
 
+	gfp |= arch_calc_vma_gfp(vma, gfp);
 	pol = get_vma_policy(vma, addr, order, &ilx);
 	page = alloc_pages_mpol(gfp | __GFP_COMP, order,
 				pol, ilx, numa_node_id());
diff --git a/mm/shmem.c b/mm/shmem.c
index d7c84ff62186..14427e9982f9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1585,7 +1585,7 @@ static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
  */
 static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 {
-	gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
+	gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM | __GFP_ZEROTAGS;
 	gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY;
 	gfp_t zoneflags = limit_gfp & GFP_ZONEMASK;
 	gfp_t result = huge_gfp & ~(allowflags | GFP_ZONEMASK);
@@ -2038,6 +2038,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		gfp_t huge_gfp;
 
 		huge_gfp = vma_thp_gfp_mask(vma);
+		huge_gfp |= arch_calc_vma_gfp(vma, huge_gfp);
 		huge_gfp = limit_gfp_mask(huge_gfp, gfp);
 		folio = shmem_alloc_and_add_folio(huge_gfp, inode, index,
 				fault_mm, true);
@@ -2214,6 +2215,8 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	vm_fault_t ret = 0;
 	int err;
 
+	gfp |= arch_calc_vma_gfp(vmf->vma, gfp);
+
 	/*
 	 * Trinity finds that probing a hole which tmpfs is punching can
 	 * prevent the hole-punch from ever completing: noted in i_private.