From patchwork Thu Jan 25 16:42:22 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531273
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 01/35] mm: page_alloc: Add gfp_flags parameter to arch_alloc_page()
Date: Thu, 25 Jan 2024 16:42:22 +0000
Message-Id: <20240125164256.4147-2-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

Extend the usefulness of arch_alloc_page() by adding the gfp_flags
parameter.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---

Changes since rfc v2:

* New patch.
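For context, the point of the new parameter is that an architecture override
of the hook can now make decisions based on the allocation flags. A minimal
sketch of such an override is below; it is purely illustrative, and the
__GFP_TAGGED flag (introduced later in this series) and the helper it calls
are hypothetical:

	/* Illustrative only: an arch override keyed off the new gfp_flags. */
	void arch_alloc_page(struct page *page, int order, gfp_t gfp_flags)
	{
		if (gfp_flags & __GFP_TAGGED)			/* hypothetical flag */
			arch_reserve_metadata(page, order);	/* hypothetical helper */
	}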
 arch/s390/include/asm/page.h | 2 +-
 arch/s390/mm/page-states.c   | 2 +-
 include/linux/gfp.h          | 2 +-
 mm/page_alloc.c              | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index 73b9c3bf377f..859f0958c574 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -163,7 +163,7 @@ static inline int page_reset_referenced(unsigned long addr)
 struct page;
 void arch_free_page(struct page *page, int order);
-void arch_alloc_page(struct page *page, int order);
+void arch_alloc_page(struct page *page, int order, gfp_t gfp_flags);
 
 static inline int devmem_is_allowed(unsigned long pfn)
 {
diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c
index 01f9b39e65f5..b986c8b158e3 100644
--- a/arch/s390/mm/page-states.c
+++ b/arch/s390/mm/page-states.c
@@ -21,7 +21,7 @@ void arch_free_page(struct page *page, int order)
 	__set_page_unused(page_to_virt(page), 1UL << order);
 }
 
-void arch_alloc_page(struct page *page, int order)
+void arch_alloc_page(struct page *page, int order, gfp_t gfp_flags)
 {
 	if (!cmma_flag)
 		return;
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index de292a007138..9e8aa3d144db 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -172,7 +172,7 @@ static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
 static inline void arch_free_page(struct page *page, int order) { }
 #endif
 #ifndef HAVE_ARCH_ALLOC_PAGE
-static inline void arch_alloc_page(struct page *page, int order) { }
+static inline void arch_alloc_page(struct page *page, int order, gfp_t gfp_flags) { }
 #endif
 
 struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 150d4f23b010..2c140abe5ee6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1485,7 +1485,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
-	arch_alloc_page(page, order);
+	arch_alloc_page(page, order, gfp_flags);
 	debug_pagealloc_map_pages(page, 1 << order);
 
 	/*

From patchwork Thu Jan 25 16:42:23 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531274
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 02/35] mm: page_alloc: Add an arch hook early in free_pages_prepare()
Date: Thu, 25 Jan 2024 16:42:23 +0000
Message-Id: <20240125164256.4147-3-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

The arm64 MTE code uses the PG_arch_2 page flag, which it renames to
PG_mte_tagged, to track if a page has been mapped with tagging enabled.
That flag is cleared by free_pages_prepare() by doing:

	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;

When tag storage management is added, tag storage will be reserved for a
page if and only if the page is mapped as tagged (the page flag
PG_mte_tagged is set). When a page is freed, likewise, the code will have
to look at the page flags to determine whether the page has tag storage
reserved, which should also be freed.

For this purpose, add an arch_free_pages_prepare() hook that is called
before the page flags are cleared. The function arch_free_page() has also
been considered for this purpose, but it is called after the flags are
cleared.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---

Changes since rfc v2:

* Expanded commit message (David Hildenbrand).
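To illustrate why the hook must run before the page flags are cleared, an
architecture override would typically test a page flag and release the
associated metadata, roughly as sketched below (hypothetical; the helper
free_metadata_storage() is a placeholder and not part of this patch):

	#define __HAVE_ARCH_FREE_PAGES_PREPARE
	static inline void arch_free_pages_prepare(struct page *page, int order)
	{
		/* PG_mte_tagged must still be visible in page->flags here. */
		if (page_mte_tagged(page))
			free_metadata_storage(page, order);	/* hypothetical helper */
	}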
 include/linux/pgtable.h | 4 ++++
 mm/page_alloc.c         | 1 +
 2 files changed, 5 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f6d0e3513948..6d98d5fdd697 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -901,6 +901,10 @@ static inline void arch_do_swap_page(struct mm_struct *mm,
 }
 #endif
 
+#ifndef __HAVE_ARCH_FREE_PAGES_PREPARE
+static inline void arch_free_pages_prepare(struct page *page, int order) { }
+#endif
+
 #ifndef __HAVE_ARCH_UNMAP_ONE
 /*
  * Some architectures support metadata associated with a page. When a
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2c140abe5ee6..27282a1c82fe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1092,6 +1092,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	trace_mm_page_free(page, order);
 	kmsan_free_page(page, order);
+	arch_free_pages_prepare(page, order);
 
 	if (memcg_kmem_online() && PageMemcgKmem(page))
 		__memcg_kmem_uncharge_page(page, order);

From patchwork Thu Jan 25 16:42:24 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531275
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 03/35] mm: page_alloc: Add an arch hook to filter MIGRATE_CMA allocations
Date: Thu, 25 Jan 2024 16:42:24 +0000
Message-Id: <20240125164256.4147-4-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

As an architecture might have specific requirements around the allocation
of CMA pages, add an arch hook that can disable allocations from
MIGRATE_CMA, if the allocation was otherwise allowed.

This will be used by arm64, which will put tag storage pages on the
MIGRATE_CMA list, and tag storage pages cannot be tagged. The filter will
be used to deny using MIGRATE_CMA for __GFP_TAGGED allocations.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
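The arm64 override added later in the series is expected to be a simple
predicate along these lines (illustrative sketch only; __GFP_TAGGED is
introduced in a later patch):

	#define __HAVE_ARCH_ALLOC_CMA
	static inline bool arch_alloc_cma(gfp_t gfp_mask)
	{
		/* Tag storage pages cannot themselves be tagged. */
		if (gfp_mask & __GFP_TAGGED)
			return false;
		return true;
	}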
 include/linux/pgtable.h | 7 +++++++
 mm/page_alloc.c         | 3 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 6d98d5fdd697..c5ddec6b5305 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -905,6 +905,13 @@ static inline void arch_do_swap_page(struct mm_struct *mm,
 static inline void arch_free_pages_prepare(struct page *page, int order) { }
 #endif
 
+#ifndef __HAVE_ARCH_ALLOC_CMA
+static inline bool arch_alloc_cma(gfp_t gfp)
+{
+	return true;
+}
+#endif
+
 #ifndef __HAVE_ARCH_UNMAP_ONE
 /*
  * Some architectures support metadata associated with a page. When a
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 27282a1c82fe..a96d47a6393e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3157,7 +3157,8 @@ static inline unsigned int gfp_to_alloc_flags_cma(gfp_t gfp_mask,
 						  unsigned int alloc_flags)
 {
 #ifdef CONFIG_CMA
-	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE &&
+	    arch_alloc_cma(gfp_mask))
 		alloc_flags |= ALLOC_CMA;
 #endif
 	return alloc_flags;

From patchwork Thu Jan 25 16:42:25 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531276
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 04/35] mm: page_alloc: Partially revert "mm: page_alloc: remove stale CMA guard code"
Date: Thu, 25 Jan 2024 16:42:25 +0000
Message-Id: <20240125164256.4147-5-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

The patch f945116e4e19 ("mm: page_alloc: remove stale CMA guard code")
removed the CMA filter when allocating from the MIGRATE_MOVABLE pcp list
because CMA is always allowed when __GFP_MOVABLE is set.

With the introduction of the arch_alloc_cma() function, the above is not
true anymore, so bring back the filter.

This is a partial revert, because the stale comment remains removed.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 mm/page_alloc.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a96d47a6393e..0fa34bcfb1af 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2897,10 +2897,17 @@ struct page *rmqueue(struct zone *preferred_zone,
 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
 
 	if (likely(pcp_allowed_order(order))) {
-		page = rmqueue_pcplist(preferred_zone, zone, order,
-				       migratetype, alloc_flags);
-		if (likely(page))
-			goto out;
+		/*
+		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
+		 * we need to skip it when CMA area isn't allowed.
+		 */
+		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
+		    migratetype != MIGRATE_MOVABLE) {
+			page = rmqueue_pcplist(preferred_zone, zone, order,
+					       migratetype, alloc_flags);
+			if (likely(page))
+				goto out;
+		}
 	}
 
 	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,

From patchwork Thu Jan 25 16:42:26 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531277
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 05/35] mm: cma: Don't append newline when generating CMA area name
Date: Thu, 25 Jan 2024 16:42:26 +0000
Message-Id: <20240125164256.4147-6-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

cma->name is displayed in several CMA messages. When the name is generated
by the CMA code, don't append a newline to avoid breaking the text across
two lines.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
---

Changes since rfc v2:

* New patch. This is a fix, and can be merged independently of the other
  patches.

 mm/cma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index 7c09c47e530b..f49c95f8ee37 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -204,7 +204,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (name)
 		snprintf(cma->name, CMA_MAX_NAME, name);
 	else
-		snprintf(cma->name, CMA_MAX_NAME, "cma%d\n", cma_area_count);
+		snprintf(cma->name, CMA_MAX_NAME, "cma%d", cma_area_count);
 
 	cma->base_pfn = PFN_DOWN(base);
 	cma->count = size >> PAGE_SHIFT;

From patchwork Thu Jan 25 16:42:27 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531278
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 06/35] mm: cma: Make CMA_ALLOC_SUCCESS/FAIL count the number of pages
Date: Thu, 25 Jan 2024 16:42:27 +0000
Message-Id: <20240125164256.4147-7-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

The CMA_ALLOC_SUCCESS and CMA_ALLOC_FAIL counters are increased by one
after each cma_alloc() call, even though cma_alloc() can allocate an
arbitrary number of CMA pages. When looking at /proc/vmstat, the number of
successful (or failed) cma_alloc() calls says little about how many CMA
pages were allocated via cma_alloc() versus via the page allocator
(regular allocation request or PCP list refill).
This can also be rather confusing to a user who isn't familiar with the
code, since the unit of measurement for nr_free_cma is pages, but
cma_alloc_success and cma_alloc_fail count cma_alloc() calls.

Let's make this consistent, and arguably more useful, by having
CMA_ALLOC_SUCCESS count the number of successfully allocated CMA pages,
and CMA_ALLOC_FAIL count the number of pages cma_alloc() failed to
allocate.

For users who wish to track the number of cma_alloc() calls, tracepoints
are already implemented for that.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---

 mm/cma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index f49c95f8ee37..dbf7fe8cb1bd 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -517,10 +517,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	pr_debug("%s(): returned %p\n", __func__, page);
 out:
 	if (page) {
-		count_vm_event(CMA_ALLOC_SUCCESS);
+		count_vm_events(CMA_ALLOC_SUCCESS, count);
 		cma_sysfs_account_success_pages(cma, count);
 	} else {
-		count_vm_event(CMA_ALLOC_FAIL);
+		count_vm_events(CMA_ALLOC_FAIL, count);
 		if (cma)
 			cma_sysfs_account_fail_pages(cma, count);
 	}

From patchwork Thu Jan 25 16:42:28 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531279
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 07/35] mm: cma: Add CMA_RELEASE_{SUCCESS,FAIL} events
Date: Thu, 25 Jan 2024 16:42:28 +0000
Message-Id: <20240125164256.4147-8-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

Similar to the two events that relate to CMA allocations, add the
CMA_RELEASE_SUCCESS and CMA_RELEASE_FAIL events that count when CMA pages
are freed.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---

Changes since rfc v2:

* New patch.

 include/linux/vm_event_item.h | 2 ++
 mm/cma.c                      | 6 +++++-
 mm/vmstat.c                   | 2 ++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 747943bc8cc2..aba5c5bf8127 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -83,6 +83,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_CMA
 	CMA_ALLOC_SUCCESS, CMA_ALLOC_FAIL,
+	CMA_RELEASE_SUCCESS,
+	CMA_RELEASE_FAIL,
 #endif
 	UNEVICTABLE_PGCULLED,	/* culled to noreclaim list */
 	UNEVICTABLE_PGSCANNED,	/* scanned for reclaimability */
diff --git a/mm/cma.c b/mm/cma.c
index dbf7fe8cb1bd..543bb6b3be8e 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -562,8 +562,10 @@ bool cma_release(struct cma *cma, const struct page *pages,
 {
 	unsigned long pfn;
 
-	if (!cma_pages_valid(cma, pages, count))
+	if (!cma_pages_valid(cma, pages, count)) {
+		count_vm_events(CMA_RELEASE_FAIL, count);
 		return false;
+	}
 
 	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
 
@@ -575,6 +577,8 @@ bool cma_release(struct cma *cma, const struct page *pages,
 	cma_clear_bitmap(cma, pfn, count);
 	trace_cma_release(cma->name, pfn, pages, count);
 
+	count_vm_events(CMA_RELEASE_SUCCESS, count);
+
 	return true;
 }
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index db79935e4a54..eebfd5c6c723 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1340,6 +1340,8 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_CMA
 	"cma_alloc_success",
 	"cma_alloc_fail",
+	"cma_release_success",
+	"cma_release_fail",
 #endif
 	"unevictable_pgs_culled",
 	"unevictable_pgs_scanned",

From patchwork Thu Jan 25 16:42:29 2024
X-Patchwork-Submitter: Alexandru Elisei <alexandru.elisei@arm.com>
X-Patchwork-Id: 13531280
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 08/35] mm: cma: Introduce cma_alloc_range()
Date: Thu, 25 Jan 2024 16:42:29 +0000
Message-Id: <20240125164256.4147-9-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>
U2FsdGVkX1/F8Ykhz0dk9SytB0wWiACC0JJ0M7gcAKkdqEQ0KgGxtviLJys6RRiUHSGXyRd2ZgDQHz/b+L4+OsutihN8FPiC1ubNwd33AC34BYqlWF3En+u8t+oDs9banibjILz/Yk1+2CA50HIOcq2bkGu4ME4QVeNJ8pjFnCzM+3cVleqMchkU1+fO3TYg5bmTpG6ObCW54Nt2l+Ly24Dj7g5j97cdbcN4uOcOPoTG2osHd5UJHWxYlneBMjo0RhfzOYu5HpX0zuCg9glgtZ/VxCoUxuJa1BeejME4+44e7eQGzKFS8Au7h2Ar/je/9ILpWw37Z7iWj8Rsbt/ebN6HdT5d1R5A+ZuBUJHHA+n9f9RWjZlsSyYKE6KYM1EZYYibTcwon06VXO7ZZwjsd9x1jmhRVGmoJAx1dx0E0V85I1erNKVRReZplYf8GSF0/M37Iyf1bRKhOFd2MPhdaHvGultuhxvl47YbyPHVOyfOdzo3EBLjtue8WlkkZDBZcv7Ssnw7KTV4twnORXZp4WEal0hTw0QDxF1Bdj9/+HSZMXCcJeVspxNoA1FBaIqcokEj4/z0A61S9etRudo6yfAWVAUrNnyyJoZf4sXGBMKAijPNdC5fwuD+ofLVSUAnGIYxdO4IXaUC4oK9+BoZH+DF1JbmBXAnrE9Oz5/qO6IHPzOXChqT+2btagS5q27PQY/6Q1Xxgiwo/JQKgl9Y1M3eMVf2Zjk6WbgiucT7JfvaO4yHrqIujrDY3a17SSsdJS/E6U4zEuPBIZbwyBgEDMkcg8gY3t+34TzTZ7Jl89be9h+EHWMdGiOaF87gwlupsY7Jo9kR8Pqisql5eIqBDsIEgBkuAVuvX6bOrNs0Og7YYfbdetfHPCukOsDmpgpwnzPK+10qa/XvilqLGx1s7tCNq8RUIsyX6PoGGbwG4g2UUAthQgN/stAjiT7jwFFYEkdxnn6pVTaZYtPrx4d hk1GKkG9 cO2XExwSmzwe1shOKYDaG+Cn5jrw3uNcTu45x4VZp+0CzdDXiEKEiF95/REazsyhMjSOKsS7cTfz1Yf6FxErXIPpvnFhu/bFjEB76k4rpJhHr4uK8HV5pFRxwVMIDPyO2J2jlyyGPquZdbvnc5c7VYeISuYSGlbQJTF7wsFIGyhz7IU40FuzBE1duPGaL8dd3V3ahsDTZUt+n3uQ= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Today, cma_alloc() is used to allocate a contiguous memory region. The function allows the caller to specify the number of pages to allocate, but not the starting address. cma_alloc() will walk over the entire CMA region trying to allocate the first available range of the specified size. Introduce cma_alloc_range(), which makes CMA more versatile by allowing the caller to specify a particular range in the CMA region, defined by the start pfn and the size. arm64 will make use of this function when tag storage management will be implemented: cma_alloc_range() will be used to reserve the tag storage associated with a tagged page. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch. 
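For illustration only, a minimal sketch of how a caller could reserve the block backing a tagged page with the new interface; the CMA region, pfn and retry count below are made-up placeholders, not taken from this series:

	static int reserve_tag_block(struct cma *tag_cma, unsigned long block_pfn,
				     unsigned long block_nr_pages)
	{
		/*
		 * Retry a few times if alloc_contig_range() returns -EBUSY;
		 * -EEXIST means the range is already set in the CMA bitmap.
		 */
		return cma_alloc_range(tag_cma, block_pfn, block_nr_pages, 5,
				       GFP_KERNEL);
	}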
include/linux/cma.h | 2 + include/trace/events/cma.h | 59 ++++++++++++++++++++++++++ mm/cma.c | 86 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 147 insertions(+) diff --git a/include/linux/cma.h b/include/linux/cma.h index 63873b93deaa..e32559da6942 100644 --- a/include/linux/cma.h +++ b/include/linux/cma.h @@ -50,6 +50,8 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, struct cma **res_cma); extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align, bool no_warn); +extern int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, + unsigned tries, gfp_t gfp); extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count); extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count); diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h index 25103e67737c..a89af313a572 100644 --- a/include/trace/events/cma.h +++ b/include/trace/events/cma.h @@ -36,6 +36,65 @@ TRACE_EVENT(cma_release, __entry->count) ); +TRACE_EVENT(cma_alloc_range_start, + + TP_PROTO(const char *name, unsigned long start, unsigned long count, + unsigned tries), + + TP_ARGS(name, start, count, tries), + + TP_STRUCT__entry( + __string(name, name) + __field(unsigned long, start) + __field(unsigned long, count) + __field(unsigned, tries) + ), + + TP_fast_assign( + __assign_str(name, name); + __entry->start = start; + __entry->count = count; + __entry->tries = tries; + ), + + TP_printk("name=%s start=%lx count=%lu tries=%u", + __get_str(name), + __entry->start, + __entry->count, + __entry->tries) +); + +TRACE_EVENT(cma_alloc_range_finish, + + TP_PROTO(const char *name, unsigned long start, unsigned long count, + unsigned attempts, int err), + + TP_ARGS(name, start, count, attempts, err), + + TP_STRUCT__entry( + __string(name, name) + __field(unsigned long, start) + __field(unsigned long, count) + __field(unsigned, attempts) + __field(int, err) + ), + + TP_fast_assign( + __assign_str(name, name); + __entry->start = start; + __entry->count = count; + __entry->attempts = attempts; + __entry->err = err; + ), + + TP_printk("name=%s start=%lx count=%lu attempts=%u err=%d", + __get_str(name), + __entry->start, + __entry->count, + __entry->attempts, + __entry->err) +); + TRACE_EVENT(cma_alloc_start, TP_PROTO(const char *name, unsigned long count, unsigned int align), diff --git a/mm/cma.c b/mm/cma.c index 543bb6b3be8e..4a0f68b9443b 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -416,6 +416,92 @@ static void cma_debug_show_areas(struct cma *cma) static inline void cma_debug_show_areas(struct cma *cma) { } #endif +/** + * cma_alloc_range() - allocate pages in a specific range + * @cma: Contiguous memory region for which the allocation is performed. + * @start: Starting pfn of the allocation. + * @count: Requested number of pages + * @tries: Number of tries if the range is busy + * @no_warn: Avoid printing message about failed allocation + * + * This function allocates part of contiguous memory from a specific contiguous + * memory area, from the specified starting address. The 'start' pfn and the the + * 'count' number of pages must be aligned to the CMA bitmap order per bit. 
+ */ +int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, + unsigned tries, gfp_t gfp) +{ + unsigned long bitmap_maxno, bitmap_no, bitmap_start, bitmap_count; + unsigned long i = 0; + struct page *page; + int err = -EINVAL; + + if (!cma || !cma->count || !cma->bitmap) + goto out_stats; + + trace_cma_alloc_range_start(cma->name, start, count, tries); + + if (!count || start < cma->base_pfn || + start + count > cma->base_pfn + cma->count) + goto out_stats; + + if (!IS_ALIGNED(start | count, 1 << cma->order_per_bit)) + goto out_stats; + + bitmap_start = (start - cma->base_pfn) >> cma->order_per_bit; + bitmap_maxno = cma_bitmap_maxno(cma); + bitmap_count = cma_bitmap_pages_to_bits(cma, count); + + spin_lock_irq(&cma->lock); + bitmap_no = bitmap_find_next_zero_area(cma->bitmap, bitmap_maxno, + bitmap_start, bitmap_count, 0); + if (bitmap_no != bitmap_start) { + spin_unlock_irq(&cma->lock); + err = -EEXIST; + goto out_stats; + } + bitmap_set(cma->bitmap, bitmap_start, bitmap_count); + spin_unlock_irq(&cma->lock); + + for (i = 0; i < tries; i++) { + mutex_lock(&cma_mutex); + err = alloc_contig_range(start, start + count, MIGRATE_CMA, gfp); + mutex_unlock(&cma_mutex); + + if (err != -EBUSY) + break; + } + + if (err) { + cma_clear_bitmap(cma, start, count); + } else { + page = pfn_to_page(start); + + /* + * CMA can allocate multiple page blocks, which results in + * different blocks being marked with different tags. Reset the + * tags to ignore those page blocks. + */ + for (i = 0; i < count; i++) + page_kasan_tag_reset(nth_page(page, i)); + } + +out_stats: + trace_cma_alloc_range_finish(cma->name, start, count, i, err); + + if (err) { + count_vm_events(CMA_ALLOC_FAIL, count); + if (cma) + cma_sysfs_account_fail_pages(cma, count); + } else { + count_vm_events(CMA_ALLOC_SUCCESS, count); + cma_sysfs_account_success_pages(cma, count); + } + + return err; +} + + /** * cma_alloc() - allocate pages from contiguous area * @cma: Contiguous memory region for which the allocation is performed. 
From patchwork Thu Jan 25 16:42:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531326 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 74617C47258 for ; Thu, 25 Jan 2024 16:43:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0F64A8D0007; Thu, 25 Jan 2024 11:43:42 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 0A73E8D0002; Thu, 25 Jan 2024 11:43:42 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EB10B8D0007; Thu, 25 Jan 2024 11:43:41 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id DE3378D0002 for ; Thu, 25 Jan 2024 11:43:41 -0500 (EST) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id B37D9C0E3B for ; Thu, 25 Jan 2024 16:43:41 +0000 (UTC) X-FDA: 81718404642.07.F51EE58 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf29.hostedemail.com (Postfix) with ESMTP id 025CF120003 for ; Thu, 25 Jan 2024 16:43:39 +0000 (UTC) Authentication-Results: imf29.hostedemail.com; dkim=none; spf=pass (imf29.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201020; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=basGy/kodhiUsaxMMrjtrbD17AoweJAAvAcr8Rm0dh0=; b=rQWujZTPgYVX43MjX61CWmtQsoIN3fY4pgPFKspHiGoWsLc4qgDTuQW2oYg+VxV9lhwxCw MbpTvGZae6TDipfUDjt2yibEmaHcGS8AuGFTW13ztWTVdRi5OFwYfMYqkPyvK/D+FjpaFE lX5pwtk+a3BxD12b0qHmA6BD7/ogDXg= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201020; a=rsa-sha256; cv=none; b=Zo8/TJgMEHbGKgXi/+c2x0ji6KSm7CgD0/sPwRLuIDYafvvh3pfXabQDup6BAr1qvoTfQK fbxPO0f5hKjYnayhIPDN3yCjrCfmZPJWa6ZgpG7pGffjMGR1L02npua+/Dl2JIar+E9gfn 5leuTrTVsFuTmrgYf23+qoWcI43wbPw= ARC-Authentication-Results: i=1; imf29.hostedemail.com; dkim=none; spf=pass (imf29.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7CBA5153B; Thu, 25 Jan 2024 08:44:23 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8DF503F5A1; Thu, 25 Jan 2024 08:43:33 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, 
bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 09/35] mm: cma: Introduce cma_remove_mem() Date: Thu, 25 Jan 2024 16:42:30 +0000 Message-Id: <20240125164256.4147-10-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Stat-Signature: omdgjf1a5t1rs9tshmbie57b3y8nkk7m X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 025CF120003 X-Rspam-User: X-HE-Tag: 1706201019-148603 X-HE-Meta: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Memory is added to CMA with cma_declare_contiguous_nid() and cma_init_reserved_mem(). This memory is then put on the MIGRATE_CMA list in cma_init_reserved_areas(), where the page allocator can make use of it. If a device manages multiple CMA areas, and there's an error when one of the areas is added to CMA, there is no mechanism for the device to prevent the rest of the areas, which were added before the error occurred, from being later added to the MIGRATE_CMA list. Add cma_remove_mem(), which allows a previously reserved CMA area to be removed so that it can no longer be used by the page allocator. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch.
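For illustration only, a minimal sketch (with made-up names, not code from this series) of the intended use: a driver that reserves two CMA areas can unwind the first one if reserving the second fails, so that neither area is activated by cma_init_reserved_areas():

	static struct cma *tag_cma[2];

	static int __init reserve_tag_areas(phys_addr_t base0, phys_addr_t base1,
					    phys_addr_t size)
	{
		int ret;

		ret = cma_init_reserved_mem(base0, size, 0, "tags0", &tag_cma[0]);
		if (ret)
			return ret;

		ret = cma_init_reserved_mem(base1, size, 0, "tags1", &tag_cma[1]);
		if (ret)
			/* "tags0" will be skipped by cma_init_reserved_areas(). */
			cma_remove_mem(&tag_cma[0]);

		return ret;
	}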
include/linux/cma.h | 1 + mm/cma.c | 30 +++++++++++++++++++++++++++++- 2 files changed, 30 insertions(+), 1 deletion(-) diff --git a/include/linux/cma.h b/include/linux/cma.h index e32559da6942..787cbec1702e 100644 --- a/include/linux/cma.h +++ b/include/linux/cma.h @@ -48,6 +48,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, unsigned int order_per_bit, const char *name, struct cma **res_cma); +extern void cma_remove_mem(struct cma **res_cma); extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align, bool no_warn); extern int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, diff --git a/mm/cma.c b/mm/cma.c index 4a0f68b9443b..2881bab12b01 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -147,8 +147,12 @@ static int __init cma_init_reserved_areas(void) { int i; - for (i = 0; i < cma_area_count; i++) + for (i = 0; i < cma_area_count; i++) { + /* Region was removed. */ + if (!cma_areas[i].count) + continue; cma_activate_area(&cma_areas[i]); + } return 0; } @@ -216,6 +220,30 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, return 0; } +/** + * cma_remove_mem() - remove cma area + * @res_cma: Pointer to the cma region. + * + * This function removes a cma region created with cma_init_reserved_mem(). The + * ->count is set to 0. + */ +void __init cma_remove_mem(struct cma **res_cma) +{ + struct cma *cma; + + if (WARN_ON_ONCE(!res_cma || !(*res_cma))) + return; + + cma = *res_cma; + if (WARN_ON_ONCE(!cma->count)) + return; + + totalcma_pages -= cma->count; + cma->count = 0; + + *res_cma = NULL; +} + /** * cma_declare_contiguous_nid() - reserve custom contiguous area * @base: Base address of the reserved area optional, use 0 for any From patchwork Thu Jan 25 16:42:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531327 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 124A5C47258 for ; Thu, 25 Jan 2024 16:43:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A1EE7280001; Thu, 25 Jan 2024 11:43:47 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 9817C8D0002; Thu, 25 Jan 2024 11:43:47 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7D59C280001; Thu, 25 Jan 2024 11:43:47 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 65C508D0002 for ; Thu, 25 Jan 2024 11:43:47 -0500 (EST) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 1FF15C0E3D for ; Thu, 25 Jan 2024 16:43:47 +0000 (UTC) X-FDA: 81718404894.18.DC4DB55 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf18.hostedemail.com (Postfix) with ESMTP id 80E811C001A for ; Thu, 25 Jan 2024 16:43:45 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=none; spf=pass (imf18.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201025; a=rsa-sha256; cv=none; 
b=HKApxrc9PSI9e8Med4Rb29ny/i14j/gXbqtfuDpCM9+tA7OaQbyEOzXKGx//f1Wh44gmT/ DfZdUq9qK3+8yltFqupmHK/164NQcwhwSrUsd7PS5xlK1BmPtyRJgkdH0AIa9W0HTBmssP EOQ86jExXwPazmjq9QxAGeUarTrtgUM= ARC-Authentication-Results: i=1; imf18.hostedemail.com; dkim=none; spf=pass (imf18.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201025; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=BrAYirD1C9ZlW+xHPq+DxNqIyz5EwyKAGNdhCWfnghE=; b=Vk3pmoDVR9a00+IeOfrC9tA488gxzuFEpcnVtqyltK8uEeFcQZP/hJ6pmWkm42mOhuyI5i r+jtFjbFNtU+3mZV7fXOGuMr7/OSsxiqusuWTphquKfXn54d9VzZlOjv3VNy55VOIFwl6o q2OsVpo6KbYKmlqrhUgUCCG3B38f1Ug= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4954D152B; Thu, 25 Jan 2024 08:44:29 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5543F3F8A4; Thu, 25 Jan 2024 08:43:39 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 10/35] mm: cma: Fast track allocating memory when the pages are free Date: Thu, 25 Jan 2024 16:42:31 +0000 Message-Id: <20240125164256.4147-11-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 80E811C001A X-Stat-Signature: dchkjutpc4h5ksa8zbsxwpbcykats9zj X-HE-Tag: 1706201025-143589 X-HE-Meta: 
U2FsdGVkX18AyJgDmuRp709Vhvex3eXF/I0ilYO4nflcCDqdU/POB5wylKt/XTLdhm/J9bpNjIATBYrPtlo8lKe71rmsDvLMqRStQmbCZmH1cIvXi2BkcTKgcbG4+TP9uQy79gJT9adnUVokzsvSFFHLpybxrFQU1SaCc5kjhyI2Ybku/qeIxona0HOM2Cgxu1NnULRCKSx1JgdG8LDv0HQvKwAURLjXDSVYK9StYE6rUiADF9+B+nSkzkOyNRcnpfj2Gq+Pnb33+ZWeA0jG5hsit62bru/4eqVh1R6LFKFav5ha2crkRMHrPKe8+mQ5G649pzzMTaWAkSXiY5zChv8wxWjCbh0Z33kKTDDsVdLXWbLkh7lZMCwEkVUIy+ne/d9aXkpYiBpMw+1Zs3MCMO6vKkwqNRZU83wPNKC3GTaIQg4883lCa53SeSN6ZhNL/0f6pFJ9bMUHPm7Ouyu67GNGOKP7bHorUO9V2ARByKXREw2XvXS4rTtUjKDIFqOzS+RESvvmN9LN6vAvv+jIy7SAls/MEuZ9AGnv1eHyZ33TvdDAmtpHCzHACkpu0swfr9U2HSmNBAWU0xJGnRqrzFAGYGVkrG2TF6gA8NQfeSFXxneZo9x/guZbh9pPLP6B27U7nZ1wSd6ts5IbFWHfAZwJeHPvUwrMBAviiM1g6CP3S3sw89r2h1Rhu2vFZVEtZtm2YUaRWSpiV6oWezuF1eM9sQg4Fwz0OIspip15xstUllnUmus3gb1YoX+l95TCA/IUchLb/bwAkbATZy4sXgfKx1EZti7RZB+u/kh/I7PQrsZSFV4AKr7My5GelkvhZxefywllJKgmZ8JHlBJao64oihx/vsql X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: If the pages to be allocated are free, take them directly off the buddy allocator, instead of going through alloc_contig_range() and avoiding costly calls to lru_cache_disable(). Only allocations of the same size as the CMA region order are considered, to avoid taking the zone spinlock for too long. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch. Reworked from the rfc v2 patch #26 ("arm64: mte: Fast track reserving tag storage when the block is free") (David Hildenbrand). include/linux/page-flags.h | 15 ++++++++++++-- mm/Kconfig | 5 +++++ mm/cma.c | 42 ++++++++++++++++++++++++++++++++++---- mm/memory-failure.c | 8 ++++---- mm/page_alloc.c | 23 ++++++++++++--------- 5 files changed, 73 insertions(+), 20 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 735cddc13d20..b7237bce7446 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -575,11 +575,22 @@ TESTSCFLAG(HWPoison, hwpoison, PF_ANY) #define MAGIC_HWPOISON 0x48575053U /* HWPS */ extern void SetPageHWPoisonTakenOff(struct page *page); extern void ClearPageHWPoisonTakenOff(struct page *page); -extern bool take_page_off_buddy(struct page *page); -extern bool put_page_back_buddy(struct page *page); +extern bool PageHWPoisonTakenOff(struct page *page); #else PAGEFLAG_FALSE(HWPoison, hwpoison) +TESTSCFLAG_FALSE(HWPoison, hwpoison) #define __PG_HWPOISON 0 +static inline void SetPageHWPoisonTakenOff(struct page *page) { } +static inline void ClearPageHWPoisonTakenOff(struct page *page) { } +static inline bool PageHWPoisonTakenOff(struct page *page) +{ + return false; +} +#endif + +#ifdef CONFIG_WANTS_TAKE_PAGE_OFF_BUDDY +extern bool take_page_off_buddy(struct page *page, bool poison); +extern bool put_page_back_buddy(struct page *page, bool unpoison); #endif #if defined(CONFIG_PAGE_IDLE_FLAG) && defined(CONFIG_64BIT) diff --git a/mm/Kconfig b/mm/Kconfig index ffc3a2ba3a8c..341cf53898db 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -745,12 +745,16 @@ config DEFAULT_MMAP_MIN_ADDR config ARCH_SUPPORTS_MEMORY_FAILURE bool +config WANTS_TAKE_PAGE_OFF_BUDDY + bool + config MEMORY_FAILURE depends on MMU depends on ARCH_SUPPORTS_MEMORY_FAILURE bool "Enable recovery from hardware memory errors" select MEMORY_ISOLATION select RAS + select WANTS_TAKE_PAGE_OFF_BUDDY help Enables code to recover from some memory failures on systems with MCA recovery. 
This allows a system to continue running @@ -891,6 +895,7 @@ config CMA depends on MMU select MIGRATION select MEMORY_ISOLATION + select WANTS_TAKE_PAGE_OFF_BUDDY help This enables the Contiguous Memory Allocator which allows other subsystems to allocate big physically-contiguous blocks of memory. diff --git a/mm/cma.c b/mm/cma.c index 2881bab12b01..15663f95d77b 100644 --- a/mm/cma.c +++ b/mm/cma.c @@ -444,6 +444,34 @@ static void cma_debug_show_areas(struct cma *cma) static inline void cma_debug_show_areas(struct cma *cma) { } #endif +/* Called with the cma mutex held. */ +static int cma_alloc_pages_fastpath(struct cma *cma, unsigned long start, + unsigned long end) +{ + bool success = false; + unsigned long i, j; + + /* Avoid contention on the zone lock. */ + if (start - end != 1 << cma->order_per_bit) + return -EINVAL; + + for (i = start; i < end; i++) { + if (!is_free_buddy_page(pfn_to_page(i))) + break; + success = take_page_off_buddy(pfn_to_page(i), false); + if (!success) + break; + } + + if (success) + return 0; + + for (j = start; j < i; j++) + put_page_back_buddy(pfn_to_page(j), false); + + return -EBUSY; +} + /** * cma_alloc_range() - allocate pages in a specific range * @cma: Contiguous memory region for which the allocation is performed. @@ -493,7 +521,11 @@ int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, for (i = 0; i < tries; i++) { mutex_lock(&cma_mutex); - err = alloc_contig_range(start, start + count, MIGRATE_CMA, gfp); + err = cma_alloc_pages_fastpath(cma, start, start + count); + if (err) { + err = alloc_contig_range(start, start + count, + MIGRATE_CMA, gfp); + } mutex_unlock(&cma_mutex); if (err != -EBUSY) @@ -529,7 +561,6 @@ int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count, return err; } - /** * cma_alloc() - allocate pages from contiguous area * @cma: Contiguous memory region for which the allocation is performed. @@ -589,8 +620,11 @@ struct page *cma_alloc(struct cma *cma, unsigned long count, pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit); mutex_lock(&cma_mutex); - ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, - GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0)); + ret = cma_alloc_pages_fastpath(cma, pfn, pfn + count); + if (ret) { + ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, + GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0)); + } mutex_unlock(&cma_mutex); if (ret == 0) { page = pfn_to_page(pfn); diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 4f9b61f4a668..b87b533a9871 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -157,7 +157,7 @@ static int __page_handle_poison(struct page *page) zone_pcp_disable(page_zone(page)); ret = dissolve_free_huge_page(page); if (!ret) - ret = take_page_off_buddy(page); + ret = take_page_off_buddy(page, true); zone_pcp_enable(page_zone(page)); return ret; @@ -1353,7 +1353,7 @@ static int page_action(struct page_state *ps, struct page *p, return action_result(pfn, ps->type, result); } -static inline bool PageHWPoisonTakenOff(struct page *page) +bool PageHWPoisonTakenOff(struct page *page) { return PageHWPoison(page) && page_private(page) == MAGIC_HWPOISON; } @@ -2247,7 +2247,7 @@ int memory_failure(unsigned long pfn, int flags) res = get_hwpoison_page(p, flags); if (!res) { if (is_free_buddy_page(p)) { - if (take_page_off_buddy(p)) { + if (take_page_off_buddy(p, true)) { page_ref_inc(p); res = MF_RECOVERED; } else { @@ -2578,7 +2578,7 @@ int unpoison_memory(unsigned long pfn) ret = folio_test_clear_hwpoison(folio) ? 
0 : -EBUSY; } else if (ghp < 0) { if (ghp == -EHWPOISON) { - ret = put_page_back_buddy(p) ? 0 : -EBUSY; + ret = put_page_back_buddy(p, true) ? 0 : -EBUSY; } else { ret = ghp; unpoison_pr_info("Unpoison: failed to grab page %#lx\n", diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 0fa34bcfb1af..502ee3eb8583 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -6655,7 +6655,7 @@ bool is_free_buddy_page(struct page *page) } EXPORT_SYMBOL(is_free_buddy_page); -#ifdef CONFIG_MEMORY_FAILURE +#ifdef CONFIG_WANTS_TAKE_PAGE_OFF_BUDDY /* * Break down a higher-order page in sub-pages, and keep our target out of * buddy allocator. @@ -6687,9 +6687,9 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page, } /* - * Take a page that will be marked as poisoned off the buddy allocator. + * Take a page off the buddy allocator, and optionally mark it as poisoned. */ -bool take_page_off_buddy(struct page *page) +bool take_page_off_buddy(struct page *page, bool poison) { struct zone *zone = page_zone(page); unsigned long pfn = page_to_pfn(page); @@ -6710,7 +6710,8 @@ bool take_page_off_buddy(struct page *page) del_page_from_free_list(page_head, zone, page_order); break_down_buddy_pages(zone, page_head, page, 0, page_order, migratetype); - SetPageHWPoisonTakenOff(page); + if (poison) + SetPageHWPoisonTakenOff(page); if (!is_migrate_isolate(migratetype)) __mod_zone_freepage_state(zone, -1, migratetype); ret = true; @@ -6724,9 +6725,10 @@ bool take_page_off_buddy(struct page *page) } /* - * Cancel takeoff done by take_page_off_buddy(). + * Cancel takeoff done by take_page_off_buddy(), and optionally unpoison the + * page. */ -bool put_page_back_buddy(struct page *page) +bool put_page_back_buddy(struct page *page, bool unpoison) { struct zone *zone = page_zone(page); unsigned long pfn = page_to_pfn(page); @@ -6736,17 +6738,18 @@ bool put_page_back_buddy(struct page *page) spin_lock_irqsave(&zone->lock, flags); if (put_page_testzero(page)) { - ClearPageHWPoisonTakenOff(page); + VM_WARN_ON_ONCE(PageHWPoisonTakenOff(page) && !unpoison); + if (unpoison) + ClearPageHWPoisonTakenOff(page); __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE); - if (TestClearPageHWPoison(page)) { + if (!unpoison || (unpoison && TestClearPageHWPoison(page))) ret = true; - } } spin_unlock_irqrestore(&zone->lock, flags); return ret; } -#endif +#endif /* CONFIG_WANTS_TAKE_PAGE_OFF_BUDDY */ #ifdef CONFIG_ZONE_DMA bool has_managed_dma(void) From patchwork Thu Jan 25 16:42:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531328 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A244FC47258 for ; Thu, 25 Jan 2024 16:43:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 368036B0071; Thu, 25 Jan 2024 11:43:53 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 256766B007B; Thu, 25 Jan 2024 11:43:53 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 033B76B0087; Thu, 25 Jan 2024 11:43:52 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id E25FA6B0071 for ; Thu, 25 Jan 2024 11:43:52 -0500 (EST) Received: from 
smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id C3E4080804 for ; Thu, 25 Jan 2024 16:43:52 +0000 (UTC) X-FDA: 81718405104.27.4EBEF38 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf18.hostedemail.com (Postfix) with ESMTP id 4793F1C001C for ; Thu, 25 Jan 2024 16:43:51 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=none; spf=pass (imf18.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201031; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=2laAblo3Li8xXd9cFA3POwPPSKs1U3ZPUbQ/1+rjObk=; b=w9WCtRmjA5UA9B3NDh3TOibyux+WV7rMmDne8M1/XcYYoHRLI6rLV4qUDh2sq5ZL+506vc jXHxcI7k5pGmBX6Qmdt/N4ngsSPlMXb3SSR+qDeOlB+DRnhoy+CEeL2OvODRBDNKOUUZ3L lMPComlX+7bQyt+2PM+TzRzs1EjD3x8= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201031; a=rsa-sha256; cv=none; b=qTcVNKvqAqn8BkkOfwBpJsZF9xHHEDPJNlSLXelbD5dXL/OFYOiPzlxA+jpQp3Hg5DI7KG 739Q18DRzrhr2zar4fmBj9cA1KIXzrV2ogZqDD8cEBOz9r67NQCAQNWwLjrnD5U8VQ27Xc 4UV2RouNdSxjL+PpQBoXfjSR4/fDuzE= ARC-Authentication-Results: i=1; imf18.hostedemail.com; dkim=none; spf=pass (imf18.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 113601570; Thu, 25 Jan 2024 08:44:35 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 240823F5A1; Thu, 25 Jan 2024 08:43:45 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 11/35] mm: Allow an arch to hook into folio allocation when VMA is known Date: Thu, 25 Jan 2024 16:42:32 +0000 Message-Id: <20240125164256.4147-12-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 4793F1C001C X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: pbkda4frhookgahedy11abob4y78ej7y X-HE-Tag: 1706201031-985229 X-HE-Meta: 
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: arm64 uses VM_HIGH_ARCH_0 and VM_HIGH_ARCH_1 for enabling MTE for a VMA. When VM_HIGH_ARCH_0, which arm64 renames to VM_MTE, is set for a VMA, and the gfp flag __GFP_ZERO is present, the __GFP_ZEROTAGS gfp flag also gets set in vma_alloc_zeroed_movable_folio(). Expand this to be more generic by adding an arch hook that modifies the gfp flags for an allocation when the VMA is known. Note that __GFP_ZEROTAGS is ignored by the page allocator unless __GFP_ZERO is also set; from that point of view, the current behaviour is unchanged, even though the arm64 flag is set in more places. When arm64 gains support for reusing tag storage for data allocations, the use of the __GFP_ZEROTAGS flag will be expanded to instruct the page allocator to try to reserve the corresponding tag storage for the pages being allocated. The flags returned by arch_calc_vma_gfp() are or'ed with the flags set by the caller; this keeps an architecture from modifying the flags already set by the core memory management code, and is similar to how do_mmap() -> calc_vm_flag_bits() -> arch_calc_vm_flag_bits() is implemented. This can be revisited in the future if there's a need to do so.
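To illustrate the contract (a sketch, not code from this series): an architecture that wants extra gfp bits for VMAs carrying a hypothetical VM_FOO flag would provide something along these lines, and callers combine the result with gfp |= arch_calc_vma_gfp(vma, gfp), as in the vma_alloc_folio() hunk below:

	#define __HAVE_ARCH_CALC_VMA_GFP
	static inline gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp)
	{
		/* VM_FOO stands in for an arch-specific VMA flag. */
		if (vma->vm_flags & VM_FOO)
			return __GFP_ZEROTAGS;
		return 0;
	}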
Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/page.h | 5 ++--- arch/arm64/include/asm/pgtable.h | 3 +++ arch/arm64/mm/fault.c | 19 ++++++------------- include/linux/pgtable.h | 7 +++++++ mm/mempolicy.c | 1 + mm/shmem.c | 5 ++++- 6 files changed, 23 insertions(+), 17 deletions(-) diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 2312e6ee595f..88bab032a493 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -29,9 +29,8 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE -struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma, - unsigned long vaddr); -#define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio +#define vma_alloc_zeroed_movable_folio(vma, vaddr) \ + vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false) void tag_clear_highpage(struct page *to); #define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 79ce70fbb751..08f0904dbfc2 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1071,6 +1071,9 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) #endif /* CONFIG_ARM64_MTE */ +#define __HAVE_ARCH_CALC_VMA_GFP +gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp); + /* * On AArch64, the cache coherency is handled via the set_pte_at() function. */ diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 55f6455a8284..4d3f0a870ad8 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -937,22 +937,15 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr, NOKPROBE_SYMBOL(do_debug_exception); /* - * Used during anonymous page fault handling. + * If this is called during anonymous page fault handling, and the page is + * mapped with PROT_MTE, initialise the tags at the point of tag zeroing as this + * is usually faster than separate DC ZVA and STGM. */ -struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma, - unsigned long vaddr) +gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp) { - gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO; - - /* - * If the page is mapped with PROT_MTE, initialise the tags at the - * point of allocation and page zeroing as this is usually faster than - * separate DC ZVA and STGM. 
- */ if (vma->vm_flags & VM_MTE) - flags |= __GFP_ZEROTAGS; - - return vma_alloc_folio(flags, 0, vma, vaddr, false); + return __GFP_ZEROTAGS; + return 0; } void tag_clear_highpage(struct page *page) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index c5ddec6b5305..98f81ca08cbe 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -901,6 +901,13 @@ static inline void arch_do_swap_page(struct mm_struct *mm, } #endif +#ifndef __HAVE_ARCH_CALC_VMA_GFP +static inline gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp) +{ + return 0; +} +#endif + #ifndef __HAVE_ARCH_FREE_PAGES_PREPARE static inline void arch_free_pages_prepare(struct page *page, int order) { } #endif diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 10a590ee1c89..f7ef52760b32 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2168,6 +2168,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, pgoff_t ilx; struct page *page; + gfp |= arch_calc_vma_gfp(vma, gfp); pol = get_vma_policy(vma, addr, order, &ilx); page = alloc_pages_mpol(gfp | __GFP_COMP, order, pol, ilx, numa_node_id()); diff --git a/mm/shmem.c b/mm/shmem.c index d7c84ff62186..14427e9982f9 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1585,7 +1585,7 @@ static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp, */ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp) { - gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM; + gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM | __GFP_ZEROTAGS; gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY; gfp_t zoneflags = limit_gfp & GFP_ZONEMASK; gfp_t result = huge_gfp & ~(allowflags | GFP_ZONEMASK); @@ -2038,6 +2038,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, gfp_t huge_gfp; huge_gfp = vma_thp_gfp_mask(vma); + huge_gfp |= arch_calc_vma_gfp(vma, huge_gfp); huge_gfp = limit_gfp_mask(huge_gfp, gfp); folio = shmem_alloc_and_add_folio(huge_gfp, inode, index, fault_mm, true); @@ -2214,6 +2215,8 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf) vm_fault_t ret = 0; int err; + gfp |= arch_calc_vma_gfp(vmf->vma, gfp); + /* * Trinity finds that probing a hole which tmpfs is punching can * prevent the hole-punch from ever completing: noted in i_private. 
From patchwork Thu Jan 25 16:42:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531329 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBE17C47258 for ; Thu, 25 Jan 2024 16:43:59 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6278C6B007B; Thu, 25 Jan 2024 11:43:59 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5B0E06B0087; Thu, 25 Jan 2024 11:43:59 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4516A8D0002; Thu, 25 Jan 2024 11:43:59 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 2E65E6B007B for ; Thu, 25 Jan 2024 11:43:59 -0500 (EST) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 0C141A0A61 for ; Thu, 25 Jan 2024 16:43:59 +0000 (UTC) X-FDA: 81718405398.27.96708FC Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf19.hostedemail.com (Postfix) with ESMTP id 450001A0007 for ; Thu, 25 Jan 2024 16:43:57 +0000 (UTC) Authentication-Results: imf19.hostedemail.com; dkim=none; spf=pass (imf19.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201037; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=t/1X1P8QVoHQyW9paC2c7VyRn9dvBn9hTvEEdUS4Dys=; b=6XHOHUFH06SttVx/Ln3YnGqr0/AZc9LsX+l9P20yZRCmXXhr8iiOIs/defkzE1yOX/N6IV j25GpiiwSuuzbY4FPuMGQgLBQsgHnlx3uOj2206SsXpnU3TbLlZanOoamh4qUAa/Mh5NP1 Hhhg7sQ2XvM1Hf/ONf4L59/HCeGzPyQ= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201037; a=rsa-sha256; cv=none; b=DEaPta2m/pjGVtqp+SwRgeir3gzvQwhbPTBpT0k00yiVfHfDuMLpTW/D1wVGh4g3nHGr3e HSFPR3P9LI1GX8tpeQPR3Ukb4EAJzHAhGIeVeGdmlYeHhudVOUoUbJBUFsGbyW+g8vioiE Kk6huXXiJGekATlMZ0HwXDtj39IkefE= ARC-Authentication-Results: i=1; imf19.hostedemail.com; dkim=none; spf=pass (imf19.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CDDEF1576; Thu, 25 Jan 2024 08:44:40 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DE15E3F5A1; Thu, 25 Jan 2024 08:43:50 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, 
bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 12/35] mm: Call arch_swap_prepare_to_restore() before arch_swap_restore() Date: Thu, 25 Jan 2024 16:42:33 +0000 Message-Id: <20240125164256.4147-13-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Stat-Signature: rj6cpzu1dfe6hhk6y3qcukzyp5rz1owc X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 450001A0007 X-Rspam-User: X-HE-Tag: 1706201037-294687 X-HE-Meta: U2FsdGVkX1+OeGelHmyTMgcoMyioEpq23/935GmapYBkr4tLNqOQPNm3jtL8keZmugGDmIUh74Kb25XjIYNmuhuV5oBodT/TgRb8He9nd5zVKalbTmVuRG0CyR+pNpDOhwriCgtWEW2QX4C5yXoFODc0Iwy5fBgJyqIHJ0rbKjkhHPP0ejPw5hG4PCpLmpPIbZ4BNJwikC9pYTbhlNws2MdJKMtEUGX1qaINUgiawIWYoV+ziAXU7zrOziVaYB2n4o+t4jqwbDhc4r02+QHMl4+d0Tw8ne9QTixYUMkDj06obV1nZUQ6C6AXQpHdVU71BQ+hP56ODsZrqPnpwWP8LzlfL6ZV1e59230zvsXqeLGGQxRHJ8PdLFrAOpxHxWHhfU6IgoUn150BgVN/8T0nfmVK9sI8qAKIzemZrIZdjjQMHcIR6KLusGX6E4J1mYP4auRCubO91B+o01Pwj7VOFVsKnQFDbevsuuru1uUXyBXFPfpd91M8k7naglwv0/7O9FS+magCmNrz+z6Q//XlhEH8QEIDr3rOpKDOPv6Rdu0p+e9/AOMFKVwN5+HR/HqwVpbSmt6SarV2l6XCFsiegnSRgw4tyK9LivJrMaXgYiqvEfFJQ6B3RByEdyXFm+lOGP0a8VqISD/KF0IMFaWuD9ahHBT3NMVFVvuXQs3UgwXOoTVCXeSern1EeNlz81a+i+4bFga0bNV7sRZDEM63rLY+3cy9tHr7wyux2NoABRXiDcGEQVsEqT7OJi5AoO0qy+FEGUpA0YQ8x4F+JRHz25gmbx6AgHPdHmnBv3beSs/q37SAfy+4gozVJ0/VJN/eLZQCRLczSXjQHz1MPSxsrzZg/7mGJkxnxZQzbFOl14NDUYaIW7e1pUo2N05UlF2rTv8xEum8LWMU1xUtz51x9VKWG9KzbqHA37LbkE03t98EGfuumRGCVirE1kQsGoSL4Omn9TvXFFJfx7Pxh1Q /T92oOj1 KX02ADlQ/0EAPEmyIuvzFPma4vySa8AstvwzTvlC2NOfW5OlhvOFxH9mOwrREHeKTynYWDNL3vBRjM1PA+DxEfLrFGwJilrQplXiWNXyFFHKXpna3Rk2JOQQxB9G61X+TWnknnpDXNRLbplii/3tuJ3cMxpTe1Niw6z/HDo7k/ay5HOqEAHlf1LpsYnY7mn5Sc7kHW03lSstcDSw= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: arm64 uses arch_swap_restore() to restore saved tags before the page is swapped in and it's called in atomic context (with the ptl lock held). Introduce arch_swap_prepare_to_restore() that will allow an architecture to perform extra work during swap in and outside of a critical section. This will be used by arm64 to allocate a buffer in memory where to temporarily save tags if tag storage is not available for the page being swapped in. 
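For illustration (a sketch under assumptions, not the arm64 implementation): an architecture override could perform a sleeping allocation at this point, before arch_swap_restore() runs with the ptl held; arch_reserve_restore_buffer() is a hypothetical helper:

	#define __HAVE_ARCH_SWAP_PREPARE_TO_RESTORE
	static inline vm_fault_t arch_swap_prepare_to_restore(swp_entry_t entry,
							      struct folio *folio)
	{
		/* May sleep: called before the page table lock is taken. */
		if (!arch_reserve_restore_buffer(entry, folio))
			return VM_FAULT_OOM;
		return 0;
	}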
Signed-off-by: Alexandru Elisei --- include/linux/pgtable.h | 7 +++++++ mm/memory.c | 4 ++++ mm/shmem.c | 9 +++++++++ mm/swapfile.c | 5 +++++ 4 files changed, 25 insertions(+) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 98f81ca08cbe..2d0f04042f62 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -959,6 +959,13 @@ static inline void arch_swap_invalidate_area(int type) } #endif +#ifndef __HAVE_ARCH_SWAP_PREPARE_TO_RESTORE +static inline vm_fault_t arch_swap_prepare_to_restore(swp_entry_t entry, struct folio *folio) +{ + return 0; +} +#endif + #ifndef __HAVE_ARCH_SWAP_RESTORE static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) { diff --git a/mm/memory.c b/mm/memory.c index 7e1f4849463a..8a421e168b57 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3975,6 +3975,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) folio_throttle_swaprate(folio, GFP_KERNEL); + ret = arch_swap_prepare_to_restore(entry, folio); + if (ret) + goto out_page; + /* * Back out if somebody else already faulted in this pte. */ diff --git a/mm/shmem.c b/mm/shmem.c index 14427e9982f9..621fabc3b8c6 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1855,6 +1855,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index, struct swap_info_struct *si; struct folio *folio = NULL; swp_entry_t swap; + vm_fault_t ret; int error; VM_BUG_ON(!*foliop || !xa_is_value(*foliop)); @@ -1903,6 +1904,14 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index, } folio_wait_writeback(folio); + ret = arch_swap_prepare_to_restore(swap, folio); + if (ret) { + if (fault_type) + *fault_type = ret; + error = -EINVAL; + goto unlock; + } + /* * Some architectures may have to restore extra metadata to the * folio after reading from swap. diff --git a/mm/swapfile.c b/mm/swapfile.c index 556ff7347d5f..49425598f778 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1785,6 +1785,11 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd, goto setpte; } + if (arch_swap_prepare_to_restore(entry, folio)) { + ret = -EINVAL; + goto out; + } + /* * Some architectures may have to restore extra metadata to the page * when reading from swap. 
This metadata may be indexed by swap entry From patchwork Thu Jan 25 16:42:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531330 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC434C47258 for ; Thu, 25 Jan 2024 16:44:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 64DE06B0081; Thu, 25 Jan 2024 11:44:05 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5D82B8D0005; Thu, 25 Jan 2024 11:44:05 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4515C8D0002; Thu, 25 Jan 2024 11:44:05 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 2E7B86B0087 for ; Thu, 25 Jan 2024 11:44:05 -0500 (EST) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 02521140C3A for ; Thu, 25 Jan 2024 16:44:04 +0000 (UTC) X-FDA: 81718405650.02.38520B8 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf04.hostedemail.com (Postfix) with ESMTP id 1168540002 for ; Thu, 25 Jan 2024 16:44:02 +0000 (UTC) Authentication-Results: imf04.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf04.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201043; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6oGEDD7ufyPH4wqwJTovB2Szb2CcClxeHNPYX1VLAVI=; b=04PDP6nXWH4iD7rnyRTGLmaKAFGeQJdLyoCZUbT8Xb65SiTRr+gzVoqdvZAuH9UH+DLoyf aEVLNRe+k0vrRVhTFl7O94BDsaGJ1RBWDZKThT5IqXYvhhFgqRWxqApmfJyqBoJVG40vVy Pp/xKUD+yHwFHmwUQMHkfVV3/xY10r4= ARC-Authentication-Results: i=1; imf04.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf04.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201043; a=rsa-sha256; cv=none; b=tn7p44y6c/b5HACXDU8uNoJcpf0spjB0iRpmvjOsYpfAOgbJ7TCqpbI4FUdGOOFNrICzLK 3fWiRUeKoBQqYR7pytnSUQE2g0Um8m0ZTJGu3i2Sb3EI76ClENAXGMzxJtpt45Ym2PH2Tu PZwK9Q9eaPefnkqi3oxacz8TppNZ5dY= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B7E751596; Thu, 25 Jan 2024 08:44:46 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A76463F5A1; Thu, 25 Jan 2024 08:43:56 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, 
bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 13/35] mm: memory: Introduce fault-on-access mechanism for pages Date: Thu, 25 Jan 2024 16:42:34 +0000 Message-Id: <20240125164256.4147-14-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 1168540002 X-Stat-Signature: 51owgxepormrsjr7dprgy58pqg199ec5 X-HE-Tag: 1706201042-192010 X-HE-Meta: X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Introduce a mechanism that allows an architecture to trigger a page fault, and add the infrastructure to handle that fault accordingly. To make use of this, an arch is expected to mark the page table entry as PAGE_NONE (which will cause a fault the next time it is accessed) and to implement an arch-specific method (like a software bit) for recognizing that the fault needs to be handled by the arch code. arm64 will use this approach to reserve tag storage for pages which are mapped in an MTE enabled VMA but for which the storage needed to hold the tags hasn't been reserved (for example, because of an mprotect(PROT_MTE) call on a VMA with existing pages). Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch. Split from patch #19 ("mm: mprotect: Introduce PAGE_FAULT_ON_ACCESS for mprotect(PROT_MTE)") (David Hildenbrand).
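For illustration, a sketch of the shape of the hooks an architecture selecting ARCH_HAS_FAULT_ON_ACCESS might provide; PTE_SW_FAULT_ON_ACCESS and arch_fixup_folio() are hypothetical names, and the handler follows the rules spelled out in the kernel-doc added by this patch (drop the elevated reference, and leave the fault lock alone unless returning VM_FAULT_RETRY or VM_FAULT_COMPLETED):

	/* Hypothetical software bit that marks a PAGE_NONE entry as arch-handled. */
	static inline bool arch_fault_on_access_pte(pte_t pte)
	{
		return pte_val(pte) & PTE_SW_FAULT_ON_ACCESS;
	}

	static inline vm_fault_t arch_handle_folio_fault_on_access(struct folio *folio,
								    struct vm_fault *vmf,
								    bool *map_pte)
	{
		int err = arch_fixup_folio(folio);	/* hypothetical, may sleep */

		folio_put(folio);			/* drop the elevated reference */
		if (err)
			return VM_FAULT_OOM;

		*map_pte = true;	/* let the generic handler map the pte again */
		return 0;
	}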
include/linux/huge_mm.h | 4 ++-- include/linux/pgtable.h | 47 +++++++++++++++++++++++++++++++++++-- mm/Kconfig | 3 +++ mm/huge_memory.c | 36 +++++++++++++++++++++-------- mm/memory.c | 51 ++++++++++++++++++++++++++--------------- 5 files changed, 109 insertions(+), 32 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 5adb86af35fc..4678a0a5e6a8 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -346,7 +346,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr, struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap); -vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf); +vm_fault_t handle_huge_pmd_protnone(struct vm_fault *vmf); extern struct page *huge_zero_page; extern unsigned long huge_zero_pfn; @@ -476,7 +476,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud, return NULL; } -static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) +static inline vm_fault_t handle_huge_pmd_protnone(struct vm_fault *vmf) { return 0; } diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 2d0f04042f62..81a21be855a2 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1455,7 +1455,7 @@ static inline int pud_trans_unstable(pud_t *pud) return 0; } -#ifndef CONFIG_NUMA_BALANCING +#if !defined(CONFIG_NUMA_BALANCING) && !defined(CONFIG_ARCH_HAS_FAULT_ON_ACCESS) /* * In an inaccessible (PROT_NONE) VMA, pte_protnone() may indicate "yes". It is * perfectly valid to indicate "no" in that case, which is why our default @@ -1477,7 +1477,50 @@ static inline int pmd_protnone(pmd_t pmd) { return 0; } -#endif /* CONFIG_NUMA_BALANCING */ +#endif /* !CONFIG_NUMA_BALANCING && !CONFIG_ARCH_HAS_FAULT_ON_ACCESS */ + +#ifndef CONFIG_ARCH_HAS_FAULT_ON_ACCESS +static inline bool arch_fault_on_access_pte(pte_t pte) +{ + return false; +} + +static inline bool arch_fault_on_access_pmd(pmd_t pmd) +{ + return false; +} + +/* + * The function is called with the fault lock held and an elevated reference on + * the folio. + * + * Rules that an arch implementation of the function must follow: + * + * 1. The function must return with the elevated reference dropped. + * + * 2. If the return value contains VM_FAULT_RETRY or VM_FAULT_COMPLETED then: + * + * - if FAULT_FLAG_RETRY_NOWAIT is not set, the function must return with the + * correct fault lock released, which can be accomplished with + * release_fault_lock(vmf). Note that release_fault_lock() doesn't check if + * FAULT_FLAG_RETRY_NOWAIT is set before releasing the mmap_lock. + * + * - if FAULT_FLAG_RETRY_NOWAIT is set, then the function must not release the + * mmap_lock. The flag should be set only if the mmap_lock is held. + * + * 3. If the return value contains neither of the above, the function must not + * release the fault lock; the generic fault handler will take care of releasing + * the correct lock. 
+ */ +static inline vm_fault_t arch_handle_folio_fault_on_access(struct folio *folio, + struct vm_fault *vmf, + bool *map_pte) +{ + *map_pte = false; + + return VM_FAULT_SIGBUS; +} +#endif #endif /* CONFIG_MMU */ diff --git a/mm/Kconfig b/mm/Kconfig index 341cf53898db..153df67221f1 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1006,6 +1006,9 @@ config IDLE_PAGE_TRACKING config ARCH_HAS_CACHE_LINE_SIZE bool +config ARCH_HAS_FAULT_ON_ACCESS + bool + config ARCH_HAS_CURRENT_STACK_POINTER bool help diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 94ef5c02b459..2bad63a7ec16 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1698,7 +1698,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, } /* NUMA hinting page fault entry point for trans huge pmds */ -vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) +vm_fault_t handle_huge_pmd_protnone(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; pmd_t oldpmd = vmf->orig_pmd; @@ -1708,6 +1708,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) int nid = NUMA_NO_NODE; int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK); bool migrated = false, writable = false; + vm_fault_t ret; int flags = 0; vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); @@ -1731,6 +1732,20 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) if (!folio) goto out_map; + folio_get(folio); + vma_set_access_pid_bit(vma); + + if (arch_fault_on_access_pmd(oldpmd)) { + bool map_pte = false; + + spin_unlock(vmf->ptl); + ret = arch_handle_folio_fault_on_access(folio, vmf, &map_pte); + if (ret || !map_pte) + return ret; + writable = false; + goto out_lock_and_map; + } + /* See similar comment in do_numa_page for explanation */ if (!writable) flags |= TNF_NO_GROUP; @@ -1755,15 +1770,18 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) if (migrated) { flags |= TNF_MIGRATED; nid = target_nid; - } else { - flags |= TNF_MIGRATE_FAIL; - vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); - if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) { - spin_unlock(vmf->ptl); - goto out; - } - goto out_map; + goto out; + } + + flags |= TNF_MIGRATE_FAIL; + +out_lock_and_map: + vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); + if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) { + spin_unlock(vmf->ptl); + goto out; } + goto out_map; out: if (nid != NUMA_NO_NODE) diff --git a/mm/memory.c b/mm/memory.c index 8a421e168b57..110fe2224277 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4886,11 +4886,6 @@ static vm_fault_t do_fault(struct vm_fault *vmf) int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma, unsigned long addr, int page_nid, int *flags) { - folio_get(folio); - - /* Record the current PID acceesing VMA */ - vma_set_access_pid_bit(vma); - count_vm_numa_event(NUMA_HINT_FAULTS); if (page_nid == numa_node_id()) { count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL); @@ -4900,13 +4895,14 @@ int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma, return mpol_misplaced(folio, vma, addr); } -static vm_fault_t do_numa_page(struct vm_fault *vmf) +static vm_fault_t handle_pte_protnone(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; struct folio *folio = NULL; int nid = NUMA_NO_NODE; bool writable = false; int last_cpupid; + vm_fault_t ret; int target_nid; pte_t pte, old_pte; int flags = 0; @@ -4939,6 +4935,20 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) if (!folio || folio_is_zone_device(folio)) goto out_map; + folio_get(folio); + /* Record the current PID acceesing VMA */ + vma_set_access_pid_bit(vma); + + if 
(arch_fault_on_access_pte(old_pte)) { + bool map_pte = false; + + pte_unmap_unlock(vmf->pte, vmf->ptl); + ret = arch_handle_folio_fault_on_access(folio, vmf, &map_pte); + if (ret || !map_pte) + return ret; + goto out_lock_and_map; + } + /* TODO: handle PTE-mapped THP */ if (folio_test_large(folio)) goto out_map; @@ -4983,18 +4993,21 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) if (migrate_misplaced_folio(folio, vma, target_nid)) { nid = target_nid; flags |= TNF_MIGRATED; - } else { - flags |= TNF_MIGRATE_FAIL; - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, - vmf->address, &vmf->ptl); - if (unlikely(!vmf->pte)) - goto out; - if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) { - pte_unmap_unlock(vmf->pte, vmf->ptl); - goto out; - } - goto out_map; + goto out; + } + + flags |= TNF_MIGRATE_FAIL; + +out_lock_and_map: + vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, + vmf->address, &vmf->ptl); + if (unlikely(!vmf->pte)) + goto out; + if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) { + pte_unmap_unlock(vmf->pte, vmf->ptl); + goto out; } + goto out_map; out: if (nid != NUMA_NO_NODE) @@ -5151,7 +5164,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) return do_swap_page(vmf); if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma)) - return do_numa_page(vmf); + return handle_pte_protnone(vmf); spin_lock(vmf->ptl); entry = vmf->orig_pte; @@ -5272,7 +5285,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma, } if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) { if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma)) - return do_huge_pmd_numa_page(&vmf); + return handle_huge_pmd_protnone(&vmf); if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) && !pmd_write(vmf.orig_pmd)) { From patchwork Thu Jan 25 16:42:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6679DC47258 for ; Thu, 25 Jan 2024 16:44:11 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id F03928D0005; Thu, 25 Jan 2024 11:44:10 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E8ABC8D0002; Thu, 25 Jan 2024 11:44:10 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CDDB98D0005; Thu, 25 Jan 2024 11:44:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id B57028D0002 for ; Thu, 25 Jan 2024 11:44:10 -0500 (EST) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 97AB8A210B for ; Thu, 25 Jan 2024 16:44:10 +0000 (UTC) X-FDA: 81718405860.03.A6E9BCE Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf14.hostedemail.com (Postfix) with ESMTP id B9327100018 for ; Thu, 25 Jan 2024 16:44:08 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=none; spf=pass (imf14.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; 
d=hostedemail.com; s=arc-20220608; t=1706201048; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=iqltK2Y9d/2ow3+g4elSrmbFiSkngQ1GfY2zIASVBTo=; b=akuikfp4ZKbFfmxUhpWY1L89a83jvddTSeHv9k+h1zzl1bIV3zJHKQUHd2VfrtM4f4xQhK lQPaS85Ogg8IM9jHOETNud5gMCOkuhKUAkKqvRx7gxqUrh8mfDLYMbvGLFlvTykwQMkFma 8OpYpvi05g1nUbR0ukBkg3FouUSAazk= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201048; a=rsa-sha256; cv=none; b=XzUHYeEutfFc7cyXnOpJdrYHXsTazfCkXJfQQyY7fHWUX5asLu7UhuSKuEwtzk9fRIUZIW S/ntZjKkZ5BSt1h5auUFgXPyqWVZnBPkxbfTtkgy2dKh+e8ExzmJgYnuBvFvUkRiDmEwO/ qPnqZNMN7pa/HASY3GrZXUv/8zdi+uk= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=none; spf=pass (imf14.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7FAFC15A1; Thu, 25 Jan 2024 08:44:52 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8E70C3F5A1; Thu, 25 Jan 2024 08:44:02 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 14/35] of: fdt: Return the region size in of_flat_dt_translate_address() Date: Thu, 25 Jan 2024 16:42:35 +0000 Message-Id: <20240125164256.4147-15-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: B9327100018 X-Rspam-User: X-Stat-Signature: mzco7rb8xaah1bochnt94uokcpewo8s5 X-Rspamd-Server: rspam03 X-HE-Tag: 1706201048-271379 X-HE-Meta: 
U2FsdGVkX18jU4ZJJPkm49kgkK1HPVjYy/Z7ynpKtQe2DMKMN/yKtAYFdNzI4I04UxJ3k/ooUbbKlPlUorcx3d/wVrQlnZZOXweAvqgj6swKuEnicy94Fx+NI2OWIWg3ErSrZLNmdai/8Gsv94t9mjsa7dEL6v0IY3HU32O5+ZMAdQt/rya0U7Te4Ac4XP8ABbiIvf7LTtSoDjt4bPHdJOXQjXvzlymDYdugq4XdKiFRlSzqt+9Dir3E1bLIHREcxF6K9fpXUKL/jEkE/plLAh9l83VBK+sjU0O6+QkGkNZAWdS2l24yRpPZYJSuto6d91lzdAMnntILxSnE4F//Vp/isFEiv3Hs+Ed2CngJDrMiKU6/UUEEZ+lK/eMkayrJlZtlBKgvV/Bws4qAPLkwFMHOK0n3gQrljYSp6n+jtHOdOLLtgT5K3wRVmxHUiFaqNiYNAMfZPz+Ar0MMa0UgxfqHvhND8e5nxDoo1sjEwrxNI9or1LgTxQQwXtwIL9ClgRw9DrDxLj6QJD9X3J9vNuCyaJCaXUM4YrMwYwchTFyGzcc4bJ92jA6FlIq93hvGD26qPlPXMtgEvIZw9u/iYwIGY0DN60Mlh4wOi7e0873P0f10Az4WSaYu6t36XGiqOjmtXi087n1TdOqScVJibI4A2zcHgzqqzexlFiwsuXCHPvJGCPE0O4yQw/qpng8Y4QlNjgnTWG8pGLYP6B0jrAlYmQLg5Jmb9XO8mQmlbf/iMyPL+YIUT76ob7OalH7X1JvONbH17wgDoPe+ZoQsbbAzuYJqAxY9W8sfnHHT0W3hgaCeP0eBcbdURXOOFYkNQwJJ1anbqE332dAqAIfQP0NxWDZBgGzCZ+GOldWetK9WcWcY95C73UEVsMHO9MpS9gQUipoTH9FsLgf/CK4d0qmwgoiTqVwBlZn62I57GqTyuSZsZueVp3T4LimAxn/AStTd1m6hxkBkyFHhZvv 3bq+Gug1 fPH6SUhOZMMg+s9t2m/IXjKn+kYA8jprFQlODVQw7EqxfQLPw/RHOW/+1UDm31yrm0BMa78Kcy9G9RorPq87OjGJ6/34ZLzX7lTFRaUQ/YCPviZiry6y0Tc0jo7eLFKiWjKPOiJ6zErhfFvr+WVdBHoT2tVQFl6PALr4Z4PEZslg7tPBXnK32Mz/aL0UI/8pCsCm3BDy8tQWGIls= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Alongside the base address, arm64 will also need to know the size of a tag storage region. Teach of_flat_dt_translate_address() to parse and return the size. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch, suggested by Rob Herring. arch/sh/kernel/cpu/sh2/probe.c | 2 +- drivers/of/fdt_address.c | 12 +++++++++--- drivers/tty/serial/earlycon.c | 2 +- include/linux/of_fdt.h | 2 +- 4 files changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/sh/kernel/cpu/sh2/probe.c b/arch/sh/kernel/cpu/sh2/probe.c index 70a07f4f2142..fa8904e8f390 100644 --- a/arch/sh/kernel/cpu/sh2/probe.c +++ b/arch/sh/kernel/cpu/sh2/probe.c @@ -21,7 +21,7 @@ static int __init scan_cache(unsigned long node, const char *uname, if (!of_flat_dt_is_compatible(node, "jcore,cache")) return 0; - j2_ccr_base = ioremap(of_flat_dt_translate_address(node), 4); + j2_ccr_base = ioremap(of_flat_dt_translate_address(node, NULL), 4); return 1; } diff --git a/drivers/of/fdt_address.c b/drivers/of/fdt_address.c index 1dc15ab78b10..4c077778d710 100644 --- a/drivers/of/fdt_address.c +++ b/drivers/of/fdt_address.c @@ -160,7 +160,8 @@ static int __init fdt_translate_one(const void *blob, int parent, * that can be mapped to a cpu physical address). This is not really specified * that way, but this is traditionally the way IBM at least do things */ -static u64 __init fdt_translate_address(const void *blob, int node_offset) +static u64 __init fdt_translate_address(const void *blob, int node_offset, + u64 *out_size) { int parent, len; const struct of_bus *bus, *pbus; @@ -193,6 +194,9 @@ static u64 __init fdt_translate_address(const void *blob, int node_offset) goto bail; } memcpy(addr, reg, na * 4); + /* The size of the region doesn't need translating. 
*/ + if (out_size) + *out_size = of_read_number(reg + na, ns); pr_debug("bus (na=%d, ns=%d) on %s\n", na, ns, fdt_get_name(blob, parent, NULL)); @@ -242,8 +246,10 @@ static u64 __init fdt_translate_address(const void *blob, int node_offset) /** * of_flat_dt_translate_address - translate DT addr into CPU phys addr * @node: node in the flat blob + * @out_size: size of the region, can be NULL if not needed + * @return: the address, OF_BAD_ADDR in case of error */ -u64 __init of_flat_dt_translate_address(unsigned long node) +u64 __init of_flat_dt_translate_address(unsigned long node, u64 *out_size) { - return fdt_translate_address(initial_boot_params, node); + return fdt_translate_address(initial_boot_params, node, out_size); } diff --git a/drivers/tty/serial/earlycon.c b/drivers/tty/serial/earlycon.c index a5fbb6ed38ae..e941cf786232 100644 --- a/drivers/tty/serial/earlycon.c +++ b/drivers/tty/serial/earlycon.c @@ -265,7 +265,7 @@ int __init of_setup_earlycon(const struct earlycon_id *match, spin_lock_init(&port->lock); port->iotype = UPIO_MEM; - addr = of_flat_dt_translate_address(node); + addr = of_flat_dt_translate_address(node, NULL); if (addr == OF_BAD_ADDR) { pr_warn("[%s] bad address\n", match->name); return -ENXIO; diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h index d69ad5bb1eb1..0e26f8c3b10e 100644 --- a/include/linux/of_fdt.h +++ b/include/linux/of_fdt.h @@ -36,7 +36,7 @@ extern char __dtb_start[]; extern char __dtb_end[]; /* Other Prototypes */ -extern u64 of_flat_dt_translate_address(unsigned long node); +extern u64 of_flat_dt_translate_address(unsigned long node, u64 *out_size); extern void of_fdt_limit_memory(int limit); #endif /* CONFIG_OF_FLATTREE */ From patchwork Thu Jan 25 16:42:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531332 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4AB0C48260 for ; Thu, 25 Jan 2024 16:44:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4523C8D0008; Thu, 25 Jan 2024 11:44:16 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3DBE28D0002; Thu, 25 Jan 2024 11:44:16 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 255E38D0008; Thu, 25 Jan 2024 11:44:16 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 1356C8D0002 for ; Thu, 25 Jan 2024 11:44:16 -0500 (EST) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id E70E2A1B29 for ; Thu, 25 Jan 2024 16:44:15 +0000 (UTC) X-FDA: 81718406070.03.DCD2B4B Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf01.hostedemail.com (Postfix) with ESMTP id 787EB40002 for ; Thu, 25 Jan 2024 16:44:14 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf01.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201054; 
h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=NpYeGi9fgLVXYL5ZAmgZfJeaprEEibWMkt8xw+gQ3Wc=; b=qInN3bm5szjdB19/rtYy50mCPL08ZTPMMnJUWF/WpwxKTehCSGWLZye1bdIpNDlbO5e90G lc7HyA063HvITzfVb7BL4+SZG+P9Lsdzn5+rHaaK+BPXdlu9EJZHrGnz5VbejeJ5iP16IS KcK8OeRPjDRxeRd/UrkebQV+QsaRy9Y= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf01.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201054; a=rsa-sha256; cv=none; b=6fHYYZRAW8rbGxp+witHJoyjzZscr+M+POzZYUVd6HSCmPyTWMF9hjA5lzgEJhyIjsNczD VsVlGDSD0Uo9oHlovp0gEMD+oepttTDX58t5uW1J/+emiN+zucgfQI04LE/08wzVODeWJc 2W/BzI6guCjZ32cp7odNqT6hawC58+s= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 48AE015BF; Thu, 25 Jan 2024 08:44:58 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5846F3F5A1; Thu, 25 Jan 2024 08:44:08 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 15/35] of: fdt: Add of_flat_read_u32() Date: Thu, 25 Jan 2024 16:42:36 +0000 Message-Id: <20240125164256.4147-16-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 787EB40002 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: 6a3be688gu5ekh8qrsgbpfq8xy8kpwy4 X-HE-Tag: 1706201054-599783 X-HE-Meta: 
U2FsdGVkX1+kDH5BjjUOLqPUR16ukDJ+wZ16fNoRbYtjW5LjjfD9CAEQPJd5HwyX1oHnoOSg/j9slYouUoo4DI5Iu5E6cGdpAqPFu7O+1Hk0GvfnQSnS7gXBfH/UTEr0PlR8XsLbMGxFIRnVgZI0zVgxwewbtgoGzmUF7W/EzSTewIIUp4BjppEfhX2w8DpXaXox2j2zmzL5LZ3veeqJqqgpum+iJgsYAOuCEDmQxzmPtQ58gIHgRTNqBMM0bpdM/Lp3ioCJjgQJ8aAnCi1n3SjNosTdl0Nx7mZnZ6HcJqeeEYnvkmv3f6IieeeOotDhbQQaculD6qjrIOP4l48E0qHTWwQEvjsBP70vjDK1PtFv4hoOLzYv/f5MkFsPZgsA9SHZxwOv4gKizw/z4x0ZSAISPW3aZtog3uEXkJmze6iRFyThwpJ5as8xJA0fHNQNuLIoRCuG/RwxYjX9ZaWjwdvV3XYTxVbQ+oq/i9EJrqwmTjKuLjPjq5i/Iwsx88Qn/OJz5Fi/Vi4CSJgWoMw9ymn+ybDi2LLSUSIwSbKxZ7V1GozPbqYzPxgf0cAcQ/OQ1YkCSF1uiOtCYD2AuNlFMjhz6fvKoxl3PUfr8oQ9HCcBuVkF1kNZUDqrNrkXO4eWnh4uWnxxXJ/WgEKA1uPBZARR3ccmGCeeFCb1mIl5gBW1uzRPxqRdOB6iEUmFq5uJxMNEdHDvJfJAZjgPJyadrsFiTtkDXnZTkT+v7gphNmDVHZs/6u763ExXoSymUacQX3A2xqFe+EWOFDmCv3A38LBEjNhd6xot8G2Q3bk/h2u4uUvofKw2jaeAemRbAnN70GWSN1OrN0XXat2hYzP8syHNhnycJsVingWhwtoWg358Ll7eJvIc6hqM00eNp+0056Y2XP0W0UU= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Add the function of_flat_read_u32() to return the value of a property as an u32. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch, suggested by Rob Herring. drivers/of/fdt.c | 21 +++++++++++++++++++++ include/linux/of_fdt.h | 2 ++ 2 files changed, 23 insertions(+) diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index bf502ba8da95..dfcd79fd5fd9 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -755,6 +755,27 @@ const void *__init of_get_flat_dt_prop(unsigned long node, const char *name, return fdt_getprop(initial_boot_params, node, name, size); } +/* + * of_flat_read_u32 - Return the value of the given property as an u32. + * + * @node: device node from which the property value is to be read + * @propname: name of the property + * @out_value: the value of the property + * @return: 0 on success, -EINVAL if property does not exist + */ +int __init of_flat_read_u32(unsigned long node, const char *propname, + u32 *out_value) +{ + const __be32 *reg; + + reg = of_get_flat_dt_prop(node, propname, NULL); + if (!reg) + return -EINVAL; + + *out_value = be32_to_cpup(reg); + return 0; +} + /** * of_fdt_is_compatible - Return true if given node from the given blob has * compat in its compatible list diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h index 0e26f8c3b10e..d7901699061b 100644 --- a/include/linux/of_fdt.h +++ b/include/linux/of_fdt.h @@ -57,6 +57,8 @@ extern const void *of_get_flat_dt_prop(unsigned long node, const char *name, extern int of_flat_dt_is_compatible(unsigned long node, const char *name); extern unsigned long of_get_flat_dt_root(void); extern uint32_t of_get_flat_dt_phandle(unsigned long node); +extern int of_flat_read_u32(unsigned long node, const char *propname, + u32 *out_value); extern int early_init_dt_scan_chosen(char *cmdline); extern int early_init_dt_scan_memory(void); From patchwork Thu Jan 25 16:42:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B55F8C47258 for ; Thu, 25 Jan 2024 16:44:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 564F76B007B; Thu, 25 Jan 2024 11:44:22 -0500 (EST) Received: by 
kanga.kvack.org (Postfix, from userid 40) id 4EDAB6B0098; Thu, 25 Jan 2024 11:44:22 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 38E7B6B009B; Thu, 25 Jan 2024 11:44:22 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 2385A6B007B for ; Thu, 25 Jan 2024 11:44:22 -0500 (EST) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id E5A72A1B4D for ; Thu, 25 Jan 2024 16:44:21 +0000 (UTC) X-FDA: 81718406322.12.44DEAC0 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf13.hostedemail.com (Postfix) with ESMTP id 4676920009 for ; Thu, 25 Jan 2024 16:44:20 +0000 (UTC) Authentication-Results: imf13.hostedemail.com; dkim=none; spf=pass (imf13.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201060; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=kMWL/IvSQJqDXce5c4oy93Fth3a3FVCxaa93iLMEuZQ=; b=ewMvR7JAugqFEz0RjQZDG6Zg/53K14MdEzGi5+8w/aq0grxRja8J4wI+CZmHpD9tVQSSoN 0tbyUU9rKTPNDDKxmSFQmmAdl0YQGnolGFppVoZ5vZWEap+Sc5mkq3ro5nNU4hyDpN0Wde 35dDQtxNgc+5blJQsmgujficMA9jQkU= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201060; a=rsa-sha256; cv=none; b=4JtiIp2HZOZSj9Re91conNSyI4tDTggkNDhvYznkQ07iBa0enJEjoOLnk5PSrYeVqiUut0 dtXGotqxyIv3D0Iq2NjynuW5ypEfPTkUEFt1Cs1qUmXzk6vQu3xLhldbc3iGbIakug4BUT V1EoJEOTneDW1HfwJ8SroY0xNC85ZEA= ARC-Authentication-Results: i=1; imf13.hostedemail.com; dkim=none; spf=pass (imf13.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1168115DB; Thu, 25 Jan 2024 08:45:04 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 211FF3F5A1; Thu, 25 Jan 2024 08:44:13 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 16/35] KVM: arm64: Don't deny VM_PFNMAP VMAs when kvm_has_mte() Date: Thu, 25 Jan 2024 16:42:37 +0000 
Message-Id: <20240125164256.4147-17-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>
MIME-Version: 1.0

According to ARM DDI 0487J.a, page D10-5976, a memory location which doesn't have the Normal memory attribute is considered Untagged, and accesses to it are Tag Unchecked. Tag reads from an Untagged address return 0b0000, and tag writes are ignored.

Linux uses VM_PFNMAP VMAs to represent device memory, and doesn't set the VM_MTE_ALLOWED flag for these VMAs.

In user_mem_abort(), KVM requires that all VMAs that back guest memory allow tagging (have the VM_MTE_ALLOWED flag set), except for VMAs that represent device memory. When a memslot is created or changed, KVM enforces a different behaviour: **all** VMAs that intersect the memslot must allow tagging, even those that represent device memory. This is too restrictive, and can lead to inconsistent behaviour: a VM_PFNMAP VMA that is present when a memslot is created causes KVM_SET_USER_MEMORY_REGION to fail, but if such a VMA is created after the memslot has been created, the virtual machine will run without errors.

Change kvm_arch_prepare_memory_region() to allow VM_PFNMAP VMAs when the VM has the MTE capability enabled.

Signed-off-by: Alexandru Elisei
---
Changes from rfc v2:
* New patch. It's a fix, and can be taken independently of the series.
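For illustration, here is roughly how the inconsistency looks from a VMM's point of view. This is not code from this series; error handling is omitted, and vm_fd, vfio_fd and the guest physical address are placeholders.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

static int map_bar_into_guest(int vm_fd, int vfio_fd, size_t bar_size)
{
	/* mmap() of a device BAR gives a VM_PFNMAP VMA without VM_MTE_ALLOWED. */
	void *bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE, MAP_SHARED,
			 vfio_fd, 0);

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0x10000000,
		.memory_size = bar_size,
		.userspace_addr = (uintptr_t)bar,
	};

	/*
	 * With MTE enabled for the VM, this fails with EINVAL before this
	 * patch, because the VM_PFNMAP VMA already exists when the memslot is
	 * created. Registering the memslot first and only then mmap()'ing the
	 * BAR at that address is accepted, and the guest runs without errors.
	 */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}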
arch/arm64/kvm/mmu.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index d14504821b79..b7517c4a19c4 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -2028,17 +2028,15 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, if (!vma) break; - if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) { - ret = -EINVAL; - break; - } - if (vma->vm_flags & VM_PFNMAP) { /* IO region dirty page logging not allowed */ if (new->flags & KVM_MEM_LOG_DIRTY_PAGES) { ret = -EINVAL; break; } + } else if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) { + ret = -EINVAL; + break; } hva = min(reg_end, vma->vm_end); } while (hva < reg_end); From patchwork Thu Jan 25 16:42:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531334 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A1459C47258 for ; Thu, 25 Jan 2024 16:44:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 31AFE6B009B; Thu, 25 Jan 2024 11:44:28 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 2A472280002; Thu, 25 Jan 2024 11:44:28 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0F7128D0002; Thu, 25 Jan 2024 11:44:28 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id ECA196B009B for ; Thu, 25 Jan 2024 11:44:27 -0500 (EST) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id B890E16025E for ; Thu, 25 Jan 2024 16:44:27 +0000 (UTC) X-FDA: 81718406574.12.825A6F4 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf06.hostedemail.com (Postfix) with ESMTP id 23CFA180002 for ; Thu, 25 Jan 2024 16:44:25 +0000 (UTC) Authentication-Results: imf06.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf06.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201066; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RV1JNba78ld8zwYsEQ4JQjY59vUq0KWCH1X7tlbthmY=; b=iVYTsepcYGexgmg0RAj6UT8DrYZm8e2UJ4jOEJoEPivQ/TqSRKGHVGZEgH03JsttcnPib8 dFStATw+QzcOuyf+DVshnxzO9H8gX9y+FGxOEnAGEY/yDyaw16D89V2sOt6w6ZkFc9joUJ cBL8O6VW7cR0YXGpLy2v0HtfDCmKpmE= ARC-Authentication-Results: i=1; imf06.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf06.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201066; a=rsa-sha256; cv=none; b=oDjY9/mG+QowTjoEtyOIgQbXium7pqMnp2LtpsnCy1IgeYcoQ/GtMUpW+B5afpJk8Yww9f S38USEhAr/cuJtxpZ+IFvCH0VjXsz7Su/Jah2Ask36d91X8f56zxB/3pHUDomEueebrPbp E3sPRRps5o2bSMdwDiHVyfHBiqmD0/4= Received: from usa-sjc-imap-foss1.foss.arm.com 
(unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CF4481650; Thu, 25 Jan 2024 08:45:09 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DFFF33F5A1; Thu, 25 Jan 2024 08:44:19 -0800 (PST)
From: Alexandru Elisei
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 17/35] arm64: mte: Rework naming for tag manipulation functions
Date: Thu, 25 Jan 2024 16:42:38 +0000
Message-Id: <20240125164256.4147-18-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>
MIME-Version: 1.0

The tag save/restore/copy functions could be more explicit about where the tags are coming from and where they are being copied to. Rename the functions to make it easier to understand what they are doing:

- Rename the mte_clear_page_tags() 'addr' parameter to 'page_addr', to match the other functions that take a page address as a parameter.

- Rename mte_save/restore_tags() to mte_save/restore_page_tags_by_swp_entry() to make it clear that the tags are saved in a collection indexed by swp_entry (this will become important when they are also saved in a collection indexed by page pfn). The same applies to mte_invalidate_tags{,_area}_by_swp_entry().
- Rename mte_save/restore_page_tags() to make it clear where the tags are going to be saved, respectively from where they are restored - in a previously allocated memory buffer, not in an xarray, like when the tags are saved when swapping. Rename the action to 'copy' instead of 'save'/'restore' to match the copy from user functions, which also copy tags to memory.

- Rename mte_allocate/free_tag_storage() to mte_allocate/free_tag_buf() to make it clear the functions have nothing to do with the memory where the corresponding tags for a page live. Change the parameter type for mte_free_tag_buf() to be void *, to match the return value of mte_allocate_tag_buf(). Also do that because that memory is opaque and it is not meant to be directly dereferenced.

In the name of consistency rename local variables from tag_storage to tags. Give a similar treatment to the hibernation code that saves and restores the tags for all tagged pages.

In the same spirit, rename MTE_PAGE_TAG_STORAGE to MTE_PAGE_TAG_STORAGE_SIZE to make it clear that it relates to the size of the memory needed to save the tags for a page. Opportunistically rename MTE_TAG_SIZE to MTE_TAG_SIZE_BITS to make it clear it is measured in bits, not bytes, like the rest of the size variables from the same header file.

Signed-off-by: Alexandru Elisei
---
arch/arm64/include/asm/mte-def.h | 16 +++++----- arch/arm64/include/asm/mte.h | 23 +++++++++------ arch/arm64/include/asm/pgtable.h | 8 ++--- arch/arm64/kernel/elfcore.c | 14 ++++----- arch/arm64/kernel/hibernate.c | 46 ++++++++++++++--------------- arch/arm64/lib/mte.S | 18 ++++++------ arch/arm64/mm/mteswap.c | 50 ++++++++++++++++---------------- 7 files changed, 90 insertions(+), 85 deletions(-) diff --git a/arch/arm64/include/asm/mte-def.h b/arch/arm64/include/asm/mte-def.h index 14ee86b019c2..eb0d76a6bdcf 100644 --- a/arch/arm64/include/asm/mte-def.h +++ b/arch/arm64/include/asm/mte-def.h @@ -5,14 +5,14 @@ #ifndef __ASM_MTE_DEF_H #define __ASM_MTE_DEF_H -#define MTE_GRANULE_SIZE UL(16) -#define MTE_GRANULE_MASK (~(MTE_GRANULE_SIZE - 1)) -#define MTE_GRANULES_PER_PAGE (PAGE_SIZE / MTE_GRANULE_SIZE) -#define MTE_TAG_SHIFT 56 -#define MTE_TAG_SIZE 4 -#define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT) -#define MTE_PAGE_TAG_STORAGE (MTE_GRANULES_PER_PAGE * MTE_TAG_SIZE / 8) +#define MTE_GRANULE_SIZE UL(16) +#define MTE_GRANULE_MASK (~(MTE_GRANULE_SIZE - 1)) +#define MTE_GRANULES_PER_PAGE (PAGE_SIZE / MTE_GRANULE_SIZE) +#define MTE_TAG_SHIFT 56 +#define MTE_TAG_SIZE_BITS 4 +#define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE_BITS - 1)), MTE_TAG_SHIFT) +#define MTE_PAGE_TAG_STORAGE_SIZE (MTE_GRANULES_PER_PAGE * MTE_TAG_SIZE_BITS / 8) -#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n" +#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n" #endif /* __ASM_MTE_DEF_H */ diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index 91fbd5c8a391..8034695b3dd7 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -18,19 +18,24 @@ #include -void mte_clear_page_tags(void *addr); +void mte_clear_page_tags(void *page_addr); + unsigned long mte_copy_tags_from_user(void *to, const void __user *from, unsigned long n); unsigned long mte_copy_tags_to_user(void __user *to, void *from, unsigned long n); -int mte_save_tags(struct page *page); -void mte_save_page_tags(const void *page_addr, void *tag_storage); -void mte_restore_tags(swp_entry_t entry, struct page *page); -void mte_restore_page_tags(void
*page_addr, const void *tag_storage); -void mte_invalidate_tags(int type, pgoff_t offset); -void mte_invalidate_tags_area(int type); -void *mte_allocate_tag_storage(void); -void mte_free_tag_storage(char *storage); + +int mte_save_page_tags_by_swp_entry(struct page *page); +void mte_restore_page_tags_by_swp_entry(swp_entry_t entry, struct page *page); + +void mte_copy_page_tags_to_buf(const void *page_addr, void *to); +void mte_copy_page_tags_from_buf(void *page_addr, const void *from); + +void mte_invalidate_tags_by_swp_entry(int type, pgoff_t offset); +void mte_invalidate_tags_area_by_swp_entry(int type); + +void *mte_allocate_tag_buf(void); +void mte_free_tag_buf(void *buf); #ifdef CONFIG_ARM64_MTE diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 08f0904dbfc2..2499cc4fa4f2 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1045,7 +1045,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma, static inline int arch_prepare_to_swap(struct page *page) { if (system_supports_mte()) - return mte_save_tags(page); + return mte_save_page_tags_by_swp_entry(page); return 0; } @@ -1053,20 +1053,20 @@ static inline int arch_prepare_to_swap(struct page *page) static inline void arch_swap_invalidate_page(int type, pgoff_t offset) { if (system_supports_mte()) - mte_invalidate_tags(type, offset); + mte_invalidate_tags_by_swp_entry(type, offset); } static inline void arch_swap_invalidate_area(int type) { if (system_supports_mte()) - mte_invalidate_tags_area(type); + mte_invalidate_tags_area_by_swp_entry(type); } #define __HAVE_ARCH_SWAP_RESTORE static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) { if (system_supports_mte()) - mte_restore_tags(entry, &folio->page); + mte_restore_page_tags_by_swp_entry(entry, &folio->page); } #endif /* CONFIG_ARM64_MTE */ diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c index 2e94d20c4ac7..e9ae00dacad8 100644 --- a/arch/arm64/kernel/elfcore.c +++ b/arch/arm64/kernel/elfcore.c @@ -17,7 +17,7 @@ static unsigned long mte_vma_tag_dump_size(struct core_vma_metadata *m) { - return (m->dump_size >> PAGE_SHIFT) * MTE_PAGE_TAG_STORAGE; + return (m->dump_size >> PAGE_SHIFT) * MTE_PAGE_TAG_STORAGE_SIZE; } /* Derived from dump_user_range(); start/end must be page-aligned */ @@ -38,7 +38,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm, * have been all zeros. 
*/ if (!page) { - dump_skip(cprm, MTE_PAGE_TAG_STORAGE); + dump_skip(cprm, MTE_PAGE_TAG_STORAGE_SIZE); continue; } @@ -48,12 +48,12 @@ static int mte_dump_tag_range(struct coredump_params *cprm, */ if (!page_mte_tagged(page)) { put_page(page); - dump_skip(cprm, MTE_PAGE_TAG_STORAGE); + dump_skip(cprm, MTE_PAGE_TAG_STORAGE_SIZE); continue; } if (!tags) { - tags = mte_allocate_tag_storage(); + tags = mte_allocate_tag_buf(); if (!tags) { put_page(page); ret = 0; @@ -61,16 +61,16 @@ static int mte_dump_tag_range(struct coredump_params *cprm, } } - mte_save_page_tags(page_address(page), tags); + mte_copy_page_tags_to_buf(page_address(page), tags); put_page(page); - if (!dump_emit(cprm, tags, MTE_PAGE_TAG_STORAGE)) { + if (!dump_emit(cprm, tags, MTE_PAGE_TAG_STORAGE_SIZE)) { ret = 0; break; } } if (tags) - mte_free_tag_storage(tags); + mte_free_tag_buf(tags); return ret; } diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 02870beb271e..a3b0e7b32457 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -215,41 +215,41 @@ static int create_safe_exec_page(void *src_start, size_t length, #ifdef CONFIG_ARM64_MTE -static DEFINE_XARRAY(mte_pages); +static DEFINE_XARRAY(tags_by_pfn); -static int save_tags(struct page *page, unsigned long pfn) +static int save_page_tags_by_pfn(struct page *page, unsigned long pfn) { - void *tag_storage, *ret; + void *tags, *ret; - tag_storage = mte_allocate_tag_storage(); - if (!tag_storage) + tags = mte_allocate_tag_buf(); + if (!tags) return -ENOMEM; - mte_save_page_tags(page_address(page), tag_storage); + mte_copy_page_tags_to_buf(page_address(page), tags); - ret = xa_store(&mte_pages, pfn, tag_storage, GFP_KERNEL); + ret = xa_store(&tags_by_pfn, pfn, tags, GFP_KERNEL); if (WARN(xa_is_err(ret), "Failed to store MTE tags")) { - mte_free_tag_storage(tag_storage); + mte_free_tag_buf(tags); return xa_err(ret); } else if (WARN(ret, "swsusp: %s: Duplicate entry", __func__)) { - mte_free_tag_storage(ret); + mte_free_tag_buf(ret); } return 0; } -static void swsusp_mte_free_storage(void) +static void swsusp_mte_free_tags(void) { - XA_STATE(xa_state, &mte_pages, 0); + XA_STATE(xa_state, &tags_by_pfn, 0); void *tags; - xa_lock(&mte_pages); + xa_lock(&tags_by_pfn); xas_for_each(&xa_state, tags, ULONG_MAX) { - mte_free_tag_storage(tags); + mte_free_tag_buf(tags); } - xa_unlock(&mte_pages); + xa_unlock(&tags_by_pfn); - xa_destroy(&mte_pages); + xa_destroy(&tags_by_pfn); } static int swsusp_mte_save_tags(void) @@ -273,9 +273,9 @@ static int swsusp_mte_save_tags(void) if (!page_mte_tagged(page)) continue; - ret = save_tags(page, pfn); + ret = save_page_tags_by_pfn(page, pfn); if (ret) { - swsusp_mte_free_storage(); + swsusp_mte_free_tags(); goto out; } @@ -290,25 +290,25 @@ static int swsusp_mte_save_tags(void) static void swsusp_mte_restore_tags(void) { - XA_STATE(xa_state, &mte_pages, 0); + XA_STATE(xa_state, &tags_by_pfn, 0); int n = 0; void *tags; - xa_lock(&mte_pages); + xa_lock(&tags_by_pfn); xas_for_each(&xa_state, tags, ULONG_MAX) { unsigned long pfn = xa_state.xa_index; struct page *page = pfn_to_online_page(pfn); - mte_restore_page_tags(page_address(page), tags); + mte_copy_page_tags_from_buf(page_address(page), tags); - mte_free_tag_storage(tags); + mte_free_tag_buf(tags); n++; } - xa_unlock(&mte_pages); + xa_unlock(&tags_by_pfn); pr_info("Restored %d MTE pages\n", n); - xa_destroy(&mte_pages); + xa_destroy(&tags_by_pfn); } #else /* CONFIG_ARM64_MTE */ diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 
5018ac03b6bf..9f623e9da09f 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -119,7 +119,7 @@ SYM_FUNC_START(mte_copy_tags_to_user) cbz x2, 2f 1: ldg x4, [x1] - ubfx x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE + ubfx x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE_BITS USER(2f, sttrb w4, [x0]) add x0, x0, #1 add x1, x1, #MTE_GRANULE_SIZE @@ -132,11 +132,11 @@ USER(2f, sttrb w4, [x0]) SYM_FUNC_END(mte_copy_tags_to_user) /* - * Save the tags in a page + * Copy the tags in a page to a buffer * x0 - page address - * x1 - tag storage, MTE_PAGE_TAG_STORAGE bytes + * x1 - memory buffer, MTE_PAGE_TAG_STORAGE_SIZE bytes */ -SYM_FUNC_START(mte_save_page_tags) +SYM_FUNC_START(mte_copy_page_tags_to_buf) multitag_transfer_size x7, x5 1: mov x2, #0 @@ -153,14 +153,14 @@ SYM_FUNC_START(mte_save_page_tags) b.ne 1b ret -SYM_FUNC_END(mte_save_page_tags) +SYM_FUNC_END(mte_copy_page_tags_to_buf) /* - * Restore the tags in a page + * Restore the tags in a page from a buffer * x0 - page address - * x1 - tag storage, MTE_PAGE_TAG_STORAGE bytes + * x1 - memory buffer, MTE_PAGE_TAG_STORAGE_SIZE bytes */ -SYM_FUNC_START(mte_restore_page_tags) +SYM_FUNC_START(mte_copy_page_tags_from_buf) multitag_transfer_size x7, x5 1: ldr x2, [x1], #8 @@ -174,4 +174,4 @@ SYM_FUNC_START(mte_restore_page_tags) b.ne 1b ret -SYM_FUNC_END(mte_restore_page_tags) +SYM_FUNC_END(mte_copy_page_tags_from_buf) diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c index a31833e3ddc5..2a43746b803f 100644 --- a/arch/arm64/mm/mteswap.c +++ b/arch/arm64/mm/mteswap.c @@ -7,79 +7,79 @@ #include #include -static DEFINE_XARRAY(mte_pages); +static DEFINE_XARRAY(tags_by_swp_entry); -void *mte_allocate_tag_storage(void) +void *mte_allocate_tag_buf(void) { /* tags granule is 16 bytes, 2 tags stored per byte */ - return kmalloc(MTE_PAGE_TAG_STORAGE, GFP_KERNEL); + return kmalloc(MTE_PAGE_TAG_STORAGE_SIZE, GFP_KERNEL); } -void mte_free_tag_storage(char *storage) +void mte_free_tag_buf(void *buf) { - kfree(storage); + kfree(buf); } -int mte_save_tags(struct page *page) +int mte_save_page_tags_by_swp_entry(struct page *page) { - void *tag_storage, *ret; + void *tags, *ret; if (!page_mte_tagged(page)) return 0; - tag_storage = mte_allocate_tag_storage(); - if (!tag_storage) + tags = mte_allocate_tag_buf(); + if (!tags) return -ENOMEM; - mte_save_page_tags(page_address(page), tag_storage); + mte_copy_page_tags_to_buf(page_address(page), tags); /* lookup the swap entry.val from the page */ - ret = xa_store(&mte_pages, page_swap_entry(page).val, tag_storage, + ret = xa_store(&tags_by_swp_entry, page_swap_entry(page).val, tags, GFP_KERNEL); if (WARN(xa_is_err(ret), "Failed to store MTE tags")) { - mte_free_tag_storage(tag_storage); + mte_free_tag_buf(tags); return xa_err(ret); } else if (ret) { /* Entry is being replaced, free the old entry */ - mte_free_tag_storage(ret); + mte_free_tag_buf(ret); } return 0; } -void mte_restore_tags(swp_entry_t entry, struct page *page) +void mte_restore_page_tags_by_swp_entry(swp_entry_t entry, struct page *page) { - void *tags = xa_load(&mte_pages, entry.val); + void *tags = xa_load(&tags_by_swp_entry, entry.val); if (!tags) return; if (try_page_mte_tagging(page)) { - mte_restore_page_tags(page_address(page), tags); + mte_copy_page_tags_from_buf(page_address(page), tags); set_page_mte_tagged(page); } } -void mte_invalidate_tags(int type, pgoff_t offset) +void mte_invalidate_tags_by_swp_entry(int type, pgoff_t offset) { swp_entry_t entry = swp_entry(type, offset); - void *tags = xa_erase(&mte_pages, entry.val); + void 
*tags = xa_erase(&tags_by_swp_entry, entry.val); - mte_free_tag_storage(tags); + mte_free_tag_buf(tags); } -void mte_invalidate_tags_area(int type) +void mte_invalidate_tags_area_by_swp_entry(int type) { swp_entry_t entry = swp_entry(type, 0); swp_entry_t last_entry = swp_entry(type + 1, 0); void *tags; - XA_STATE(xa_state, &mte_pages, entry.val); + XA_STATE(xa_state, &tags_by_swp_entry, entry.val); - xa_lock(&mte_pages); + xa_lock(&tags_by_swp_entry); xas_for_each(&xa_state, tags, last_entry.val - 1) { - __xa_erase(&mte_pages, xa_state.xa_index); - mte_free_tag_storage(tags); + __xa_erase(&tags_by_swp_entry, xa_state.xa_index); + mte_free_tag_buf(tags); } - xa_unlock(&mte_pages); + xa_unlock(&tags_by_swp_entry); } From patchwork Thu Jan 25 16:42:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531335 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 804D0C47258 for ; Thu, 25 Jan 2024 16:44:34 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1BB016B009E; Thu, 25 Jan 2024 11:44:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1463E8D0002; Thu, 25 Jan 2024 11:44:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F274D6B00A0; Thu, 25 Jan 2024 11:44:33 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id DB1026B009E for ; Thu, 25 Jan 2024 11:44:33 -0500 (EST) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 9D60680875 for ; Thu, 25 Jan 2024 16:44:33 +0000 (UTC) X-FDA: 81718406826.16.5C6B9A6 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf18.hostedemail.com (Postfix) with ESMTP id D5E641C0014 for ; Thu, 25 Jan 2024 16:44:31 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=none; spf=pass (imf18.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201072; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XZVWsMwrNpcyVX1D23AkubGGDRD4MEkTOQdbRdm7PXE=; b=2Lt6AQUQvrhapPLvcfkvlTHFZhSuSq2GF465WNhDAucDB4qJeYMzSUl1H64me5Z74V7IDL /IawKWm4PkoULrJT9ROQJO5oxa6tY7dkMvx44rIb0CzBeTPd8mNoQ2WcQRT5m2BsUbHyDg yXXZuY3XRpzBjC6kYpS951QZOtg5aQ4= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201072; a=rsa-sha256; cv=none; b=556b7pI20/prGrhxq8lc6UVT4iSxKE5G9UWntMwonUbnw+LpJ5ZhEyOcDofnPXm8kG0vEu s1aPIYm9ADTnXlFeGhjwCr/hJlpybxedNw9utF7lRXK0Fv4iuUBBpdEGizqAumOfvT0Vyw 2InZVBQkAPIZyPfIITVcCVHg8OeJM0c= ARC-Authentication-Results: i=1; imf18.hostedemail.com; dkim=none; spf=pass (imf18.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from 
usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 97188165C; Thu, 25 Jan 2024 08:45:15 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A9C433F5A1; Thu, 25 Jan 2024 08:44:25 -0800 (PST)
From: Alexandru Elisei
To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com
Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH RFC v3 18/35] arm64: mte: Rename __GFP_ZEROTAGS to __GFP_TAGGED
Date: Thu, 25 Jan 2024 16:42:39 +0000
Message-Id: <20240125164256.4147-19-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>
MIME-Version: 1.0

__GFP_ZEROTAGS is used to instruct the page allocator to zero the tags at the same time as the physical frame is zeroed. The name is slightly misleading: it doesn't mean that the tags are zeroed unconditionally, but that they are zeroed if and only if the physical frame is also zeroed (either __GFP_ZERO is set or init_on_alloc is 1).
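For reference, the check that implements this behaviour, condensed from post_alloc_hook() in mm/page_alloc.c (the same lines touched by the hunk further down), is roughly:

	bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
		    !should_skip_init(gfp_flags);
	/* Tags are zeroed only when the data page itself is zeroed. */
	bool zero_tags = init && (gfp_flags & __GFP_ZEROTAGS);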
Rename it to __GFP_TAGGED, in preparation for it to be used by the page allocator to recognize when an allocation is tagged (has metadata). Signed-off-by: Alexandru Elisei --- arch/arm64/mm/fault.c | 2 +- include/linux/gfp_types.h | 6 +++--- include/trace/events/mmflags.h | 2 +- mm/page_alloc.c | 2 +- mm/shmem.c | 2 +- 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 4d3f0a870ad8..c022e473c17c 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -944,7 +944,7 @@ NOKPROBE_SYMBOL(do_debug_exception); gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp) { if (vma->vm_flags & VM_MTE) - return __GFP_ZEROTAGS; + return __GFP_TAGGED; return 0; } diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h index 1b6053da8754..f638353ebdc7 100644 --- a/include/linux/gfp_types.h +++ b/include/linux/gfp_types.h @@ -45,7 +45,7 @@ typedef unsigned int __bitwise gfp_t; #define ___GFP_HARDWALL 0x100000u #define ___GFP_THISNODE 0x200000u #define ___GFP_ACCOUNT 0x400000u -#define ___GFP_ZEROTAGS 0x800000u +#define ___GFP_TAGGED 0x800000u #ifdef CONFIG_KASAN_HW_TAGS #define ___GFP_SKIP_ZERO 0x1000000u #define ___GFP_SKIP_KASAN 0x2000000u @@ -226,7 +226,7 @@ typedef unsigned int __bitwise gfp_t; * * %__GFP_ZERO returns a zeroed page on success. * - * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself + * %__GFP_TAGGED zeroes memory tags at allocation time if the memory itself * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that * __GFP_SKIP_ZERO is not set). This flag is intended for optimization: setting * memory tags at the same time as zeroing memory has minimal additional @@ -241,7 +241,7 @@ typedef unsigned int __bitwise gfp_t; #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) #define __GFP_COMP ((__force gfp_t)___GFP_COMP) #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) -#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) +#define __GFP_TAGGED ((__force gfp_t)___GFP_TAGGED) #define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO) #define __GFP_SKIP_KASAN ((__force gfp_t)___GFP_SKIP_KASAN) diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index d801409b33cf..6ca0d5ed46c0 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -50,7 +50,7 @@ gfpflag_string(__GFP_RECLAIM), \ gfpflag_string(__GFP_DIRECT_RECLAIM), \ gfpflag_string(__GFP_KSWAPD_RECLAIM), \ - gfpflag_string(__GFP_ZEROTAGS) + gfpflag_string(__GFP_TAGGED) #ifdef CONFIG_KASAN_HW_TAGS #define __def_gfpflag_names_kasan , \ diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 502ee3eb8583..0a0118612a13 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1480,7 +1480,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order, { bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) && !should_skip_init(gfp_flags); - bool zero_tags = init && (gfp_flags & __GFP_ZEROTAGS); + bool zero_tags = init && (gfp_flags & __GFP_TAGGED); int i; set_page_private(page, 0); diff --git a/mm/shmem.c b/mm/shmem.c index 621fabc3b8c6..3e28357b0a40 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1585,7 +1585,7 @@ static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp, */ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp) { - gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM | __GFP_ZEROTAGS; + gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM | __GFP_TAGGED; gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY; gfp_t 
zoneflags = limit_gfp & GFP_ZONEMASK; gfp_t result = huge_gfp & ~(allowflags | GFP_ZONEMASK); From patchwork Thu Jan 25 16:42:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531336 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 74BE7C47258 for ; Thu, 25 Jan 2024 16:44:40 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D7AA66B009F; Thu, 25 Jan 2024 11:44:39 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D03F16B00A0; Thu, 25 Jan 2024 11:44:39 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B7DCC6B00A1; Thu, 25 Jan 2024 11:44:39 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id A29D26B009F for ; Thu, 25 Jan 2024 11:44:39 -0500 (EST) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 6FA441A06D7 for ; Thu, 25 Jan 2024 16:44:39 +0000 (UTC) X-FDA: 81718407078.15.B4C44AF Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf24.hostedemail.com (Postfix) with ESMTP id ADA2A180013 for ; Thu, 25 Jan 2024 16:44:37 +0000 (UTC) Authentication-Results: imf24.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf24.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201077; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=DnGvKJCeAiPGewtmauVNmApKa8ZS9uEEWMVkr22WqT0=; b=dhHHOmQBypOOfvNF2x9U2WpwjB4iwgWaXolhtb4YanyNoYWVnVdN1Y8OOnH5OBcAEyUMSD tJdmLZE25deVMb2YZ4Yjpt3eF4NHJzDAZhOQovkTlhioA7425kBzpo6PWcmqjtbW/X05cG mP34bu3yy01UvlfKq8gSLR3gl6PtK1Y= ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf24.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201077; a=rsa-sha256; cv=none; b=zXajXMszGBL2S5QQCJl0cE59KZnorKPLJlsIAwg1o5GxSjmeVrtbo99B54DUEkiOMdsW3h sEWFi1OREyfUKwlbHlIuENRPI818B7iNqBOnPEBB2HyznVVFHrow2UPE6Wd+aXrKt17Cgx mFAXvgA4uFvSGwL3DoIw2c58SA7l2XA= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 607B01682; Thu, 25 Jan 2024 08:45:21 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 711BE3F5A1; Thu, 25 Jan 2024 08:44:31 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, 
vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 19/35] arm64: mte: Discover tag storage memory Date: Thu, 25 Jan 2024 16:42:40 +0000 Message-Id: <20240125164256.4147-20-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: ADA2A180013 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: fd77wa5h8kxgy15hr9e1xwjhk1isaaqz X-HE-Tag: 1706201077-707296 X-HE-Meta: U2FsdGVkX1+d2b3n8/Tvlmv0Y6SwVsGbjGzgXfiOcWJevAnlrN2XVEZ1owdslbqPp/d7uRKtC9hRkzN9891+MaToIPbFnR2cjuqCtMrlD+mQpwjCsg1NKiM4VefptrXbo5kbyEqt+xQLcDUqXJLU5pgP17pXOFDHtRxjgKbv35k2HjaqvgDx6GtrH68YbE6fLDIMotxoc5/7E9sLYfR3nam8/Q9TirSm3wUerXxetp3P0ixDDgqD+IdKcaylxJwt6Z2F8afnha6T0D44VQpGnJMu+KbhB09kJ71z3grXYSuEDXfvoE5u9TB8mTcSXPlfYdFTZzpmWo0MB2RHnekD4ovi0Besm1JVCQe/dcA+UEdvWl7moZbIsrLw39hiogWYRuikzzFVOXz6axYs1VEMlXw4rsuzmNPCWoiuJGlL6hbkldB7nJ5WP0x7thV019SnVWynWlmSebOLhFfjRfGGOJci+CVGLWIM2HoQTiWn9epPEiVy4Xauq7qoWT8oEN32DFpWE0Xder84l0M9HXSCDrnP5Q1CQQLXpzJGpyBcUYvJ5kBPoaI/gdVgnHfePyQNV4cIeHgHgdTwIPvtFE+4Rl88sCL4+UqvDNGnOfKka68a08jFNWBCe/xVwb/XIbFJJTXMFOVh5CCKgGTTTAUXnTXASgbTGJfULYEc9wbZ1vhyF4+wbdmHdCP9mEDHlR25HCx4h6QJtUZI3bp10tYEVIQ2e/ChlZ3rON+Ud21F935B0oGqB9f1bGD486uNknUh7zuk2SkgcPMtqImZ85VSsYwalMqwDZQJpKF/gbUUix/b3rNrOrp0FZ/eXjC5YhvaeFX0CWAilEw2LAJpMcnLH0qNKHGRdzsWKpXUEstL0Nuz+QvHIDSntsqWDjBZngeoTbd4+RxpJdOEG4iGGLXJuUpPbF282T7AaJqm6Lr5lgtmw8K5ALc+lub/2FtVSa8BTJPkIvuozWomq8F3CHZ fbjIUG6v uC/m/6q6kAF0PkZpAgxSh+aySwsCnHdszBS7oXzA9Z592LxLT/cKwOB1C6vvQPdz6DuijLanNPHgE6na/+WdNTZ9Biv+5GzTQWfKIykK0REgVpf8E+J17qvcHCqIkMpP43eKkc2uWJrGG1sneDEp3mK7wZUAotZKX3InCfbUzRQJEQq2CoBUgCgw3hqJ6/RnCmTV+g1noEXGa6UtLen8DuSeqgHvMLVtpp5Pd X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Allow the kernel to get the base address, size, block size and associated memory node for tag storage from the device tree blob. A tag storage region represents the smallest contiguous memory region that holds all the tags for the associated contiguous memory region which can be tagged. For example, for a 32GB contiguous tagged memory the corresponding tag storage region is exactly 1GB of contiguous memory, not two adjacent 512M of tag storage memory, nor one 2GB tag storage region. Tag storage is described as reserved memory; future patches will teach the kernel how to make use of it for data (non-tagged) allocations. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * Reworked from rfc v2 patch #11 ("arm64: mte: Reserve tag storage memory"). * Added device tree schema (Rob Herring) * Tag storage memory is now described in the "reserved-memory" node (Rob Herring). 
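For context, MTE stores a 4-bit tag for every 16 bytes of memory, so a tag storage region is 1/32 the size of the memory it covers; this is where the 32GB -> 1GB example above comes from, and the same 32:1 ratio is used later by mte_tag_storage.c. A stand-alone sketch of the arithmetic (illustrative only):

        /* 4 bits of tag per 16-byte granule => tag storage is 1/32 of the memory. */
        unsigned long long tagged_size = 32ULL << 30;                  /* 32GB */
        unsigned long long tag_size    = tagged_size * 4 / (16 * 8);   /* 1GB  */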
.../reserved-memory/arm,mte-tag-storage.yaml | 78 +++++++++ arch/arm64/Kconfig | 12 ++ arch/arm64/include/asm/mte_tag_storage.h | 16 ++ arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/mte_tag_storage.c | 158 ++++++++++++++++++ arch/arm64/mm/init.c | 3 + 6 files changed, 268 insertions(+) create mode 100644 Documentation/devicetree/bindings/reserved-memory/arm,mte-tag-storage.yaml create mode 100644 arch/arm64/include/asm/mte_tag_storage.h create mode 100644 arch/arm64/kernel/mte_tag_storage.c diff --git a/Documentation/devicetree/bindings/reserved-memory/arm,mte-tag-storage.yaml b/Documentation/devicetree/bindings/reserved-memory/arm,mte-tag-storage.yaml new file mode 100644 index 000000000000..a99aaa1e8b6e --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/arm,mte-tag-storage.yaml @@ -0,0 +1,78 @@ +# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/reserved-memory/arm,mte-tag-storage.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Tag storage memory for Memory Tagging Extension + +description: | + Description of the tag storage memory region that Linux can use to store + data when the associated memory is not tagged. + + The reserved memory described by the node must also be described by a + standalone 'memory' node. + +maintainers: + - Alexandru Elisei + +allOf: + - $ref: reserved-memory.yaml + +properties: + compatible: + const: arm,mte-tag-storage + + reg: + description: | + Specifies the memory region that MTE uses for tag storage. The size of the + region must be equal to the size needed to store all the tags for the + associated tagged memory. + + block-size: + description: | + Specifies the minimum multiple of 4K bytes of tag storage where all the + tags stored in the block correspond to a contiguous memory region. This + is needed for platforms where the memory controller interleaves tag + writes to memory. + + For example, if the memory controller interleaves tag writes for 256KB + of contiguous memory across 8K of tag storage (2-way interleave), then + the correct value for 'block-size' is 0x2000. + + This value is a hardware property, independent of the selected kernel page + size. + $ref: /schemas/types.yaml#/definitions/uint32 + + tagged-memory: + description: | + Specifies the memory node, as a phandle, for which all the tags are + stored in the tag storage region. + + The memory node must describe one contiguous memory region (i.e, the + 'ranges' property of the memory node must have exactly one entry). + $ref: /schemas/types.yaml#/definitions/phandle + +unevaluatedProperties: false + +required: + - compatible + - reg + - block-size + - tagged-memory + - reusable + +examples: + - | + reserved-memory { + #address-cells = <2>; + #size-cells = <2>; + + tags0: tag-storage@8f8000000 { + compatible = "arm,mte-tag-storage"; + reg = <0x08 0xf8000000 0x00 0x4000000>; + block-size = <0x1000>; + tagged-memory = <&memory0>; + reusable; + }; + }; diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index aa7c1d435139..92d97930b56e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2082,6 +2082,18 @@ config ARM64_MTE Documentation/arch/arm64/memory-tagging-extension.rst. +if ARM64_MTE +config ARM64_MTE_TAG_STORAGE + bool + help + Adds support for dynamic management of the memory used by the hardware + for storing MTE tags. This memory, unlike normal memory, cannot be + tagged. 
When it is used to store tags for another memory location it + cannot be used for any type of allocation. + + If unsure, say N +endif # ARM64_MTE + endmenu # "ARMv8.5 architectural features" menu "ARMv8.7 architectural features" diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h new file mode 100644 index 000000000000..3c2cd29e053e --- /dev/null +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2023 ARM Ltd. + */ +#ifndef __ASM_MTE_TAG_STORAGE_H +#define __ASM_MTE_TAG_STORAGE_H + +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE +void mte_init_tag_storage(void); +#else +static inline void mte_init_tag_storage(void) +{ +} +#endif /* CONFIG_ARM64_MTE_TAG_STORAGE */ + +#endif /* __ASM_MTE_TAG_STORAGE_H */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index e5d03a7039b4..89c28b538908 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -70,6 +70,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o obj-$(CONFIG_ARM64_MTE) += mte.o +obj-$(CONFIG_ARM64_MTE_TAG_STORAGE) += mte_tag_storage.o obj-y += vdso-wrap.o obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.o diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c new file mode 100644 index 000000000000..2f32265d8ad8 --- /dev/null +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Support for dynamic tag storage. + * + * Copyright (C) 2023 ARM Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +struct tag_region { + struct range mem_range; /* Memory associated with the tag storage, in PFNs. */ + struct range tag_range; /* Tag storage memory, in PFNs. */ + u32 block_size_pages; /* Tag block size, in pages. */ + phandle mem_phandle; /* phandle for the associated memory node. */ +}; + +#define MAX_TAG_REGIONS 32 + +static struct tag_region tag_regions[MAX_TAG_REGIONS]; +static int num_tag_regions; + +static u32 __init get_block_size_pages(u32 block_size_bytes) +{ + u32 a = PAGE_SIZE; + u32 b = block_size_bytes; + u32 r; + + /* Find greatest common divisor using the Euclidian algorithm. 
*/ + do { + r = a % b; + a = b; + b = r; + } while (b != 0); + + return PHYS_PFN(PAGE_SIZE * block_size_bytes / a); +} + +int __init tag_storage_probe(struct reserved_mem *rmem) +{ + struct tag_region *region; + u32 block_size_bytes; + int ret; + + if (num_tag_regions == MAX_TAG_REGIONS) { + pr_err("Exceeded maximum number of tag storage regions"); + goto out_err; + } + + region = &tag_regions[num_tag_regions]; + region->tag_range.start = PHYS_PFN(rmem->base); + region->tag_range.end = PHYS_PFN(rmem->base + rmem->size - 1); + + ret = of_flat_read_u32(rmem->fdt_node, "block-size", &block_size_bytes); + if (ret || block_size_bytes == 0) { + pr_err("Invalid or missing 'block-size' property"); + goto out_err; + } + + region->block_size_pages = get_block_size_pages(block_size_bytes); + if (range_len(®ion->tag_range) % region->block_size_pages != 0) { + pr_err("Tag storage region size 0x%llx pages is not a multiple of block size 0x%x pages", + range_len(®ion->tag_range), region->block_size_pages); + goto out_err; + } + + ret = of_flat_read_u32(rmem->fdt_node, "tagged-memory", ®ion->mem_phandle); + if (ret) { + pr_err("Invalid or missing 'tagged-memory' property"); + goto out_err; + } + + num_tag_regions++; + return 0; + +out_err: + num_tag_regions = 0; + return -EINVAL; +} +RESERVEDMEM_OF_DECLARE(tag_storage, "arm,mte-tag-storage", tag_storage_probe); + +static int __init mte_find_tagged_memory_regions(void) +{ + struct device_node *mem_dev; + struct tag_region *region; + struct range *mem_range; + const __be32 *reg; + u64 addr, size; + int i; + + for (i = 0; i < num_tag_regions; i++) { + region = &tag_regions[i]; + mem_range = ®ion->mem_range; + + mem_dev = of_find_node_by_phandle(region->mem_phandle); + if (!mem_dev) { + pr_err("Cannot find tagged memory node"); + goto out; + } + + reg = of_get_property(mem_dev, "reg", NULL); + if (!reg) { + pr_err("Invalid tagged memory node"); + goto out_put_mem; + } + + addr = of_translate_address(mem_dev, reg); + if (addr == OF_BAD_ADDR) { + pr_err("Invalid memory address"); + goto out_put_mem; + } + + size = of_read_number(reg + of_n_addr_cells(mem_dev), of_n_size_cells(mem_dev)); + if (!size) { + pr_err("Invalid memory size"); + goto out_put_mem; + } + + mem_range->start = PHYS_PFN(addr); + mem_range->end = PHYS_PFN(addr + size - 1); + + of_node_put(mem_dev); + } + + return 0; + +out_put_mem: + of_node_put(mem_dev); +out: + return -EINVAL; +} + +void __init mte_init_tag_storage(void) +{ + int ret; + + if (num_tag_regions == 0) + return; + + ret = mte_find_tagged_memory_regions(); + if (ret) + goto out_disabled; + + return; + +out_disabled: + num_tag_regions = 0; + pr_info("MTE tag storage region management disabled"); +} diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 74c1db8ce271..2ccc0c294a13 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -39,6 +39,7 @@ #include #include #include +#include #include #include #include @@ -386,6 +387,8 @@ void __init mem_init(void) /* this will put all unused low memory onto the freelists */ memblock_free_all(); + mte_init_tag_storage(); + /* * Check boundaries twice: Some fundamental inconsistencies can be * detected at build time already. 
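A note on get_block_size_pages() above: the Euclidean GCD is used to round the hardware 'block-size' up to a whole number of kernel pages, in other words it returns lcm(PAGE_SIZE, block-size) expressed in pages. A self-contained sketch of the same computation (illustrative, not the kernel code):

        static unsigned int block_size_in_pages(unsigned int page_size, unsigned int block_size_bytes)
        {
                unsigned int a = page_size, b = block_size_bytes, r;

                /* Euclid's algorithm; the GCD ends up in 'a'. */
                while (b) {
                        r = a % b;
                        a = b;
                        b = r;
                }

                /* lcm(page_size, block_size_bytes) / page_size == block_size_bytes / gcd */
                return block_size_bytes / a;
        }

With 4K pages and the 0x2000 value from the example in the binding description this returns 2; with 64K pages it returns 1, because a single page already covers the whole hardware block.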
From patchwork Thu Jan 25 16:42:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531337 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 71E31C47422 for ; Thu, 25 Jan 2024 16:44:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id F293E6B00A1; Thu, 25 Jan 2024 11:44:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id EB3CE6B00A2; Thu, 25 Jan 2024 11:44:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CB7CC8D0002; Thu, 25 Jan 2024 11:44:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id B2B706B00A1 for ; Thu, 25 Jan 2024 11:44:45 -0500 (EST) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 77B1480858 for ; Thu, 25 Jan 2024 16:44:45 +0000 (UTC) X-FDA: 81718407330.13.E07942D Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf20.hostedemail.com (Postfix) with ESMTP id 8181B1C000E for ; Thu, 25 Jan 2024 16:44:43 +0000 (UTC) Authentication-Results: imf20.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf20.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201083; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gg8GzzdvepyVA3FDccd3l1+kjX2ys5Xo5RALvMpgS44=; b=fXcl3TA3qiIGjvZSk7d0tshktJUA1l+1z+CnI1DingQaJ7+Z/+nUw5pBakiHrScFuOauxB Rpt1A2TQGdGdKlXPlmpyR8+8oVDBh8EyLPJc5ZIMgwyNFN85gtREhZ9Mu25a7x7W1605DV S9rBH9x5csRId5mMEt64UTN3pu+j1jg= ARC-Authentication-Results: i=1; imf20.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf20.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201083; a=rsa-sha256; cv=none; b=nScPR1WdJApVaCGrvoJKZ3bducHzLhDhDMQIc34447/vqTRr6jwxNR4hE2alWuQjc8GHKN kCu82x31gWczmmVVzrfS8h8DrOdb+p+okJ9SvLqTPDTFIfKvGSE+/dqtjbmNKGwk9Lp14E EJB8B0fv6OWBUvD8e5RSPlSfG3i8mT8= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 294E21684; Thu, 25 Jan 2024 08:45:27 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3AF663F5A1; Thu, 25 Jan 2024 08:44:37 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, 
bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 20/35] arm64: mte: Add tag storage memory to CMA Date: Thu, 25 Jan 2024 16:42:41 +0000 Message-Id: <20240125164256.4147-21-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspam-User: X-Stat-Signature: p5e5d7sfdx1euzxuzhb8ujndmqobupf3 X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 8181B1C000E X-HE-Tag: 1706201083-696971 X-HE-Meta: U2FsdGVkX19n8cNNCmMI3seW+CH6GGvzJahYOM5NPR821hFZciHto2DtD+cX5jjteE+jvTFVVQvzLxuIQGdZaihK2+g4hxTf82S8VLYr+vaPFyj9ECrxwLjvfWyQB2BI3h4LDCGic0+HGqJklB/dh9uvrsxIdzQ194w3fRUO6mbW8sBKO4mAHdr4Ft8geZ1U3Jfs9HC1n8yYI7N1ilN4kesb0cJkrITzhWHtEurl6NJBse+HYPQj3SsynVzaYW2BTj4h3uOjgu8eo/4XS9AbqRkA+p8QCeHhfDzwk3KYm5uD+HFWQQy0y7OHMSxVOb8OdF19OGJlzzVU1kPcn1VAdRwws17DPE1FoQKlox4pDUdgWpY2dqFTFf4GIpLnzdBDf2RtSYK+GsmOCJf/UwkergQm4chMUkohEC5Kh2YlHkt7iJIZWvi0b8lAHXDOgTzeb4uLcuAREaEaqdxhNQMSK5Wa9GpmG770ujTJsEAAJzykCyj4iuB0Ovo+iXNRZZ56eo2s2b2K2wuf/CarAX10xs+NQ1rt+Gu2Ob/L6ot7NNWsUj2Gme+CU4DwYeF8k9xXpC3U5BvPNLiUyMew+0VNAIeL8Vg8pT6xkfZvnFmv1xyNu5oqFo8HoINOzN2/nyGgh7aMrzNRInKa9jNVhGpe+wiiamVbH8hg1s/ABMIWunz3rYE68XLbjmjxH5A3yzaopJcG4L1rqw2cBD1gOuR355hfgUP2Y70nwQmu4l2PIpWjahalZldQHB3SWPnwfMg++ObWOfnPxdGrmiUezrsXAk4tZtwWNryvjRlEnlDcFHb3BMafZYv0Bvmug9XiRZC5OuI05q2BKTgPFw3WhPBz3NySpVkHKuVvnUC0cnkARccDSHavZ3SocHemPi1f91pjoubFunQ14ETlttoq0lzWcB0e3okvpMpKqhdNLtUQqr79hWMi8Em6rTbiprIqIpfUuzLIhCYxEXno7/dzRuO PVY4epAy 1QbO46FvSOrTj+XmqyUZDAMeL/ZA1JAdn6yVkHAcNOZSh+LbxjEyVqP5LQh9fzEN+e7Llyq2IATcu9qLY8T6Q0c4u+amrmVlnrQpTbIKlejk8TKUf4cZ5B6w/VOJwXdCHEmCfzUvYXg3U0upuYCn3gLeb6/5KmAN2jmCA X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe:
Add the MTE tag storage pages to CMA, which allows the page allocator to manage them like regular pages.
The CMA migratetype lends the tag storage pages some very desirable properties:
* They cannot be longterm pinned, meaning they should always be migratable.
* The pages can be allocated explicitly by using their PFN (with cma_alloc_range()) when they are needed to store tags.
Signed-off-by: Alexandru Elisei --- Changes since v2:
* Reworked from rfc v2 patch #12 ("arm64: mte: Add tag storage pages to the MIGRATE_CMA migratetype").
* Tag storage memory is now added to the cma_areas array and will be managed like a regular CMA region (David Hildenbrand).
* If a tag storage region spans multiple zones, CMA won't be able to activate the region. Split such regions into multiple tag storage regions (Hyesoo Yu).
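A rough usage sketch of the second property (the CMA helpers are introduced elsewhere in this series; the variables and arguments here are illustrative):

        /* Reserve one specific tag storage block by PFN... */
        ret = cma_alloc_range(region->cma, block_pfn, region->block_size_pages, 3, GFP_KERNEL);
        if (ret)
                return ret;     /* best effort: the block stays in use for data */

        /* ...and give it back once the tags stored in it are no longer needed. */
        cma_release(region->cma, pfn_to_page(block_pfn), region->block_size_pages);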
arch/arm64/Kconfig | 1 + arch/arm64/kernel/mte_tag_storage.c | 150 +++++++++++++++++++++++++++- 2 files changed, 150 insertions(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 92d97930b56e..6f65e9005dc9 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2085,6 +2085,7 @@ config ARM64_MTE if ARM64_MTE config ARM64_MTE_TAG_STORAGE bool + select CONFIG_CMA help Adds support for dynamic management of the memory used by the hardware for storing MTE tags. This memory, unlike normal memory, cannot be diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index 2f32265d8ad8..90b157132efa 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -5,6 +5,8 @@ * Copyright (C) 2023 ARM Ltd. */ +#include +#include #include #include #include @@ -22,6 +24,7 @@ struct tag_region { struct range tag_range; /* Tag storage memory, in PFNs. */ u32 block_size_pages; /* Tag block size, in pages. */ phandle mem_phandle; /* phandle for the associated memory node. */ + struct cma *cma; /* CMA cookie */ }; #define MAX_TAG_REGIONS 32 @@ -139,9 +142,88 @@ static int __init mte_find_tagged_memory_regions(void) return -EINVAL; } +static void __init mte_split_tag_region(struct tag_region *region, unsigned long last_tag_pfn) +{ + struct tag_region *new_region; + unsigned long last_mem_pfn; + + new_region = &tag_regions[num_tag_regions]; + last_mem_pfn = region->mem_range.start + (last_tag_pfn - region->tag_range.start) * 32; + + new_region->mem_range.start = last_mem_pfn + 1; + new_region->mem_range.end = region->mem_range.end; + region->mem_range.end = last_mem_pfn; + + new_region->tag_range.start = last_tag_pfn + 1; + new_region->tag_range.end = region->tag_range.end; + region->tag_range.end = last_tag_pfn; + + new_region->block_size_pages = region->block_size_pages; + + num_tag_regions++; +} + +/* + * Split any tag region that spans multiple zones - CMA will fail if that + * happens. + */ +static int __init mte_split_tag_regions(void) +{ + struct tag_region *region; + struct range *tag_range; + struct zone *zone; + unsigned long pfn; + int i; + + for (i = 0; i < num_tag_regions; i++) { + region = &tag_regions[i]; + tag_range = ®ion->tag_range; + zone = page_zone(pfn_to_page(tag_range->start)); + + for (pfn = tag_range->start + 1; pfn <= tag_range->end; pfn++) { + if (page_zone(pfn_to_page(pfn)) == zone) + continue; + + if (WARN_ON_ONCE(pfn % region->block_size_pages)) + goto out_err; + + if (num_tag_regions == MAX_TAG_REGIONS) + goto out_err; + + mte_split_tag_region(&tag_regions[i], pfn - 1); + /* Move on to the next region. */ + break; + } + } + + return 0; + +out_err: + pr_err("Error splitting tag storage region 0x%llx-0x%llx spanning multiple zones", + PFN_PHYS(tag_range->start), PFN_PHYS(tag_range->end + 1) - 1); + return -EINVAL; +} + void __init mte_init_tag_storage(void) { - int ret; + unsigned long long mem_end; + struct tag_region *region; + unsigned long pfn, order; + u64 start, end; + int i, j, ret; + + /* + * Tag storage memory requires that tag storage pages in use for data + * are always migratable when they need to be repurposed to store tags. + * If ARCH_KEEP_MEMBLOCK is enabled, kexec will not scan reserved + * memblocks when trying to find a suitable location for the kernel + * image. This means that kexec will not use tag storage pages for + * copying the kernel, and the pages will remain migratable. + * + * Add the check in case arm64 stops selecting ARCH_KEEP_MEMBLOCK by + * default. 
+ */ + BUILD_BUG_ON(!IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)); if (num_tag_regions == 0) return; @@ -150,6 +232,72 @@ void __init mte_init_tag_storage(void) if (ret) goto out_disabled; + mem_end = PHYS_PFN(memblock_end_of_DRAM()); + + /* + * MTE is disabled, tag storage pages can be used like any other pages. + * The only restriction is that the pages cannot be used by kexec + * because the memory remains marked as reserved in the memblock + * allocator. + */ + if (!system_supports_mte()) { + for (i = 0; i< num_tag_regions; i++) { + start = tag_regions[i].tag_range.start; + end = tag_regions[i].tag_range.end; + + /* end is inclusive, mem_end is not */ + if (end >= mem_end) + end = mem_end - 1; + if (end < start) + continue; + for (pfn = start; pfn <= end; pfn++) + free_reserved_page(pfn_to_page(pfn)); + } + goto out_disabled; + } + + /* + * Check that tag storage is addressable by the kernel. + * cma_init_reserved_mem(), unlike cma_declare_contiguous_nid(), doesn't + * perform this check. + */ + for (i = 0; i< num_tag_regions; i++) { + start = tag_regions[i].tag_range.start; + end = tag_regions[i].tag_range.end; + + if (end >= mem_end) { + pr_err("Tag region 0x%llx-0x%llx outside addressable memory", + PFN_PHYS(start), PFN_PHYS(end + 1) - 1); + goto out_disabled; + } + } + + ret = mte_split_tag_regions(); + if (ret) + goto out_disabled; + + for (i = 0; i < num_tag_regions; i++) { + region = &tag_regions[i]; + + /* Tag storage pages are managed in block_size_pages chunks. */ + if (is_power_of_2(region->block_size_pages)) + order = ilog2(region->block_size_pages); + else + order = 0; + + ret = cma_init_reserved_mem(PFN_PHYS(region->tag_range.start), + PFN_PHYS(range_len(®ion->tag_range)), + order, NULL, ®ion->cma); + if (ret) { + for (j = 0; j < i; j++) + cma_remove_mem(®ion->cma); + goto out_disabled; + } + + /* Keep pages reserved if activation fails. 
*/ + cma_reserve_pages_on_error(region->cma); + } + return; out_disabled: From patchwork Thu Jan 25 16:42:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531338 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFA45C47422 for ; Thu, 25 Jan 2024 16:44:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 628486B00A3; Thu, 25 Jan 2024 11:44:51 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5B1FC6B00A4; Thu, 25 Jan 2024 11:44:51 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 451E78D0002; Thu, 25 Jan 2024 11:44:51 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 2ED666B00A3 for ; Thu, 25 Jan 2024 11:44:51 -0500 (EST) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id D69B0140515 for ; Thu, 25 Jan 2024 16:44:50 +0000 (UTC) X-FDA: 81718407540.11.94313AB Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf01.hostedemail.com (Postfix) with ESMTP id 1C23140007 for ; Thu, 25 Jan 2024 16:44:48 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf01.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201089; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=NKb8hXHRJf1xlcryWkhAW3EgZtKBRLV38rWEpFR2xbw=; b=YkiHAjESu55wNCJR2nKKN3InGkaPXB0fJ+7xFHunppZOzijWei8chymkyU2I1tcmDxmIlX 1jBHNzvq/Quo+ztZu3J/TPIvSX+v3kqLOfVkqRLObJzQvfUiNqJYs0VS8FHJmmK1vJP9Qj tgBIb3HGcsRuWH9i4dkVZrz/bSx5ZN0= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf01.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201089; a=rsa-sha256; cv=none; b=WC75esh0O/TQLGRZe9LMKmU+JF2osyq0sESvk3JL60FBGPI/DhFMNp/OdRYmVLlytCTpWx YVcfXnmJX0sqX9I/CWsLiwIYZEYBI7HDUmf3WrwAVCfCb4tOJ1JlX2wr5w2rwCbCHdJ0rJ uQx4QvTtbTXZ/5x2+94Fbcy+DGU0B8E= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E894C1688; Thu, 25 Jan 2024 08:45:32 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 03CA73F5A1; Thu, 25 Jan 2024 08:44:42 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, 
dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 21/35] arm64: mte: Disable dynamic tag storage management if HW KASAN is enabled Date: Thu, 25 Jan 2024 16:42:42 +0000 Message-Id: <20240125164256.4147-22-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 1C23140007 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: kartb4nrzk4tyan9cq5nyo55n9ug1ofg X-HE-Tag: 1706201088-740582 X-HE-Meta: U2FsdGVkX19dzh5qffO8g6D/3/DHANyMPe/U57djF0ul2UOds1cy2uAXaFjOZ0/CzYAyMt/LAUslD4Z2gYpkZmunBnq1UM5R3yXNBg918r6zqsVrkc4o0UMGJrEIR3s0du1UV0oG+CHKPBM5P7142jD0MLTdJqWGeqNBfOTTmnOivxkqedh+3IbLPEA1RKNhxy2Dn87QLBY+5X20U3BOhlUiYbiy0QwEFW6dqvwVEBisyZtNUkmQGiDE+1nZeMdACesVdhWJj3uJb3u8GMSwAhhM9pdgNp4CPqyeNFcDPVKQ04mvMQYnjsHvC6gKC72ueQzigjXQ4UJpjoWqhB852wikPVkQRvXva+4Xs+ZPCgH6FiO6fonRDYISWORKtcCHLDTHfmmpeUHMOhY7QFLA4n1Gys/b7N/o+prfFqQnYHu4nrkV62Uvf81IsA2wb/4mLDxq8E5v3Zmq2uGeI2hw11o5zxj0hidV/tUin1nGNYTTHNDBKkGjJr2nL5AbeYmiwfM6uF+AYJ9nQtbfOZH+/oDOMlmTrJStqVkfDnhabRIzBU492cHT94JNeO4UkcozCO7njNPmwowroJb5w70FwXLaL0vDu7hRT2DvGiinSO8yU/aa60KF+qgJnSSKh/iQ41BUwVDS9s4yAVwnbxYzZqlbirQtOGkXGL0HQQLOKgqX1SBAkTFeqdze3d5nDJFOpRIniQkHKuVGxjajBlyPiO04aJwPlwCU5yrIodphMS6h07P03HYJP+w1YUykCOkB8LIyyPQtf29D1f3nwxZflN2FmDFTU4whrFWhaxjBdXAm9mLqYafP6eCNJVGgL8edgQpcNcDw0nZOLwEuhJogWHiPBE9fbPTylrqwTq2ulEYyqzZ0HmWfwC0qWE9JiK+dB1KqU1bTzpt2ZARPgqgZ3q9moPueWKknZFnnkMc7BRLvTF7MFU4fw7Pa2rZBXykdP0FGi9o1L+4BHMdYu2v O+DKeYo+ JgwDZSP5AhCfVf4gwmK0to2tsgIEPYZalltVc43uX9arnnHGB5WJdzZxq40U9yjtz5gtB7u5NpQKVmSBiSDar3qrE3rGrBKrVerm9oE8qClMFZg4pJYZ69NCYpo2S5FWOVlwimEztAPhCnx4E6T7vV9sl7ZbPPk+2bfd/ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: To be able to reserve the tag storage associated with a tagged page requires that the tag storage can be migrated, if it's in use for data. The kernel allocates pages in non-preemptible contexts, which makes migration impossible. The only user of tagged pages in the kernel is HW KASAN, so don't use tag storage pages if HW KASAN is enabled. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * Expanded commit message (David Hildenbrand) arch/arm64/kernel/mte_tag_storage.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index 90b157132efa..9a1a8a45171e 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -256,6 +256,16 @@ void __init mte_init_tag_storage(void) goto out_disabled; } + /* + * The kernel allocates memory in non-preemptible contexts, which makes + * migration impossible when reserving the associated tag storage. The + * only in-kernel user of tagged pages is HW KASAN. 
+ */ + if (kasan_hw_tags_enabled()) { + pr_info("KASAN HW tags incompatible with MTE tag storage management"); + goto out_disabled; + } + /* * Check that tag storage is addressable by the kernel. * cma_init_reserved_mem(), unlike cma_declare_contiguous_nid(), doesn't From patchwork Thu Jan 25 16:42:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531339 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 95B56C47422 for ; Thu, 25 Jan 2024 16:44:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 28AA2280002; Thu, 25 Jan 2024 11:44:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 207568D0002; Thu, 25 Jan 2024 11:44:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 082CA6B00A6; Thu, 25 Jan 2024 11:44:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id E65156B00A4 for ; Thu, 25 Jan 2024 11:44:56 -0500 (EST) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id C392116082A for ; Thu, 25 Jan 2024 16:44:56 +0000 (UTC) X-FDA: 81718407792.02.5825671 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf11.hostedemail.com (Postfix) with ESMTP id 1854740023 for ; Thu, 25 Jan 2024 16:44:54 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=none; spf=pass (imf11.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201095; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=5zf4/QVJoGe+m11F6r0cQ8EbqmSgrLFc9Da1YM6fJtc=; b=ueJN1xPQMwX3jnrMH7fVt8CLP0lZbZymgmlYruQpOI2YY3F2mYnQKTkjuu4yUy5Mtgv+HR t66U7n5H9JegI1r3kmWPO7f5LHIWitTMaa9Lw3SzAJjCmW7fjGBKAQengkkRabzXzGIVTP lh9b36NTn452GVV8VpBwm0txkdZmheE= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201095; a=rsa-sha256; cv=none; b=ka08MmKMUzZ3bfaquSarlgeY/SOAkIRQBrfmIWyMmDCHo8iLRyKC+MYGYxtdjyfxr5DkhN TQ1vzimxxfcaAyUXYl3ixmWp/tZPvDhKQ1Z0GioNBEMt+Ni7ndcqoxO0nPxNHv+yqp2zic FYrqMJRgbCff0eipOuH9I4vvHMtAOKA= ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=none; spf=pass (imf11.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B1083168F; Thu, 25 Jan 2024 08:45:38 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C149A3F5A1; Thu, 25 Jan 2024 08:44:48 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, 
suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 22/35] arm64: mte: Enable tag storage if CMA areas have been activated Date: Thu, 25 Jan 2024 16:42:43 +0000 Message-Id: <20240125164256.4147-23-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 1854740023 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: yebzf39fbwidtcj6t58dq4fis6ffhuix X-HE-Tag: 1706201094-251289 X-HE-Meta: U2FsdGVkX18x1QwMmHGW1K9RNcikB/DEOgEsyHQDDfddcOakggYQMFLm9DdU9X8WsRbUiIAVmaHCYPe/VAMm8K3rYmqnxSEAW9v2OWc7EcptuByzKgUg7/VFhcte5EkKQpNACpGRz/dKOlFpJMyJ2kEzkiKZCheL4m53nchHrz+1tLvngveKRlynTDrUWoqHWTYTgiq9uSCXrSO6FFLQyirZTjTpPKTheHgIHzCu16YNqdAfTDcycxlYelqTcpul7vKcgahkCzPgg4IJaHRyu/bzI8tzPTkIC1YBF81vSIA5a+Cd9FXgoFOMGoJ3uSCx2H+/ujf2Vqqi9vNa2u115kuLH/Uu3DkltUxTZ23jOTe9SAfbXy127TbmSzW8qCKbMc+XH/NIPDFrnlDd1FHl8LWVUxSYSr/74xlYUs5SO45E+FO4+NDRora8YYAkzr8PtxO8ar+HcrSFBaCVhhD8nkjrAZrGwGJg/3CGXbjBpzbu0+tRSBJQviWLiT6nNeYDt5kvvs2h4NLvdoBH+c+cz3J+7BVNljfIrN4imJDjkvPceb1Ad5Y5k7EPjmiX5g0ABCUfSeuRS2RgzeRj3bZ4NW6u2YMiK2IgcLFZbcqRJmBPjsgzZDP8CqTsZ0W7jzLpp7GDvJn63rCW+1lpx6mWSp0NSNZXGLMZJZupzJ7H8UjXI2yj/8g1VRroHWnYQ6UWbzvow0zZuS1Rf+eG9y9gOPS+Hw+uTLn+kbhGXrKPir0fNFZEabdnYKZ0f2LKnVm6+gHJsOSBN7mnTA+VPfFUmji8ANIIdH3EPKxtLX9glACggJU4vulRWRXS6BI8XXNxTOaS7ZTm+FYvMQGUGmwh2QcgShVZLr1Bwdey11t2H5Ot8ZUstulCVrI6PwiJRTSWimX0ipECK4gqskczqPWwVMWlnz1/oC/K/xF0sCHX9ssYE9jBraFYl6BD44W7CImt50ynhhD77ECxptYfvV0 EziAsllN pMiFLqjLGbZ+voX0D3AA9AixJHjmb53o6wH3XdLppnQLocqAbHow9qz2v0jXzp76TDgbIBmISjPpZVA+aB9UXbIBh5SUSamj6Sgyc9WW/VP4WOSw2NV+ZzB6A34otRksbqGvyYfVBPRlkvbOf5/BgeqVpe6TOdG829OqQ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Before enabling MTE tag storage management, make sure that the CMA areas have been successfully activated. If a CMA area fails activation, the pages are kept as reserved. Reserved pages are never used by the page allocator. If this happens, the kernel would have to manage tag storage only for some of the memory, but not for all memory, and that would make the code unreasonably complicated. Choose to disable tag storage management altogether if a CMA area fails to be activated. Signed-off-by: Alexandru Elisei --- Changes since v2: * New patch. 
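In outline, the arch_initcall added below reduces to the following check (condensed sketch; the full error handling is in the diff):

        /* CMA keeps the pages reserved when a region fails activation. */
        for (i = 0; i < num_tag_regions; i++) {
                if (PageReserved(pfn_to_page(tag_regions[i].tag_range.start)))
                        return -EINVAL;         /* leave tag storage management disabled */
        }
        static_branch_enable(&tag_storage_enabled_key);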
arch/arm64/include/asm/mte_tag_storage.h | 12 ++++++ arch/arm64/kernel/mte_tag_storage.c | 50 ++++++++++++++++++++++++ 2 files changed, 62 insertions(+) diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 3c2cd29e053e..7b3f6bff8e6f 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -6,8 +6,20 @@ #define __ASM_MTE_TAG_STORAGE_H #ifdef CONFIG_ARM64_MTE_TAG_STORAGE + +DECLARE_STATIC_KEY_FALSE(tag_storage_enabled_key); + +static inline bool tag_storage_enabled(void) +{ + return static_branch_likely(&tag_storage_enabled_key); +} + void mte_init_tag_storage(void); #else +static inline bool tag_storage_enabled(void) +{ + return false; +} static inline void mte_init_tag_storage(void) { } diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index 9a1a8a45171e..d58c68b4a849 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -19,6 +19,8 @@ #include +__ro_after_init DEFINE_STATIC_KEY_FALSE(tag_storage_enabled_key); + struct tag_region { struct range mem_range; /* Memory associated with the tag storage, in PFNs. */ struct range tag_range; /* Tag storage memory, in PFNs. */ @@ -314,3 +316,51 @@ void __init mte_init_tag_storage(void) num_tag_regions = 0; pr_info("MTE tag storage region management disabled"); } + +static int __init mte_enable_tag_storage(void) +{ + struct range *tag_range; + struct cma *cma; + int i, ret; + + if (num_tag_regions == 0) + return 0; + + for (i = 0; i < num_tag_regions; i++) { + tag_range = &tag_regions[i].tag_range; + cma = tag_regions[i].cma; + /* + * CMA will keep the pages as reserved when the region fails + * activation. + */ + if (PageReserved(pfn_to_page(tag_range->start))) + goto out_disabled; + } + + static_branch_enable(&tag_storage_enabled_key); + pr_info("MTE tag storage region management enabled"); + + return 0; + +out_disabled: + for (i = 0; i < num_tag_regions; i++) { + tag_range = &tag_regions[i].tag_range; + cma = tag_regions[i].cma; + + if (PageReserved(pfn_to_page(tag_range->start))) + continue; + + /* Try really hard to reserve the tag storage. */ + ret = cma_alloc(cma, range_len(tag_range), 8, true); + /* + * Tag storage is still in use for data, memory and/or tag + * corruption will ensue. 
+ */ + WARN_ON_ONCE(ret); + } + num_tag_regions = 0; + pr_info("MTE tag storage region management disabled"); + + return -EINVAL; +} +arch_initcall(mte_enable_tag_storage); From patchwork Thu Jan 25 16:42:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531340 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 600D1C47258 for ; Thu, 25 Jan 2024 16:45:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E65EC6B009E; Thu, 25 Jan 2024 11:45:02 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id DEE6A6B00A5; Thu, 25 Jan 2024 11:45:02 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C1A8F8D0002; Thu, 25 Jan 2024 11:45:02 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id A71E36B009E for ; Thu, 25 Jan 2024 11:45:02 -0500 (EST) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 821AF40E1B for ; Thu, 25 Jan 2024 16:45:02 +0000 (UTC) X-FDA: 81718408044.29.72CB039 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf30.hostedemail.com (Postfix) with ESMTP id B6F1080020 for ; Thu, 25 Jan 2024 16:45:00 +0000 (UTC) Authentication-Results: imf30.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf30.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201100; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/O8Y9An5QD7wzC5lcRNLLXrUsdXXPQ+++czarKN3dvY=; b=8QR1svcUHa0U5BuCO4begmvynmdoeGTtKgjmFsppQQdUvrF98ET36kZ9ZiFEv4cOW1Gt0u 3YiFL+3mt47BlDnzdwl9enfTk9v2iMKwhJp3W/Y/lOzIWX0gxjQhtPYOv7WIS5jF5N8GHn y2rOIvBxg2IEWFxZezTBWGaW5/zzcDQ= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf30.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201100; a=rsa-sha256; cv=none; b=s1ftrMe7X4P9Lhc3nXDkSJDpQJTLLeCnoszpYFcHu2L6SakyF94Q43Shv5woy/0/IhnF52 Dz5tDxkETKsX/1U8UEE6pJYccRnVPpRFI8C4vuFNAkJ4nPFDZQn9BOI5IgbnRf6Op024+w Sr6oYcUS1IplCdCwaJPG+rgzDg+DyZc= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7B1D41691; Thu, 25 Jan 2024 08:45:44 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8C0543F5A1; Thu, 25 Jan 2024 08:44:54 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, 
mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 23/35] arm64: mte: Try to reserve tag storage in arch_alloc_page() Date: Thu, 25 Jan 2024 16:42:44 +0000 Message-Id: <20240125164256.4147-24-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: B6F1080020 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: hujrgcnx6qtwbq87xn1aoqo37y8d3739 X-HE-Tag: 1706201100-124093 X-HE-Meta: U2FsdGVkX1/TaLAUyWkNjW414H8dgsFZgP2Wb+zFIil5ePpzT5ButbnEByHYoBBCOkerhc+Yc2ljedIPzHY8M/hgaS7vpPiXvUzOJzybxlj+0G+rIj1EhwJ2pat/DfFd2vDGmZeidXeWRX6trZWxBgCANuEbq/Jck5NuwaEjLRtc2sAdoAQlUnuH7mSHsO0ZUBoPEM1n23dI6zPoK8EpZ277BosyDrawI8GxO+o04gpH5WdMOcA+6V6wkfzOtjtDoRY7xbQkkN9a9lkSsFF0+hcMHUea3LGAuOXIWSTCIPY080UNXOe5CWhe6caW/dIR11+TVCbu9+iEGS4U6fAfP359YJYCS4kGs7cSK4PrZRrAyDg66Oi64Jhj5LQWjZgL/OAFuPKGlsGiWGXxt+PpbmxUZnUHSdYwjz55xQKN/v8DHFHhfeyiAJ4Mj2Inwe0fCeMvJAlSdmF9/QoQIO8Y3xxWA60DmY5PypXnwHuZ87L1mw9Ez80rQK1JkUY74B6DafTgbbS7tMjTfUf11L3n/ruLCCfGfHD+5ZAr6fe8wfHRjAsMlpCMY48AxD3KjtXmhOYeyCqwyjEtZ3Lint4j+4hLyOjA09IAvakCphIK66fnL+XiTOqUA6u07Zf0r4Vg2WDs7dsHR7orRPTP0IxKdAY/4Z2YLTKNVlz2Jy/CFgbD4pactsZSPiHm3aQdorY89uQ6biNizDxLrX6n37eQ70V+/7gw44Otuxdlj+WII9uMNZboBSZmPtfCaxfgYfqAxgna4ajMJ0j1aKP40KiCK9k24muIrFdZj+s+F+FfguBS02UrmKp+/hOw1vW3CGJeWHf2wQsUd9rZOTxXz36wKPC/GX6+XoLXpwkX13aA/lr09ReYFk9B2L5ex/fio+4ijJxCtOJKxFBqRmSk/tnh2WpB8LTMWCcsZ1aW9Oyhcfbdzo9EDIm+kFR9xj1QJHAmK6K2Tjo/Gliod0Vuzmh sPphLrau eokHeLCz3RjseQY76fC8JKa42gGdN+RBT09rlFi1vA0wy4d/DmXzZik1wZWkB/NZtupkRt3lcESBlrlPZPTEFfbfYGaG9dpLQkkcR779rEmwp0UxdvLDyQ+JDUfoHcXQw7gVhg1MeCgyfHB4ifVbJjJdfqcBqQ+d5y5em02F3riSFpzCN3tuHLjP1LFwGxMkBaXjkE0n//MjhEus= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Reserve tag storage for a page that is being allocated as tagged. This is a best effort approach, and failing to reserve tag storage is allowed. When all the associated tagged pages have been freed, return the tag storage pages back to the page allocator, where they can be used again for data allocations. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * Based on rfc v2 patch #16 ("arm64: mte: Manage tag storage on page allocation"). * Fixed calculation of the number of associated tag storage blocks (Hyesoo Yu). * Tag storage is reserved in arch_alloc_page() instead of arch_prep_new_page(). 
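The allocation and free paths pair up as shown below (condensed from the hooks added by this patch; locking and error handling are in the diff):

        void arch_alloc_page(struct page *page, int order, gfp_t gfp)
        {
                /* Best effort: a failed reservation does not fail the allocation. */
                if (tag_storage_enabled() && alloc_requires_tag_storage(gfp))
                        reserve_tag_storage(page, order, gfp);
        }

        static inline void arch_free_pages_prepare(struct page *page, int order)
        {
                /* Return the tag storage blocks to the page allocator via CMA. */
                if (tag_storage_enabled() && page_mte_tagged(page))
                        free_tag_storage(page, order);
        }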
arch/arm64/include/asm/mte.h | 16 +- arch/arm64/include/asm/mte_tag_storage.h | 31 +++ arch/arm64/include/asm/page.h | 5 + arch/arm64/include/asm/pgtable.h | 19 ++ arch/arm64/kernel/mte_tag_storage.c | 234 +++++++++++++++++++++++ arch/arm64/mm/fault.c | 7 + fs/proc/page.c | 1 + include/linux/kernel-page-flags.h | 1 + include/linux/page-flags.h | 1 + include/trace/events/mmflags.h | 3 +- mm/huge_memory.c | 1 + 11 files changed, 316 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index 8034695b3dd7..6457b7899207 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -40,12 +40,24 @@ void mte_free_tag_buf(void *buf); #ifdef CONFIG_ARM64_MTE /* track which pages have valid allocation tags */ -#define PG_mte_tagged PG_arch_2 +#define PG_mte_tagged PG_arch_2 /* simple lock to avoid multiple threads tagging the same page */ -#define PG_mte_lock PG_arch_3 +#define PG_mte_lock PG_arch_3 +/* Track if a tagged page has tag storage reserved */ +#define PG_tag_storage_reserved PG_arch_4 + +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE +DECLARE_STATIC_KEY_FALSE(tag_storage_enabled_key); +extern bool page_tag_storage_reserved(struct page *page); +#endif static inline void set_page_mte_tagged(struct page *page) { +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE + /* Open code mte_tag_storage_enabled() */ + WARN_ON_ONCE(static_branch_likely(&tag_storage_enabled_key) && + !page_tag_storage_reserved(page)); +#endif /* * Ensure that the tags written prior to this function are visible * before the page flags update. diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 7b3f6bff8e6f..09f1318d924e 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -5,6 +5,12 @@ #ifndef __ASM_MTE_TAG_STORAGE_H #define __ASM_MTE_TAG_STORAGE_H +#ifndef __ASSEMBLY__ + +#include + +#include + #ifdef CONFIG_ARM64_MTE_TAG_STORAGE DECLARE_STATIC_KEY_FALSE(tag_storage_enabled_key); @@ -15,6 +21,15 @@ static inline bool tag_storage_enabled(void) } void mte_init_tag_storage(void); + +static inline bool alloc_requires_tag_storage(gfp_t gfp) +{ + return gfp & __GFP_TAGGED; +} +int reserve_tag_storage(struct page *page, int order, gfp_t gfp); +void free_tag_storage(struct page *page, int order); + +bool page_tag_storage_reserved(struct page *page); #else static inline bool tag_storage_enabled(void) { @@ -23,6 +38,22 @@ static inline bool tag_storage_enabled(void) static inline void mte_init_tag_storage(void) { } +static inline bool alloc_requires_tag_storage(struct page *page) +{ + return false; +} +static inline int reserve_tag_storage(struct page *page, int order, gfp_t gfp) +{ + return 0; +} +static inline void free_tag_storage(struct page *page, int order) +{ +} +static inline bool page_tag_storage_reserved(struct page *page) +{ + return true; +} #endif /* CONFIG_ARM64_MTE_TAG_STORAGE */ +#endif /* !__ASSEMBLY__ */ #endif /* __ASM_MTE_TAG_STORAGE_H */ diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 88bab032a493..3a656492f34a 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -35,6 +35,11 @@ void copy_highpage(struct page *to, struct page *from); void tag_clear_highpage(struct page *to); #define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE +void arch_alloc_page(struct page *, int order, gfp_t gfp); +#define HAVE_ARCH_ALLOC_PAGE +#endif + #define clear_user_page(page, vaddr, pg) clear_page(page) 
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 2499cc4fa4f2..f30466199a9b 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -10,6 +10,7 @@ #include #include +#include #include #include #include @@ -1069,6 +1070,24 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) mte_restore_page_tags_by_swp_entry(entry, &folio->page); } +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE + +#define __HAVE_ARCH_FREE_PAGES_PREPARE +static inline void arch_free_pages_prepare(struct page *page, int order) +{ + if (tag_storage_enabled() && page_mte_tagged(page)) + free_tag_storage(page, order); +} + +#define __HAVE_ARCH_ALLOC_CMA +static inline bool arch_alloc_cma(gfp_t gfp_mask) +{ + if (tag_storage_enabled() && alloc_requires_tag_storage(gfp_mask)) + return false; + return true; +} + +#endif /* CONFIG_ARM64_MTE_TAG_STORAGE */ #endif /* CONFIG_ARM64_MTE */ #define __HAVE_ARCH_CALC_VMA_GFP diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index d58c68b4a849..762c7c803a70 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -34,6 +34,31 @@ struct tag_region { static struct tag_region tag_regions[MAX_TAG_REGIONS]; static int num_tag_regions; +/* + * A note on locking. Reserving tag storage takes the tag_blocks_lock mutex, + * because alloc_contig_range() might sleep. + * + * Freeing tag storage takes the xa_lock spinlock with interrupts disabled + * because pages can be freed from non-preemptible contexts, including from an + * interrupt handler. + * + * Because tag storage can be freed from interrupt contexts, the xarray is + * defined with the XA_FLAGS_LOCK_IRQ flag to disable interrupts when calling + * xa_store(). This is done to prevent a deadlock with free_tag_storage() being + * called from an interrupt raised before xa_store() releases the xa_lock. + * + * All of the above means that reserve_tag_storage() cannot run concurrently + * with itself (no concurrent insertions), but it can run at the same time as + * free_tag_storage(). The first thing that reserve_tag_storage() does after + * taking the mutex is increase the refcount on all present tag storage blocks + * with the xa_lock held, to serialize against freeing the blocks. This is an + * optimization to avoid taking and releasing the xa_lock after each iteration + * if the refcount operation was moved inside the loop, where it would have had + * to be executed for each block. 
+ */ +static DEFINE_XARRAY_FLAGS(tag_blocks_reserved, XA_FLAGS_LOCK_IRQ); +static DEFINE_MUTEX(tag_blocks_lock); + static u32 __init get_block_size_pages(u32 block_size_bytes) { u32 a = PAGE_SIZE; @@ -364,3 +389,212 @@ static int __init mte_enable_tag_storage(void) return -EINVAL; } arch_initcall(mte_enable_tag_storage); + +static void page_set_tag_storage_reserved(struct page *page, int order) +{ + int i; + + for (i = 0; i < (1 << order); i++) + set_bit(PG_tag_storage_reserved, &(page + i)->flags); +} + +static void block_ref_add(unsigned long block, struct tag_region *region, int order) +{ + int count; + + count = min(1u << order, 32 * region->block_size_pages); + page_ref_add(pfn_to_page(block), count); +} + +static int block_ref_sub_return(unsigned long block, struct tag_region *region, int order) +{ + int count; + + count = min(1u << order, 32 * region->block_size_pages); + return page_ref_sub_return(pfn_to_page(block), count); +} + +static bool tag_storage_block_is_reserved(unsigned long block) +{ + return xa_load(&tag_blocks_reserved, block) != NULL; +} + +static int tag_storage_reserve_block(unsigned long block, struct tag_region *region, int order) +{ + int ret; + + ret = xa_err(xa_store(&tag_blocks_reserved, block, pfn_to_page(block), GFP_KERNEL)); + if (!ret) + block_ref_add(block, region, order); + + return ret; +} + +static int order_to_num_blocks(int order, u32 block_size_pages) +{ + int num_tag_storage_pages = max((1 << order) / 32, 1); + + return DIV_ROUND_UP(num_tag_storage_pages, block_size_pages); +} + +static int tag_storage_find_block_in_region(struct page *page, unsigned long *blockp, + struct tag_region *region) +{ + struct range *tag_range = ®ion->tag_range; + struct range *mem_range = ®ion->mem_range; + u64 page_pfn = page_to_pfn(page); + u64 block, block_offset; + + if (!(mem_range->start <= page_pfn && page_pfn <= mem_range->end)) + return -ERANGE; + + block_offset = (page_pfn - mem_range->start) / 32; + block = tag_range->start + rounddown(block_offset, region->block_size_pages); + + if (block + region->block_size_pages - 1 > tag_range->end) { + pr_err("Block 0x%llx-0x%llx is outside tag region 0x%llx-0x%llx\n", + PFN_PHYS(block), PFN_PHYS(block + region->block_size_pages + 1) - 1, + PFN_PHYS(tag_range->start), PFN_PHYS(tag_range->end + 1) - 1); + return -ERANGE; + } + *blockp = block; + + return 0; + +} + +static int tag_storage_find_block(struct page *page, unsigned long *block, + struct tag_region **region) +{ + int i, ret; + + for (i = 0; i < num_tag_regions; i++) { + ret = tag_storage_find_block_in_region(page, block, &tag_regions[i]); + if (ret == 0) { + *region = &tag_regions[i]; + return 0; + } + } + + return -EINVAL; +} + +bool page_tag_storage_reserved(struct page *page) +{ + return test_bit(PG_tag_storage_reserved, &page->flags); +} + +int reserve_tag_storage(struct page *page, int order, gfp_t gfp) +{ + unsigned long start_block, end_block; + struct tag_region *region; + unsigned long block; + unsigned long flags; + int ret = 0; + + VM_WARN_ON_ONCE(!preemptible()); + + if (page_tag_storage_reserved(page)) + return 0; + + /* + * __alloc_contig_migrate_range() ignores gfp when allocating the + * destination page for migration. Regardless, massage gfp flags and + * remove __GFP_TAGGED to avoid recursion in case gfp stops being + * ignored. 
+ */ + gfp &= ~__GFP_TAGGED; + if (!(gfp & __GFP_NORETRY)) + gfp |= __GFP_RETRY_MAYFAIL; + + ret = tag_storage_find_block(page, &start_block, ®ion); + if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page))) + return -EINVAL; + end_block = start_block + order_to_num_blocks(order, region->block_size_pages); + + mutex_lock(&tag_blocks_lock); + + /* Check again, this time with the lock held. */ + if (page_tag_storage_reserved(page)) + goto out_unlock; + + /* Make sure existing entries are not freed from out under out feet. */ + xa_lock_irqsave(&tag_blocks_reserved, flags); + for (block = start_block; block < end_block; block += region->block_size_pages) { + if (tag_storage_block_is_reserved(block)) + block_ref_add(block, region, order); + } + xa_unlock_irqrestore(&tag_blocks_reserved, flags); + + for (block = start_block; block < end_block; block += region->block_size_pages) { + /* Refcount incremented above. */ + if (tag_storage_block_is_reserved(block)) + continue; + + ret = cma_alloc_range(region->cma, block, region->block_size_pages, 3, gfp); + /* Should never happen. */ + VM_WARN_ON_ONCE(ret == -EEXIST); + if (ret) + goto out_error; + + ret = tag_storage_reserve_block(block, region, order); + if (ret) { + cma_release(region->cma, pfn_to_page(block), region->block_size_pages); + goto out_error; + } + } + + page_set_tag_storage_reserved(page, order); +out_unlock: + mutex_unlock(&tag_blocks_lock); + + return 0; + +out_error: + xa_lock_irqsave(&tag_blocks_reserved, flags); + for (block = start_block; block < end_block; block += region->block_size_pages) { + if (tag_storage_block_is_reserved(block) && + block_ref_sub_return(block, region, order) == 1) { + __xa_erase(&tag_blocks_reserved, block); + cma_release(region->cma, pfn_to_page(block), region->block_size_pages); + } + } + xa_unlock_irqrestore(&tag_blocks_reserved, flags); + + mutex_unlock(&tag_blocks_lock); + + return ret; +} + +void free_tag_storage(struct page *page, int order) +{ + unsigned long block, start_block, end_block; + struct tag_region *region; + unsigned long flags; + int ret; + + ret = tag_storage_find_block(page, &start_block, ®ion); + if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page))) + return; + + end_block = start_block + order_to_num_blocks(order, region->block_size_pages); + + xa_lock_irqsave(&tag_blocks_reserved, flags); + for (block = start_block; block < end_block; block += region->block_size_pages) { + if (WARN_ONCE(!tag_storage_block_is_reserved(block), + "Block 0x%lx is not reserved for pfn 0x%lx", block, page_to_pfn(page))) + continue; + + if (block_ref_sub_return(block, region, order) == 1) { + __xa_erase(&tag_blocks_reserved, block); + cma_release(region->cma, pfn_to_page(block), region->block_size_pages); + } + } + xa_unlock_irqrestore(&tag_blocks_reserved, flags); +} + +void arch_alloc_page(struct page *page, int order, gfp_t gfp) +{ + if (tag_storage_enabled() && alloc_requires_tag_storage(gfp)) + reserve_tag_storage(page, order, gfp); +} diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index c022e473c17c..1ffaeccecda2 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include #include @@ -950,6 +951,12 @@ gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp) void tag_clear_highpage(struct page *page) { + if (tag_storage_enabled() && !page_tag_storage_reserved(page)) { + /* Don't zero the tags if tag storage is not reserved */ + 
clear_page(page_address(page)); + return; + } + /* Newly allocated page, shouldn't have been tagged yet */ WARN_ON_ONCE(!try_page_mte_tagging(page)); mte_zero_clear_page_tags(page_address(page));
diff --git a/fs/proc/page.c b/fs/proc/page.c index 195b077c0fac..e7eb584a9234 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -221,6 +221,7 @@ u64 stable_page_flags(struct page *page) #ifdef CONFIG_ARCH_USES_PG_ARCH_X u |= kpf_copy_bit(k, KPF_ARCH_2, PG_arch_2); u |= kpf_copy_bit(k, KPF_ARCH_3, PG_arch_3); + u |= kpf_copy_bit(k, KPF_ARCH_4, PG_arch_4); #endif return u;
diff --git a/include/linux/kernel-page-flags.h b/include/linux/kernel-page-flags.h index 859f4b0c1b2b..4a0d719ffdd4 100644 --- a/include/linux/kernel-page-flags.h +++ b/include/linux/kernel-page-flags.h @@ -19,5 +19,6 @@ #define KPF_SOFTDIRTY 40 #define KPF_ARCH_2 41 #define KPF_ARCH_3 42 +#define KPF_ARCH_4 43 #endif /* LINUX_KERNEL_PAGE_FLAGS_H */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index b7237bce7446..03f03e6d735e 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -135,6 +135,7 @@ enum pageflags { #ifdef CONFIG_ARCH_USES_PG_ARCH_X PG_arch_2, PG_arch_3, + PG_arch_4, #endif __NR_PAGEFLAGS,
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 6ca0d5ed46c0..ba962fd10a2c 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -125,7 +125,8 @@ IF_HAVE_PG_HWPOISON(hwpoison) \ IF_HAVE_PG_IDLE(idle) \ IF_HAVE_PG_IDLE(young) \ IF_HAVE_PG_ARCH_X(arch_2) \ -IF_HAVE_PG_ARCH_X(arch_3) +IF_HAVE_PG_ARCH_X(arch_3) \ +IF_HAVE_PG_ARCH_X(arch_4) #define show_page_flags(flags) \ (flags) ? __print_flags(flags, "|", \
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2bad63a7ec16..47932539cc50 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2804,6 +2804,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail, #ifdef CONFIG_ARCH_USES_PG_ARCH_X (1L << PG_arch_2) | (1L << PG_arch_3) | + (1L << PG_arch_4) | #endif (1L << PG_dirty) | LRU_GEN_MASK | LRU_REFS_MASK));

From patchwork Thu Jan 25 16:42:45 2024
From: Alexandru Elisei
Subject: [PATCH RFC v3 24/35] arm64: mte: Perform CMOs for tag blocks
Date: Thu, 25 Jan 2024 16:42:45 +0000
Message-Id: <20240125164256.4147-25-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

Make sure the contents of the tag storage block are not corrupted by performing:
1. A tag dcache inval when the associated tagged pages are freed, to avoid dirty tag cache lines being evicted and corrupting the tag storage block when it's being used to store data.
2. A data cache inval when the tag storage block is being reserved, to ensure that no dirty data cache lines are present, which would trigger a writeback that could corrupt the tags stored in the block.

Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/assembler.h | 10 ++++++++++ arch/arm64/include/asm/mte_tag_storage.h | 2 ++ arch/arm64/kernel/mte_tag_storage.c | 11 +++++++++++ arch/arm64/lib/mte.S | 16 ++++++++++++++++ 4 files changed, 39 insertions(+)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 513787e43329..65fe88cce72b 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -310,6 +310,16 @@ alternative_cb_end lsl \reg, \reg, \tmp // actual cache line size .endm +/* + * tcache_line_size - get the safe tag cache line size across all CPUs + */ + .macro tcache_line_size, reg, tmp + read_ctr \tmp + ubfm \tmp, \tmp, #32, #37 // tag cache line size encoding + mov \reg, #4 // bytes per word + lsl \reg, \reg, \tmp // actual tag cache line size + .endm + /* * raw_icache_line_size - get the minimum I-cache line size on this CPU * from the CTR register.
diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 09f1318d924e..423b19e0cc46 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -11,6 +11,8 @@ #include +extern void dcache_inval_tags_poc(unsigned long start, unsigned long end); + #ifdef CONFIG_ARM64_MTE_TAG_STORAGE DECLARE_STATIC_KEY_FALSE(tag_storage_enabled_key); diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index 762c7c803a70..8c347f4855e4 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -17,6 +17,7 @@ #include #include +#include #include __ro_after_init DEFINE_STATIC_KEY_FALSE(tag_storage_enabled_key); @@ -421,8 +422,13 @@ static bool tag_storage_block_is_reserved(unsigned long block) static int tag_storage_reserve_block(unsigned long block, struct tag_region *region, int order) { + unsigned long block_va; int ret; + block_va = (unsigned long)page_to_virt(pfn_to_page(block)); + /* Avoid writeback of dirty data cache lines corrupting tags. */ + dcache_inval_poc(block_va, block_va + region->block_size_pages * PAGE_SIZE); + ret = xa_err(xa_store(&tag_blocks_reserved, block, pfn_to_page(block), GFP_KERNEL)); if (!ret) block_ref_add(block, region, order); @@ -570,6 +576,7 @@ void free_tag_storage(struct page *page, int order) { unsigned long block, start_block, end_block; struct tag_region *region; + unsigned long page_va; unsigned long flags; int ret; @@ -577,6 +584,10 @@ void free_tag_storage(struct page *page, int order) if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page))) return; + page_va = (unsigned long)page_to_virt(page); + /* Avoid writeback of dirty tag cache lines corrupting data. */ + dcache_inval_tags_poc(page_va, page_va + (PAGE_SIZE << order)); + end_block = start_block + order_to_num_blocks(order, region->block_size_pages); xa_lock_irqsave(&tag_blocks_reserved, flags); diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 9f623e9da09f..bc02b4e95062 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -175,3 +175,19 @@ SYM_FUNC_START(mte_copy_page_tags_from_buf) ret SYM_FUNC_END(mte_copy_page_tags_from_buf) + +/* + * dcache_inval_tags_poc(start, end) + * + * Ensure that any tags in the D-cache for the interval [start, end) + * are invalidated to PoC. 
+ * + * - start - virtual start address of region + * - end - virtual end address of region + */ +SYM_FUNC_START(__pi_dcache_inval_tags_poc) + tcache_line_size x2, x3 + dcache_by_myline_op igvac, sy, x0, x1, x2, x3 + ret +SYM_FUNC_END(__pi_dcache_inval_tags_poc) +SYM_FUNC_ALIAS(dcache_inval_tags_poc, __pi_dcache_inval_tags_poc)

From patchwork Thu Jan 25 16:42:46 2024
From: Alexandru Elisei
Subject: [PATCH RFC v3 25/35] arm64: mte: Reserve tag block for the zero page
Date: Thu, 25 Jan 2024 16:42:46 +0000
Message-Id: <20240125164256.4147-26-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

On arm64, when a page is mapped as tagged, its tags are zeroed for two reasons:
* To prevent leakage of tags to userspace.
* To allow userspace to access the contents of the page without having to set the tags explicitly (bits 59:56 of a userspace pointer are zero, which corresponds to tag 0b0000).

The zero page receives special treatment, as the tags for the zero page are zeroed when the MTE feature is being enabled. This is done for performance reasons - the tags are zeroed once, instead of every time the page is mapped.

When the tags for the zero page are zeroed, tag storage is not yet enabled. Reserve tag storage for the page immediately after tag storage management becomes enabled. Note that zeroing tags before tag storage management is enabled is safe to do because the tag storage pages are reserved at that point.
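For clarity, a condensed sketch of the resulting enable path (error handling and the tag region checks are omitted, and the comments are editorial; the two new calls are the ones added by the diff below):

static int __init mte_enable_tag_storage(void)
{
	/* ... tag storage regions have already been activated as CMA ... */

	/*
	 * The zero page had its tags zeroed when the MTE feature was
	 * enabled, before tag storage management existed; reserve its
	 * tag storage block now.
	 */
	reserve_tag_storage(ZERO_PAGE(0), 0, GFP_HIGHUSER);

	/* Only now does tag_storage_enabled() start returning true. */
	static_branch_enable(&tag_storage_enabled_key);

	pr_info("MTE tag storage region management enabled");
	return 0;
}
arch_initcall(mte_enable_tag_storage);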
Signed-off-by: Alexandru Elisei --- Changes since rfc v2:
* Expanded commit message (David Hildenbrand)

arch/arm64/kernel/mte_tag_storage.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index 8c347f4855e4..1c8469781870 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -363,6 +363,8 @@ static int __init mte_enable_tag_storage(void) goto out_disabled; } + reserve_tag_storage(ZERO_PAGE(0), 0, GFP_HIGHUSER); + static_branch_enable(&tag_storage_enabled_key); pr_info("MTE tag storage region management enabled");

From patchwork Thu Jan 25 16:42:47 2024
From: Alexandru Elisei
Subject: [PATCH RFC v3 26/35] arm64: mte: Use fault-on-access to reserve missing tag storage
Date: Thu, 25 Jan 2024 16:42:47 +0000
Message-Id: <20240125164256.4147-27-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

There are three situations in which a page that is to be mapped as tagged doesn't have the corresponding tag storage reserved:
* reserve_tag_storage() failed.
* The allocation didn't specify __GFP_TAGGED (this can happen during migration, for example).
* The page was mapped in a non-MTE enabled VMA, then an mprotect(PROT_MTE) enabled MTE.
If a page that is about to be mapped as tagged doesn't have tag storage reserved, map it with the PAGE_FAULT_ON_ACCESS protection to trigger a fault the next time it is accessed, and then reserve tag storage when the fault is handled. If tag storage cannot be reserved, then the page is migrated out of the VMA.

Tag storage pages (which cannot be tagged) mapped in an MTE enabled VMA will be handled in a subsequent patch.

Signed-off-by: Alexandru Elisei --- Changes since rfc v2:
* New patch, loosely based on the arm64 code from the rfc v2 patch #19 ("mm: mprotect: Introduce PAGE_FAULT_ON_ACCESS for mprotect(PROT_MTE)")
* All the common code has been moved back to the arch independent function handle_{huge_pmd,pte}_protnone() (David Hildenbrand).
* Page is migrated if tag storage cannot be reserved after exhausting all attempts (Hyesoo Yu).
* Moved folio_isolate_lru() declaration and struct migration_target_control to headers in include/linux (Peter Collingbourne).

arch/arm64/Kconfig | 1 + arch/arm64/include/asm/mte.h | 4 +- arch/arm64/include/asm/mte_tag_storage.h | 3 + arch/arm64/include/asm/pgtable-prot.h | 2 + arch/arm64/include/asm/pgtable.h | 44 ++++++++--- arch/arm64/kernel/mte.c | 11 ++- arch/arm64/mm/fault.c | 98 ++++++++++++++++++++++++ include/linux/memcontrol.h | 2 + include/linux/migrate.h | 8 +- include/linux/migrate_mode.h | 1 + mm/internal.h | 6 -- 11 files changed, 156 insertions(+), 24 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 6f65e9005dc9..088e30fc6d12 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2085,6 +2085,7 @@ config ARM64_MTE if ARM64_MTE config ARM64_MTE_TAG_STORAGE bool + select ARCH_HAS_FAULT_ON_ACCESS select CONFIG_CMA help Adds support for dynamic management of the memory used by the hardware
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index 6457b7899207..70dc2e409070 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -107,7 +107,7 @@ static inline bool try_page_mte_tagging(struct page *page) } void mte_zero_clear_page_tags(void *addr); -void mte_sync_tags(pte_t pte, unsigned int nr_pages); +void mte_sync_tags(pte_t *pteval, unsigned int nr_pages); void mte_copy_page_tags(void *kto, const void *kfrom); void mte_thread_init_user(void); void mte_thread_switch(struct task_struct *next); @@ -139,7 +139,7 @@ static inline bool try_page_mte_tagging(struct page *page) static inline void mte_zero_clear_page_tags(void *addr) { } -static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages) +static inline void mte_sync_tags(pte_t *pteval, unsigned int nr_pages) { } static inline void mte_copy_page_tags(void *kto, const void *kfrom)
diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 423b19e0cc46..6d0f6ffcfdd6 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -32,6 +32,9 @@ int reserve_tag_storage(struct page *page, int order, gfp_t gfp); void free_tag_storage(struct page *page, int order); bool page_tag_storage_reserved(struct page *page); + +vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf, + bool *map_pte); #else static inline bool tag_storage_enabled(void) {
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index 483dbfa39c4c..1820e29244f8 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -19,6 +19,7 @@ #define PTE_SPECIAL (_AT(pteval_t,
1) << 56) #define PTE_DEVMAP (_AT(pteval_t, 1) << 57) #define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */ +#define PTE_TAG_STORAGE_NONE (_AT(pteval_t, 1) << 60) /* only when PTE_PROT_NONE */ /* * This bit indicates that the entry is present i.e. pmd_page() @@ -96,6 +97,7 @@ extern bool arm64_use_ng_mappings; }) #define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN) +#define PAGE_FAULT_ON_ACCESS __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_TAG_STORAGE_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN) /* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */ #define PAGE_SHARED __pgprot(_PAGE_SHARED) #define PAGE_SHARED_EXEC __pgprot(_PAGE_SHARED_EXEC) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index f30466199a9b..0174e292f890 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -326,10 +326,10 @@ static inline void __check_safe_pte_update(struct mm_struct *mm, pte_t *ptep, __func__, pte_val(old_pte), pte_val(pte)); } -static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages) +static inline void __sync_cache_and_tags(pte_t *pteval, unsigned int nr_pages) { - if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte)) - __sync_icache_dcache(pte); + if (pte_present(*pteval) && pte_user_exec(*pteval) && !pte_special(*pteval)) + __sync_icache_dcache(*pteval); /* * If the PTE would provide user space access to the tags associated @@ -337,9 +337,9 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages) * pte_access_permitted() returns false for exec only mappings, they * don't expose tags (instruction fetches don't check tags). 
*/ - if (system_supports_mte() && pte_access_permitted(pte, false) && - !pte_special(pte) && pte_tagged(pte)) - mte_sync_tags(pte, nr_pages); + if (system_supports_mte() && pte_access_permitted(*pteval, false) && + !pte_special(*pteval) && pte_tagged(*pteval)) + mte_sync_tags(pteval, nr_pages); } static inline void set_ptes(struct mm_struct *mm, @@ -347,7 +347,7 @@ static inline void set_ptes(struct mm_struct *mm, pte_t *ptep, pte_t pte, unsigned int nr) { page_table_check_ptes_set(mm, ptep, pte, nr); - __sync_cache_and_tags(pte, nr); + __sync_cache_and_tags(&pte, nr); for (;;) { __check_safe_pte_update(mm, ptep, pte); @@ -444,7 +444,7 @@ static inline pgprot_t pte_pgprot(pte_t pte) return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte)); } -#ifdef CONFIG_NUMA_BALANCING +#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_ARCH_HAS_FAULT_ON_ACCESS) /* * See the comment in include/linux/pgtable.h */ @@ -459,6 +459,28 @@ static inline int pmd_protnone(pmd_t pmd) } #endif +#ifdef CONFIG_ARCH_HAS_FAULT_ON_ACCESS +static inline bool arch_fault_on_access_pte(pte_t pte) +{ + return pte_protnone(pte) && (pte_val(pte) & PTE_TAG_STORAGE_NONE); +} + +static inline bool arch_fault_on_access_pmd(pmd_t pmd) +{ + return arch_fault_on_access_pte(pmd_pte(pmd)); +} + +static inline vm_fault_t arch_handle_folio_fault_on_access(struct folio *folio, + struct vm_fault *vmf, + bool *map_pte) +{ + if (tag_storage_enabled()) + return handle_folio_missing_tag_storage(folio, vmf, map_pte); + + return VM_FAULT_SIGBUS; +} +#endif /* CONFIG_ARCH_HAS_FAULT_ON_ACCESS */ + #define pmd_present_invalid(pmd) (!!(pmd_val(pmd) & PMD_PRESENT_INVALID)) static inline int pmd_present(pmd_t pmd) @@ -533,7 +555,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long __always_unused addr, pte_t *ptep, pte_t pte, unsigned int nr) { - __sync_cache_and_tags(pte, nr); + __sync_cache_and_tags(&pte, nr); __check_safe_pte_update(mm, ptep, pte); set_pte(ptep, pte); } @@ -828,8 +850,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) * in MAIR_EL1. The mask below has to include PTE_ATTRINDX_MASK. 
*/ const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY | - PTE_PROT_NONE | PTE_VALID | PTE_WRITE | PTE_GP | - PTE_ATTRINDX_MASK; + PTE_PROT_NONE | PTE_TAG_STORAGE_NONE | PTE_VALID | + PTE_WRITE | PTE_GP | PTE_ATTRINDX_MASK; /* preserve the hardware dirty information */ if (pte_hw_dirty(pte)) pte = set_pte_bit(pte, __pgprot(PTE_DIRTY)); diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c index a41ef3213e1e..faf09da3400a 100644 --- a/arch/arm64/kernel/mte.c +++ b/arch/arm64/kernel/mte.c @@ -35,13 +35,18 @@ DEFINE_STATIC_KEY_FALSE(mte_async_or_asymm_mode); EXPORT_SYMBOL_GPL(mte_async_or_asymm_mode); #endif -void mte_sync_tags(pte_t pte, unsigned int nr_pages) +void mte_sync_tags(pte_t *pteval, unsigned int nr_pages) { - struct page *page = pte_page(pte); + struct page *page = pte_page(*pteval); unsigned int i; - /* if PG_mte_tagged is set, tags have already been initialised */ for (i = 0; i < nr_pages; i++, page++) { + if (tag_storage_enabled() && !page_tag_storage_reserved(page)) { + *pteval = pte_modify(*pteval, PAGE_FAULT_ON_ACCESS); + continue; + } + + /* if PG_mte_tagged is set, tags have already been initialised */ if (try_page_mte_tagging(page)) { mte_clear_page_tags(page_address(page)); set_page_mte_tagged(page); diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 1ffaeccecda2..1db3adb6499f 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -12,6 +12,8 @@ #include #include #include +#include +#include #include #include #include @@ -19,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -962,3 +965,98 @@ void tag_clear_highpage(struct page *page) mte_zero_clear_page_tags(page_address(page)); set_page_mte_tagged(page); } + +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE + +#define MR_TAG_STORAGE MR_ARCH_1 + +/* + * Called with an elevated reference on the folio. + * Returns with the elevated reference dropped. + */ +static int replace_folio_with_tagged(struct folio *folio) +{ + struct migration_target_control mtc = { + .nid = NUMA_NO_NODE, + .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_TAGGED, + }; + LIST_HEAD(foliolist); + int ret, tries; + + lru_cache_disable(); + + if (!folio_isolate_lru(folio)) { + lru_cache_enable(); + folio_put(folio); + return -EAGAIN; + } + + /* Isolate just grabbed another reference, drop ours. */ + folio_put(folio); + list_add_tail(&folio->lru, &foliolist); + + tries = 3; + while (tries--) { + ret = migrate_pages(&foliolist, alloc_migration_target, NULL, (unsigned long)&mtc, + MIGRATE_SYNC, MR_TAG_STORAGE, NULL); + if (ret != -EBUSY) + break; + } + + if (ret != 0) + putback_movable_pages(&foliolist); + + lru_cache_enable(); + + return ret; +} + +vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf, + bool *map_pte) +{ + struct vm_area_struct *vma = vmf->vma; + int ret = 0; + + *map_pte = false; + + /* + * This should never happen, once a VMA has been marked as tagged, that + * cannot be changed. + */ + if (WARN_ON_ONCE(!(vma->vm_flags & VM_MTE))) + goto out_map; + + /* + * The folio is probably being isolated for migration, replay the fault + * to give time for the entry to be replaced by a migration pte. + */ + if (unlikely(is_migrate_isolate_page(folio_page(folio, 0)))) + goto out_retry; + + ret = reserve_tag_storage(folio_page(folio, 0), folio_order(folio), GFP_HIGHUSER_MOVABLE); + if (ret) { + /* replace_folio_with_tagged() is expensive, try to avoid it. 
*/ + if (fault_flag_allow_retry_first(vmf->flags)) + goto out_retry; + + replace_folio_with_tagged(folio); + return 0; + } + +out_map: + folio_put(folio); + *map_pte = true; + return 0; + +out_retry: + folio_put(folio); + if (fault_flag_allow_retry_first(vmf->flags)) { + /* Flag set by GUP. */ + if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) + release_fault_lock(vmf); + return VM_FAULT_RETRY; + } + /* Replay the fault. */ + return 0; +} +#endif
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 20ff87f8e001..9c0b559f54f5 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1633,6 +1633,8 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, } #endif /* CONFIG_MEMCG */ +bool folio_isolate_lru(struct folio *folio); + static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx) { __mod_lruvec_kmem_state(p, idx, 1);
diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 2ce13e8a309b..f954e19bd9d1 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -10,8 +10,6 @@ typedef struct folio *new_folio_t(struct folio *folio, unsigned long private); typedef void free_folio_t(struct folio *folio, unsigned long private); -struct migration_target_control; - /* * Return values from addresss_space_operations.migratepage(): * - negative errno on page migration failure; @@ -57,6 +55,12 @@ struct movable_operations { void (*putback_page)(struct page *); }; +struct migration_target_control { + int nid; /* preferred node id */ + nodemask_t *nmask; + gfp_t gfp_mask; +}; + /* Defined in mm/debug.c: */ extern const char *migrate_reason_names[MR_TYPES];
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h index f37cc03f9369..c6c5c7726d26 100644 --- a/include/linux/migrate_mode.h +++ b/include/linux/migrate_mode.h @@ -29,6 +29,7 @@ enum migrate_reason { MR_CONTIG_RANGE, MR_LONGTERM_PIN, MR_DEMOTION, + MR_ARCH_1, MR_TYPES };
diff --git a/mm/internal.h b/mm/internal.h index f309a010d50f..cb76cf0928f5 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -952,12 +952,6 @@ static inline bool is_migrate_highatomic_page(struct page *page) void setup_zone_pageset(struct zone *zone); -struct migration_target_control { - int nid; /* preferred node id */ - nodemask_t *nmask; - gfp_t gfp_mask; -}; - /* * mm/filemap.c */

From patchwork Thu Jan 25 16:42:48 2024
From: Alexandru Elisei
Subject: [PATCH RFC v3 27/35] arm64: mte: Handle tag storage pages mapped in an MTE VMA
Date: Thu, 25 Jan 2024 16:42:48 +0000
Message-Id: <20240125164256.4147-28-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

Tag storage pages cannot be tagged. When such a page is mapped in an MTE-enabled VMA, migrate it out directly and don't try to reserve tag storage for it.

Signed-off-by: Alexandru Elisei --- arch/arm64/include/asm/mte_tag_storage.h | 1 + arch/arm64/kernel/mte_tag_storage.c | 15 +++++++++++++++ arch/arm64/mm/fault.c | 11 +++++++++-- 3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 6d0f6ffcfdd6..50bdae94cf71 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -32,6 +32,7 @@ int reserve_tag_storage(struct page *page, int order, gfp_t gfp); void free_tag_storage(struct page *page, int order); bool page_tag_storage_reserved(struct page *page); +bool page_is_tag_storage(struct page *page); vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf, bool *map_pte);
diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index 1c8469781870..afe2bb754879 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -492,6 +492,21 @@ bool page_tag_storage_reserved(struct page *page) return test_bit(PG_tag_storage_reserved, &page->flags); } +bool page_is_tag_storage(struct page *page) +{ + unsigned long pfn = page_to_pfn(page); + struct range *tag_range; + int i; + + for (i = 0; i < num_tag_regions; i++) { + tag_range = &tag_regions[i].tag_range; + if (tag_range->start <= pfn && pfn <= tag_range->end) + return true; + } + + return false; +} + int reserve_tag_storage(struct page *page, int order, gfp_t gfp) { unsigned long start_block, end_block;
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 1db3adb6499f..01450ab91a87 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -1014,6 +1014,7 @@ static int replace_folio_with_tagged(struct folio *folio) vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf, bool *map_pte) { + bool is_tag_storage = page_is_tag_storage(folio_page(folio, 0)); struct vm_area_struct *vma = vmf->vma; int ret = 0; @@ -1033,12 +1034,18 @@ vm_fault_t
handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault if (unlikely(is_migrate_isolate_page(folio_page(folio, 0)))) goto out_retry; - ret = reserve_tag_storage(folio_page(folio, 0), folio_order(folio), GFP_HIGHUSER_MOVABLE); - if (ret) { + if (!is_tag_storage) { + ret = reserve_tag_storage(folio_page(folio, 0), folio_order(folio), + GFP_HIGHUSER_MOVABLE); + if (!ret) + goto out_map; + /* replace_folio_with_tagged() is expensive, try to avoid it. */ if (fault_flag_allow_retry_first(vmf->flags)) goto out_retry; + } + if (ret || is_tag_storage) { replace_folio_with_tagged(folio); return 0; } From patchwork Thu Jan 25 16:42:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39432C48260 for ; Thu, 25 Jan 2024 16:45:32 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C1C166B00A9; Thu, 25 Jan 2024 11:45:31 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id BA5296B00AA; Thu, 25 Jan 2024 11:45:31 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9F8286B00AB; Thu, 25 Jan 2024 11:45:31 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 8A0916B00A9 for ; Thu, 25 Jan 2024 11:45:31 -0500 (EST) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 41E1640E21 for ; Thu, 25 Jan 2024 16:45:31 +0000 (UTC) X-FDA: 81718409262.30.F1CE84C Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf24.hostedemail.com (Postfix) with ESMTP id 62B0C180009 for ; Thu, 25 Jan 2024 16:45:29 +0000 (UTC) Authentication-Results: imf24.hostedemail.com; dkim=none; spf=pass (imf24.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201129; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/tlf6XHMLJA0XeAvsRCyHUBEcI/XVj52JNiW8wN50cE=; b=iRfNzugHuiNlGRSm5nuVvPp8Pebw/MD26BrFUA+fiD0V5GoiKSiV6kzWIyt8qmtgl8zXNz CVPVW0w9ZhcAaDhMAtMDqOtdm9djqMcC8Bec4tJZcGpuYTl3N9jxgVLeb3elQvJKyHJ6GE edcdfL1zDbpqBjM4k4QCONJL1HfS2kY= ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=none; spf=pass (imf24.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201129; a=rsa-sha256; cv=none; b=BeziXNg+iptlZRNdnQheNkhhpL4PShZpjr9Ic/ZVAocVScpjb7CSIuv8xFtBwxdXy2jTUq 0J2vkexxS9heZgZcyun0CnzpKTKmKyhqK+yQLYxkz1ChfOltAv02E717bh9OKhO++gj3Zx fYfvX1J5r5YtGCknSW3QsNfZzHgUSkY= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4406316F2; Thu, 
25 Jan 2024 08:46:13 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 59B4A3F5A1; Thu, 25 Jan 2024 08:45:23 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 28/35] arm64: mte: swap: Handle tag restoring when missing tag storage Date: Thu, 25 Jan 2024 16:42:49 +0000 Message-Id: <20240125164256.4147-29-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 62B0C180009 X-Rspam-User: X-Stat-Signature: dzqbawdozwjkctqn3wds9w9bjai4uyii X-Rspamd-Server: rspam01 X-HE-Tag: 1706201129-284637 X-HE-Meta: U2FsdGVkX18fJyOLiHDP794YK4ykWML26Qo2ZtpveGYCNmT1BSKBOR8cLO4D+hsT3BDyXU3QaCalTvluYIYsw+Y9vWTyt8VbxNprBawXtPOp3/6kA00LkpWr9lxdPHf6P2zah/VKC/DFgP/b44GWnNwKxULe3gOjD9RuJ7Mcjhy8x5K5s85mXQc0vZOkroJjczWb0jf192FgkfEUutW5/AmbMq4vqmVg2cIANCYviSaNdcyPtZaPrtRLVHMcG3D8kz0Go5VE52WxsNI4u1oBA3I3JUVFnf1Ur+3GDXa4PRfwncKLu/UHQbWQYT39VCm/Erkiwg5GnJunnCAevRGEA+OrPY9IO0WR8YXhBzAhkjHWAW+d4zzb9oRC9DdWNV7J7ate/mlfRYIHaxgGbbwblLFHFhhrBCPduVWCUNe0TggrAj+cvQQQNbj3j0vACO10YMOS8ITZX9TPglEudgv8EUXvcwHES5WI7ZpUyrI6cWsJnz0KiJDGpjBsJUjb49OAUqx4cmz+O93Y4TzHkuaSPQk35M1Fk6P0JhvSTKHgktH1qN01RdB/4JC8LyGFbLwjP468j2hoe/sWMPbC3AIFggCO4/zVrMHz97tdNUmOYuA6VPAAGdDspyCvdowhC1wIje8pE+fJpxDeMTw7wROfyCHJ63Db8zpCEKLEqbFa0zp7JLGnV9tan3CZgd930DQn09jn8XZ/vRf3Ojkt15sAN86mIiDxbaYtrz+42yE8Q0wa71FQd1pXRwKv1W0Ei3YYxxjQuEFbjJeZaFhtxk3Ev3K2T78Q6d259kGTcmwN5GqG8cXOsSHfbM+LiuFaMdwYZGq1UD1DdPHSGqixASK0JqwDqHz72J3JvxyWs3LvprmybzOXNHlIzPougnN0hDxWAAWgkw3ilwYZZ8VYiCBBbaclFoA+GwpSUqmq3yPd4ozj2BwYLGCgQS+ZNsmb9n5iMak4rKbIGohgto5sSkl mIHUAIzZ 1fVqZNT9SAfqqbvolsGG7QIKzln8UV/uOMWM+/eeI/P4L5o2e3F1TnZb9WTYAyMtrzKar0+5vHGsSFEhpl96WqwXNV6z/oYQGYiM4wzDCJl4rgdb4aEaPkKGAhR8GfJaI3M46HQ93/MOZhjXfLBGV9qKA/Cy0ILtI+aYu X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Linux restores tags when a page is swapped in and there are tags associated with the swap entry which the new page will replace. The saved tags are restored even if the page will not be mapped as tagged, to protect against cases where the page is shared between different VMAs, and is tagged in some, but untagged in others. By using this approach, the process can still access the correct tags following an mprotect(PROT_MTE) on the non-MTE enabled VMA. 
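As a rough userspace illustration of that sharing scenario (not part of this patch: the memfd-backed double mapping is just an assumed way to get the same page mapped both tagged and untagged, error handling is omitted, and the PROT_MTE fallback definition is the arm64 uapi value):

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20            /* arm64 value from asm/mman.h */
#endif

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        int fd = memfd_create("mte-shared", 0);

        ftruncate(fd, psz);

        /* Two views of the same page: one tagged, one untagged. */
        char *tagged = mmap(NULL, psz, PROT_READ | PROT_WRITE | PROT_MTE,
                            MAP_SHARED, fd, 0);
        char *plain = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);

        /* Writing through the MTE view gives the page valid tags. */
        memset(tagged, 0, psz);

        /*
         * If the page is swapped out and later faulted back in through the
         * untagged view, the kernel still restores the saved tags, so turning
         * the second mapping into a tagged one afterwards observes consistent
         * tags.
         */
        mprotect(plain, psz, PROT_READ | PROT_WRITE | PROT_MTE);
        return 0;
}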
But this poses a challenge for managing tag storage: in the scenario above, when a new page is allocated to be swapped in for the process where it will be mapped as untagged, the corresponding tag storage block is not reserved. mte_restore_page_tags_by_swp_entry(), when it restores the saved tags, will overwrite data in the tag storage block associated with the new page, leading to data corruption if the block is in use by a process. Get around this issue by saving the tags in a new xarray, this time indexed by the page pfn, and then restoring them when tag storage is reserved for the page. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * Restore saved tags **before** setting the PG_tag_storage_reserved bit to eliminate a brief window of opportunity where userspace can access uninitialized tags (Peter Collingbourne). arch/arm64/include/asm/mte_tag_storage.h | 8 ++ arch/arm64/include/asm/pgtable.h | 11 +++ arch/arm64/kernel/mte_tag_storage.c | 12 ++- arch/arm64/mm/mteswap.c | 110 +++++++++++++++++++++++ 4 files changed, 140 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 50bdae94cf71..40590a8c3748 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -36,6 +36,14 @@ bool page_is_tag_storage(struct page *page); vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf, bool *map_pte); +vm_fault_t mte_try_transfer_swap_tags(swp_entry_t entry, struct page *page); + +void tags_by_pfn_lock(void); +void tags_by_pfn_unlock(void); + +void *mte_erase_tags_for_pfn(unsigned long pfn); +bool mte_save_tags_for_pfn(void *tags, unsigned long pfn); +void mte_restore_tags_for_pfn(unsigned long start_pfn, int order); #else static inline bool tag_storage_enabled(void) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 0174e292f890..87ae59436162 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1085,6 +1085,17 @@ static inline void arch_swap_invalidate_area(int type) mte_invalidate_tags_area_by_swp_entry(type); } +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE +#define __HAVE_ARCH_SWAP_PREPARE_TO_RESTORE +static inline vm_fault_t arch_swap_prepare_to_restore(swp_entry_t entry, + struct folio *folio) +{ + if (tag_storage_enabled()) + return mte_try_transfer_swap_tags(entry, &folio->page); + return 0; +} +#endif + #define __HAVE_ARCH_SWAP_RESTORE static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) { diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index afe2bb754879..ac7b9c9c585c 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -567,6 +567,7 @@ int reserve_tag_storage(struct page *page, int order, gfp_t gfp) } } + mte_restore_tags_for_pfn(page_to_pfn(page), order); page_set_tag_storage_reserved(page, order); out_unlock: mutex_unlock(&tag_blocks_lock); @@ -595,7 +596,8 @@ void free_tag_storage(struct page *page, int order) struct tag_region *region; unsigned long page_va; unsigned long flags; - int ret; + void *tags; + int i, ret; ret = tag_storage_find_block(page, &start_block, ®ion); if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page))) @@ -605,6 +607,14 @@ void free_tag_storage(struct page *page, int order) /* Avoid writeback of dirty tag cache lines corrupting data. 
*/ dcache_inval_tags_poc(page_va, page_va + (PAGE_SIZE << order)); + tags_by_pfn_lock(); + for (i = 0; i < (1 << order); i++) { + tags = mte_erase_tags_for_pfn(page_to_pfn(page + i)); + if (unlikely(tags)) + mte_free_tag_buf(tags); + } + tags_by_pfn_unlock(); + end_block = start_block + order_to_num_blocks(order, region->block_size_pages); xa_lock_irqsave(&tag_blocks_reserved, flags); diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c index 2a43746b803f..e11495fa3c18 100644 --- a/arch/arm64/mm/mteswap.c +++ b/arch/arm64/mm/mteswap.c @@ -20,6 +20,112 @@ void mte_free_tag_buf(void *buf) kfree(buf); } +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE +static DEFINE_XARRAY(tags_by_pfn); + +void tags_by_pfn_lock(void) +{ + xa_lock(&tags_by_pfn); +} + +void tags_by_pfn_unlock(void) +{ + xa_unlock(&tags_by_pfn); +} + +void *mte_erase_tags_for_pfn(unsigned long pfn) +{ + return __xa_erase(&tags_by_pfn, pfn); +} + +bool mte_save_tags_for_pfn(void *tags, unsigned long pfn) +{ + void *entry; + int ret; + + ret = xa_reserve(&tags_by_pfn, pfn, GFP_KERNEL); + if (ret) + return true; + + tags_by_pfn_lock(); + + if (page_tag_storage_reserved(pfn_to_page(pfn))) { + xa_release(&tags_by_pfn, pfn); + tags_by_pfn_unlock(); + return false; + } + + entry = __xa_store(&tags_by_pfn, pfn, tags, GFP_ATOMIC); + if (xa_is_err(entry)) { + xa_release(&tags_by_pfn, pfn); + goto out_unlock; + } else if (entry) { + mte_free_tag_buf(entry); + } + +out_unlock: + tags_by_pfn_unlock(); + return true; +} + +void mte_restore_tags_for_pfn(unsigned long start_pfn, int order) +{ + struct page *page = pfn_to_page(start_pfn); + unsigned long pfn; + void *tags; + + tags_by_pfn_lock(); + + for (pfn = start_pfn; pfn < start_pfn + (1 << order); pfn++, page++) { + tags = mte_erase_tags_for_pfn(pfn); + if (unlikely(tags)) { + /* + * Mark the page as tagged so mte_sync_tags() doesn't + * clear the tags. + */ + WARN_ON_ONCE(!try_page_mte_tagging(page)); + mte_copy_page_tags_from_buf(page_address(page), tags); + set_page_mte_tagged(page); + mte_free_tag_buf(tags); + } + } + + tags_by_pfn_unlock(); +} + +/* + * Note on locking: swap in/out is done with the folio locked, which eliminates + * races with mte_save/restore_page_tags_by_swp_entry. + */ +vm_fault_t mte_try_transfer_swap_tags(swp_entry_t entry, struct page *page) +{ + void *swap_tags, *pfn_tags; + bool saved; + + /* + * mte_restore_page_tags_by_swp_entry() will take care of copying the + * tags over. + */ + if (likely(page_mte_tagged(page) || page_tag_storage_reserved(page))) + return 0; + + swap_tags = xa_load(&tags_by_swp_entry, entry.val); + if (!swap_tags) + return 0; + + pfn_tags = mte_allocate_tag_buf(); + if (!pfn_tags) + return VM_FAULT_OOM; + + memcpy(pfn_tags, swap_tags, MTE_PAGE_TAG_STORAGE_SIZE); + saved = mte_save_tags_for_pfn(pfn_tags, page_to_pfn(page)); + if (!saved) + mte_free_tag_buf(pfn_tags); + + return 0; +} +#endif + int mte_save_page_tags_by_swp_entry(struct page *page) { void *tags, *ret; @@ -54,6 +160,10 @@ void mte_restore_page_tags_by_swp_entry(swp_entry_t entry, struct page *page) if (!tags) return; + /* Tags will be restored when tag storage is reserved. 
*/ + if (tag_storage_enabled() && unlikely(!page_tag_storage_reserved(page))) + return; + if (try_page_mte_tagging(page)) { mte_copy_page_tags_from_buf(page_address(page), tags); set_page_mte_tagged(page); From patchwork Thu Jan 25 16:42:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531346 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD5C6C47258 for ; Thu, 25 Jan 2024 16:45:37 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 64FAF6B0098; Thu, 25 Jan 2024 11:45:37 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 5D8EB8D0002; Thu, 25 Jan 2024 11:45:37 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 453146B00AB; Thu, 25 Jan 2024 11:45:37 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 30ACD6B0098 for ; Thu, 25 Jan 2024 11:45:37 -0500 (EST) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id EE8EE1C1721 for ; Thu, 25 Jan 2024 16:45:36 +0000 (UTC) X-FDA: 81718409472.14.C507AAE Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf05.hostedemail.com (Postfix) with ESMTP id 3861C100020 for ; Thu, 25 Jan 2024 16:45:35 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf05.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201135; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=O83xQ4m/aJM+qp7cmwtNg+48CMmBJybDD269x+AFojc=; b=dcy6gf6kWDNaeGRrbJRzouA0+qJiVEkS9g/xz8+OYtQsoId8DOwG0kn5aUPxHCVOjkAW/E lwbmN0Z9hUOu2N5XKKTygkTR4GupnTg1pal11kzkHgN0lzpgrSLviAd3PQ7YvF6Ed2POz/ 6GLnXAU0Qc8tYgfsBOjlSft8pxFdCXw= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf05.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201135; a=rsa-sha256; cv=none; b=yphkySWZ4mqQAhsinqJQTW1JRSkiCpCMidSoSQuyTKYSp2lusCHXMDRWSFYysFlaiM5jBg abK3bsGRYMWRKWQWEkMZ0miljn37k/3SgmjH+fqpJdcUKmG7VDlNK74kiayynErkVt3Rra gFTt5AvvwsT46RoBqXH9Wh4ErkF8kqA= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0743516F3; Thu, 25 Jan 2024 08:46:19 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1E2EF3F5A1; Thu, 25 Jan 2024 08:45:28 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, 
akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 29/35] arm64: mte: copypage: Handle tag restoring when missing tag storage Date: Thu, 25 Jan 2024 16:42:50 +0000 Message-Id: <20240125164256.4147-30-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 3861C100020 X-Stat-Signature: d7azxybhiz9ybdye7ms59tho6tgdi346 X-Rspam-User: X-HE-Tag: 1706201135-527137 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: There are several situations where copy_highpage() can end up copying tags to a page which doesn't have its tag storage reserved. One situation involves migration racing with mprotect(PROT_MTE): the VMA is initially untagged, migration starts and the destination page is allocated as untagged, mprotect(PROT_MTE) changes the VMA to tagged and userspace accesses the source page, thus making it tagged. The migration code then calls copy_highpage(), which will copy the tags from the source page (now tagged) to the destination page (allocated as untagged). Yet another situation can happen during THP collapse. The huge page that will replace the HPAGE_PMD_NR contiguously mapped pages is allocated without __GFP_TAGGED set. copy_highpage() will copy the tags from the pages being replaced to the huge page which doesn't have tag storage reserved. The situation gets even more complicated when the replacement huge page is a tag storage page.
The tag storage huge page will be migrated after a fault on access, but the tags from the original pages must be copied over to the huge page that will be replacing the tag storage huge page. Signed-off-by: Alexandru Elisei --- arch/arm64/mm/copypage.c | 56 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 56 insertions(+) diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index a7bb20055ce0..e991ccb43fb7 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -13,6 +13,59 @@ #include #include #include +#include + +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE +static inline bool try_transfer_saved_tags(struct page *from, struct page *to) +{ + void *tags; + bool saved; + + VM_WARN_ON_ONCE(!preemptible()); + + if (page_mte_tagged(from)) { + if (page_tag_storage_reserved(to)) + return false; + + tags = mte_allocate_tag_buf(); + if (WARN_ON(!tags)) + return true; + + mte_copy_page_tags_to_buf(page_address(from), tags); + saved = mte_save_tags_for_pfn(tags, page_to_pfn(to)); + if (!saved) + mte_free_tag_buf(tags); + + return saved; + } + + tags_by_pfn_lock(); + tags = mte_erase_tags_for_pfn(page_to_pfn(from)); + tags_by_pfn_unlock(); + + if (likely(!tags)) + return false; + + if (page_tag_storage_reserved(to)) { + WARN_ON_ONCE(!try_page_mte_tagging(to)); + mte_copy_page_tags_from_buf(page_address(to), tags); + set_page_mte_tagged(to); + mte_free_tag_buf(tags); + return true; + } + + saved = mte_save_tags_for_pfn(tags, page_to_pfn(to)); + if (!saved) + mte_free_tag_buf(tags); + + return saved; +} +#else +static inline bool try_transfer_saved_tags(struct page *from, struct page *to) +{ + return false; +} +#endif void copy_highpage(struct page *to, struct page *from) { @@ -24,6 +77,9 @@ void copy_highpage(struct page *to, struct page *from) if (kasan_hw_tags_enabled()) page_kasan_tag_reset(to); + if (tag_storage_enabled() && try_transfer_saved_tags(from, to)) + return; + if (system_supports_mte() && page_mte_tagged(from)) { /* It's a new page, shouldn't have been tagged yet */ WARN_ON_ONCE(!try_page_mte_tagging(to)); From patchwork Thu Jan 25 16:42:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531347 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE9C8C47422 for ; Thu, 25 Jan 2024 16:45:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8C1206B007B; Thu, 25 Jan 2024 11:45:43 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 872366B0099; Thu, 25 Jan 2024 11:45:43 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6EBA06B007E; Thu, 25 Jan 2024 11:45:43 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 5A1266B00AB for ; Thu, 25 Jan 2024 11:45:43 -0500 (EST) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 2B0931A0DF3 for ; Thu, 25 Jan 2024 16:45:43 +0000 (UTC) X-FDA: 81718409766.24.B5801D0 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf14.hostedemail.com (Postfix) with ESMTP id 5AB98100017 for ; Thu, 25 Jan 2024 16:45:41 +0000 (UTC) Authentication-Results: 
imf14.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf14.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201141; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JDPvXQHRthpuSC/CtNOCvPZ9frOdnRIFjtDWOmmFb68=; b=5R6OvTX+sQc9+gliH4kOMYSgCSYEKWd1Xq/YXGMIKDBrKRMFvOSS/4VFEmvGSlVgyZmlRd t1yCrv/7uPZvHF8IbLmnMyutjsPBnHI6CKBnLMcXzVQaKXwpG0bvc262qom6QFVce49V56 5wotRtGN4HJ9JlwsDIHkQoE3LU9W+AY= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf14.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201141; a=rsa-sha256; cv=none; b=OFMkA4Dv/FO/igXJNo+8ePm2zK1Mns7OPG07yfM3uKXWkLCQR22AAyjloeCSujZzeW53jL rIm8gJU4vXXaDZxyW89H1N/Izzf/CMCaU/w3zjpE0jqjrNYmxwisIQmmZSCNZCAf2myJNT JepbS5EVZYXy5c6Z/UsmiA6lP/QJ40E= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 09C361713; Thu, 25 Jan 2024 08:46:25 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D3D8E3F5A1; Thu, 25 Jan 2024 08:45:34 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 30/35] arm64: mte: ptrace: Handle pages with missing tag storage Date: Thu, 25 Jan 2024 16:42:51 +0000 Message-Id: <20240125164256.4147-31-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 5AB98100017 X-Stat-Signature: ntspob4r591kw44j4im47gnhbra7zskc X-Rspam-User: X-HE-Tag: 1706201141-667057 X-HE-Meta: 
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: A page can end up mapped in an MTE-enabled VMA without the corresponding tag storage block reserved. Tag accesses made by ptrace in this case can lead to the wrong tags being read, or to memory corruption for the process that is using the tag storage memory as data. Reserve tag storage by treating ptrace accesses like a fault. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch, issue reported by Peter Collingbourne.
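For context, this is the kind of remote tag access that reaches __access_remote_tags(): a debugger reading the tracee's allocation tags with PTRACE_PEEKMTETAGS. A minimal sketch (attaching to and stopping the tracee, plus error handling, are omitted; peek_tags() is a made-up helper name and the numeric fallback is the arm64 request value):

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS 33            /* arm64-specific ptrace request */
#endif

/* Read one allocation tag per 16-byte granule from the tracee. */
static long peek_tags(pid_t pid, void *remote_addr, uint8_t *tags, size_t len)
{
        struct iovec iov = {
                .iov_base = tags,        /* one byte per granule on return */
                .iov_len = len,
        };

        /*
         * With this patch, the kernel treats the access like a fault and
         * reserves tag storage for the page before copying the tags out.
         */
        return ptrace(PTRACE_PEEKMTETAGS, pid, remote_addr, &iov);
}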
arch/arm64/kernel/mte.c | 26 ++++++++++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c index faf09da3400a..b1fa02dad4fd 100644 --- a/arch/arm64/kernel/mte.c +++ b/arch/arm64/kernel/mte.c @@ -412,10 +412,13 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr, while (len) { struct vm_area_struct *vma; unsigned long tags, offset; + unsigned int fault_flags; + struct page *page; + vm_fault_t ret; void *maddr; - struct page *page = get_user_page_vma_remote(mm, addr, - gup_flags, &vma); +get_page: + page = get_user_page_vma_remote(mm, addr, gup_flags, &vma); if (IS_ERR(page)) { err = PTR_ERR(page); break; @@ -433,6 +436,25 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr, put_page(page); break; } + + if (tag_storage_enabled() && !page_tag_storage_reserved(page)) { + fault_flags = FAULT_FLAG_DEFAULT | \ + FAULT_FLAG_USER | \ + FAULT_FLAG_REMOTE | \ + FAULT_FLAG_ALLOW_RETRY | \ + FAULT_FLAG_RETRY_NOWAIT; + if (write) + fault_flags |= FAULT_FLAG_WRITE; + + put_page(page); + ret = handle_mm_fault(vma, addr, fault_flags, NULL); + if (ret & VM_FAULT_ERROR) { + err = -EFAULT; + break; + } + goto get_page; + } + WARN_ON_ONCE(!page_mte_tagged(page)); /* limit access to the end of the page */ From patchwork Thu Jan 25 16:42:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531348 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80A01C47258 for ; Thu, 25 Jan 2024 16:45:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1BDF96B00A9; Thu, 25 Jan 2024 11:45:49 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1201B6B00AB; Thu, 25 Jan 2024 11:45:49 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F01E36B00AD; Thu, 25 Jan 2024 11:45:48 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id D92276B00AB for ; Thu, 25 Jan 2024 11:45:48 -0500 (EST) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 871F140E13 for ; Thu, 25 Jan 2024 16:45:48 +0000 (UTC) X-FDA: 81718409976.23.F6C0BE8 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf24.hostedemail.com (Postfix) with ESMTP id CC287180003 for ; Thu, 25 Jan 2024 16:45:46 +0000 (UTC) Authentication-Results: imf24.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf24.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201147; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=xkwftJVeqouLGSxjRxX2qTSTkNnfTRKYrouvXmLFb0c=; b=yGNNjA7Id5QT0Qtubmo11UpixebtZ/hb04IQ+cFdDNXgSabXUeJZV6VgwptQDZpk8vdytw 
lDw/+amLVNdPAo3rDjL3eoKqTAq8gPmWGJfT4WjA6XNgLMPPm59bRhmgwdp8lODzXClPeH +E2d4QJKLlPHs3LGX1EJIhWDEyRuXTk= ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf24.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201147; a=rsa-sha256; cv=none; b=YP8FxEI9Yl72gBjUWpWWjhZz61dLENu2JMW9R3lw6H0utcPkkD/7qwptNbgA3o5mCyeZFJ axvRvJV5yaR7eYeBDTq0SC6USxFpseGYBDSvGi9fRJDm3HqNH8UiGCVZlE+cVFV2A5FqGT kSx/gxiQxlVu09rmZJCBYUMTFdYiftY= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8549C1756; Thu, 25 Jan 2024 08:46:30 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 945353F8A4; Thu, 25 Jan 2024 08:45:40 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 31/35] khugepaged: arm64: Don't collapse MTE enabled VMAs Date: Thu, 25 Jan 2024 16:42:52 +0000 Message-Id: <20240125164256.4147-32-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: CC287180003 X-Stat-Signature: 3jsnpr95as1jcaukrjz5am6pf1xo8rh4 X-Rspam-User: X-HE-Tag: 1706201146-534829 X-HE-Meta: U2FsdGVkX18arGhEIvB3E1HCkBeenm2Fj33usI+7MmxeOwFlEXrV7V+rc3n7bNaS6e33ejCHT6hP8x/wxIN3UnKiBigkeHQspzsFdOTYCSUXfK9QXrMTtpqeDAZbs3jIrGciqU/8SxrKIzzEhquTSctX6bdE5dtNxbplaQFDOAJc46O/N9Nrbnt/NrUXTUIMXdaX/8n2Ax0Yi7nK/zlonjk0VOqQH02jtY1d8QmYCdqQbvHkXK5lQxUpctFul9eSkQ5LwVqnhzuBEvJncEDsyp4M/atesjZ0KJGGN4LP8Qhce/fj9m8Z5IH0Tk+D6R0RS4PBGa+8Um70/gfbP14D+rK6H7afLvyeBEVYgBs/VQpPYVPuZPvOukqKKSuAyrG2AsaIOY+0bhBGI2347zTvKG9hTseABJbzo3rVuQzzWXQaL1ify3+vOeA0ZevuP2k6LKI6GnfdBLJeq0k0Jgok6vH6PqTwmqOSWXZWYcQHDSz8/VVR8952J3I74uWOGmOzVLBFpLgp0M+7/PMvvLGLmSeybIJoejHj5qqFO1PK/DMIo6qezJKnP5uPGvvt82V4LJJM9JrDd4N3iVs0iHRluClvHobcIlGxpzSr9vKHUDgFkrdG4OucD49abNyR6vXzUazcDkvPCgNmXhi+gCyayhNxN274qVn+DUU47oY1yAsWhSzklr4vTZm8wsNgA/pQoEmJnhJGUnrsH92eLVY40nKZpqMaG8leGLhx9MCKgP99XxSlJr9GZ+kBMe5IOG8oqvixL9bFPlq9n/Vjfpct//sHtJ2WjxKXYiSsCPKBjOo8od0/nxpxh6cEsTAuVeJV0lh2ktNkn7BejDqRTcvQgKJQLivTiHpk8iC9Lf8VOIx6DTnXvBLeP5UlzyQuVSJ1iWIEj8xV/hWNvMMctO52NDhe1TZDVDm18PyX22X5fgtE8BAFYLm/lDp+876Jp/ybIZtoITxybB2me3rTCKN j9i5SFyA RB4NxuByzIrgzCokHdXkRodPNdxMhORWKJHTRiUNr/KYM01q2F5WLEiRET6tveNqCQnYq4cZrLvrwMG1VqqSz3CLuAAUNOOhh21MeWLGGaZuQYbYIBvKXtxDU0XqdzmCJS1NbuZ/tCJHz8nCEHu3IlxaDTnNj/jNTzL/Q X-Bogosity: Ham, 
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: copy_user_highpage() will do memory allocation if there are saved tags for the destination page, and the page is missing tag storage. After commit a349d72fd9ef ("mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s"), collapse_huge_page() calls __collapse_huge_page_copy() -> .. -> copy_user_highpage() with the RCU lock held, which means that copy_user_highpage() can only allocate memory using GFP_ATOMIC or equivalent. Get around this by refusing to collapse pages into a transparent huge page if the VMA is MTE-enabled. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch. I think an agreement on whether copy*_user_highpage() should be always allowed to sleep, or should not be allowed, would be useful. arch/arm64/include/asm/pgtable.h | 3 +++ arch/arm64/kernel/mte_tag_storage.c | 5 +++++ include/linux/khugepaged.h | 5 +++++ mm/khugepaged.c | 4 ++++ 4 files changed, 17 insertions(+) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 87ae59436162..d0473538c926 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1120,6 +1120,9 @@ static inline bool arch_alloc_cma(gfp_t gfp_mask) return true; } +bool arch_hugepage_vma_revalidate(struct vm_area_struct *vma, unsigned long address); +#define arch_hugepage_vma_revalidate arch_hugepage_vma_revalidate + #endif /* CONFIG_ARM64_MTE_TAG_STORAGE */ #endif /* CONFIG_ARM64_MTE */ diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c index ac7b9c9c585c..a99959b70573 100644 --- a/arch/arm64/kernel/mte_tag_storage.c +++ b/arch/arm64/kernel/mte_tag_storage.c @@ -636,3 +636,8 @@ void arch_alloc_page(struct page *page, int order, gfp_t gfp) if (tag_storage_enabled() && alloc_requires_tag_storage(gfp)) reserve_tag_storage(page, order, gfp); } + +bool arch_hugepage_vma_revalidate(struct vm_area_struct *vma, unsigned long address) +{ + return !(vma->vm_flags & VM_MTE); +} diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h index f68865e19b0b..461e4322dff2 100644 --- a/include/linux/khugepaged.h +++ b/include/linux/khugepaged.h @@ -38,6 +38,11 @@ static inline void khugepaged_exit(struct mm_struct *mm) if (test_bit(MMF_VM_HUGEPAGE, &mm->flags)) __khugepaged_exit(mm); } + +#ifndef arch_hugepage_vma_revalidate +#define arch_hugepage_vma_revalidate(vma, address) 1 +#endif + #else /* CONFIG_TRANSPARENT_HUGEPAGE */ static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm) { diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 2b219acb528e..cb9a9ddb4d86 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -935,6 +935,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address, */ if (expect_anon && (!(*vmap)->anon_vma || !vma_is_anonymous(*vmap))) return SCAN_PAGE_ANON; + + if (!arch_hugepage_vma_revalidate(vma, address)) + return SCAN_VMA_CHECK; + return SCAN_SUCCEED; } From patchwork Thu Jan 25 16:42:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B38DC47422 for ; 
Thu, 25 Jan 2024 16:45:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DF24B6B0071; Thu, 25 Jan 2024 11:45:54 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D7AB76B0081; Thu, 25 Jan 2024 11:45:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BF6416B0098; Thu, 25 Jan 2024 11:45:54 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id A7D756B0071 for ; Thu, 25 Jan 2024 11:45:54 -0500 (EST) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 5BE6F1404E9 for ; Thu, 25 Jan 2024 16:45:54 +0000 (UTC) X-FDA: 81718410228.11.6CB7555 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf15.hostedemail.com (Postfix) with ESMTP id 8FB6AA000F for ; Thu, 25 Jan 2024 16:45:52 +0000 (UTC) Authentication-Results: imf15.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf15.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201152; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=KHvUGMYd0nZcyyUZw5YWOWKPzifS3EXkLNN8Z9a6idc=; b=rHkJn0rHFLbI8+hxl/ZAwszi66h/TRoU3kQHpjbxxc+yls/zu2zNq+yC/0kA+Homoup9Vq EXmDvVqZHb1Ujy6cLeupU83B3AUZdbVP69ODb7xoA9B+dlQEHAcnkogVmuDzVZ7+VA+5E/ swA9U+sSY7G2bwjcPfPBuVQkn11TVxI= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=none; dmarc=pass (policy=none) header.from=arm.com; spf=pass (imf15.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201152; a=rsa-sha256; cv=none; b=r2SVfXbi5YzJ9KQJVEemcjhZGAvkdgKIm1XJsj3DZRVd6cqC5dsIZaQ5aMbOSRdXS2mG3a abtV8wS/nnig5ZXMxJ8enbTOB0ofX3P3Jb0M4WNu/7DEVI0V8XDhl8JL/kxvXOiS+CyOXe x5arnGDcGG5fefbGHukeGs25xxMRy9Q= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4F38316F8; Thu, 25 Jan 2024 08:46:36 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5D8F93F5A1; Thu, 25 Jan 2024 08:45:46 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, 
linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 32/35] KVM: arm64: mte: Reserve tag storage for virtual machines with MTE Date: Thu, 25 Jan 2024 16:42:53 +0000 Message-Id: <20240125164256.4147-33-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 8FB6AA000F X-Rspam-User: X-Rspamd-Server: rspam05 X-Stat-Signature: 1a84gskmkyhouhormr6z1mhre1pnqk6j X-HE-Tag: 1706201152-723360 X-HE-Meta: U2FsdGVkX19e/aX6oxpjh3Bov+XsT1MtgqkRGfTTVwWcTEtdAdaKK4D/dCq2kWsiBdDVkjsxgBmIgAUnLb8dkSHCvO4u97q6kvnky/SEQbV6jNgvsdykqm9nLOGBP20YzD1emTrNVrmu6bt7m+MToHHWb3Mg/jdUQ7jiX5a//twqQEwJ/VzZsBu6QgCldXvRqEfiDmF7anZvjKps/DmKUrhG3Qpj7tda14s1Fs2TWmeJclL8yd1Pkrru5LcXayse9Tbudj7/dBIAMrLIshJxKbSebP2FX5vCAG2Oy557VuEFjog8yc2nzs0M/VBXR9KKm/7l3ewQZXxDBSKxJ+xwxwX7UVeUsTNamP+hw5NUhFokQGfaTNWTIXJsn8/BOJ7V4RD9iaBMLeArgc+4XyTTpwYzW/9uC+awp0gNEk9Qb/szOJGcWqYBFGbv0BDvcg3PK9GfTespz5matuh+WN9ACBd7GAaAav2DaGl5b2Tz6shaDHuGRSY4RspMZRCsF8ioePY5g5oPLjQCMUkI0e4jsjsP2MyDuZnfrH6z9iFn7pLRQjgJKWmF4brWmMdfpl4GVuqm5qsTAlf9dwGnePF0bXkj6Sn7jQxh+VfWvsn2mQY9pOk4/QMnqwdmrXYvg1mbcoJVixyd5cBOFrwxNtd4X7wGUOfdJhNGi7k8z19jVoLjvrDCJrWrNYziaxYWRw7AshJxnrJL+3ipos3LQ0RRhL73gCFDnllhd0FfK+epPkSDYmmyocAYQ60oLQUky6jq82VCTkl3L+JP/iAAKq33QfXTJfCQ/wMvm0Nbh+w7W+DoflW2eYk/UEXHJRbR3XxcZ+sGhjJc13GZ2UHekcgZLktgTdgJnhKf9jb0VDKkKruaCVV5uctpQmFvQh50DCXOoQ0IXcIcyBS1J/cQS05oYYX+kJK1JLihX1d9h3zSpB/5E3JtAzHzy47ECQPR6APd/Ry69NALTondiph5cBn 3qkmm5Sg /rN2C/+yQ3M7DJquUo8QRFxuBmrI73MTTS3VCebtURw01Io6r4iTdqqULPYJsrEgC2nufKZTnkyPEqV7EedTkwnRKOkcAHJB5vdEslUwvckDh6MvXiGIfCrp3Lc4jQn0J5emKXWa3rVua0uNCQTxVlCAd+HrIv1uv9Go+By2FFAL/kvSZ/VqqiORxsfCbg+PLCq2u X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: KVM allows MTE enabled VMs to be created when the backing VMA does not have MTE enabled. As a result, pages allocated for the virtual machine's memory won't have tag storage reserved. Try to reserve tag storage the first time the page is accessed by the guest. This is similar to how pages mapped without tag storage in an MTE VMA are handled. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch. 
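For reference, the premise above corresponds to the usual VMM flow: MTE is enabled for the guest with KVM_CAP_ARM_MTE while guest memory is an ordinary mmap() without PROT_MTE. A rough sketch, with error handling omitted and the slot number, size and guest IPA being made-up example values (create_mte_vm() is a hypothetical helper):

#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

static void *create_mte_vm(int *vm_fd)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE };
        void *mem;

        /* The guest is MTE capable... */
        ioctl(vm, KVM_ENABLE_CAP, &cap);

        /* ...but the backing VMA is untagged (no PROT_MTE). */
        mem = mmap(NULL, 0x200000, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct kvm_userspace_memory_region region = {
                .slot = 0,
                .guest_phys_addr = 0x80000000,        /* example IPA */
                .memory_size = 0x200000,
                .userspace_addr = (unsigned long)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        /*
         * Pages faulted in through this memslot have no tag storage
         * reserved; with this patch, the stage 2 fault handler reserves
         * it (or migrates the page) on the first guest access.
         */
        *vm_fd = vm;
        return mem;
}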
arch/arm64/include/asm/mte_tag_storage.h | 10 ++++++ arch/arm64/include/asm/pgtable.h | 7 +++- arch/arm64/kvm/mmu.c | 43 ++++++++++++++++++++++++ arch/arm64/mm/fault.c | 2 +- 4 files changed, 60 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h index 40590a8c3748..32940ef7bcdf 100644 --- a/arch/arm64/include/asm/mte_tag_storage.h +++ b/arch/arm64/include/asm/mte_tag_storage.h @@ -34,6 +34,8 @@ void free_tag_storage(struct page *page, int order); bool page_tag_storage_reserved(struct page *page); bool page_is_tag_storage(struct page *page); +int replace_folio_with_tagged(struct folio *folio); + vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf, bool *map_pte); vm_fault_t mte_try_transfer_swap_tags(swp_entry_t entry, struct page *page); @@ -67,6 +69,14 @@ static inline bool page_tag_storage_reserved(struct page *page) { return true; } +static inline bool page_is_tag_storage(struct page *page) +{ + return false; +} +static inline int replace_folio_with_tagged(struct folio *folio) +{ + return -EINVAL; +} #endif /* CONFIG_ARM64_MTE_TAG_STORAGE */ #endif /* !__ASSEMBLY__ */ diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index d0473538c926..7f89606ad617 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1108,7 +1108,12 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) #define __HAVE_ARCH_FREE_PAGES_PREPARE static inline void arch_free_pages_prepare(struct page *page, int order) { - if (tag_storage_enabled() && page_mte_tagged(page)) + /* + * KVM can free a page after tag storage has been reserved and before is + * marked as tagged, hence use page_tag_storage_reserved() instead of + * page_mte_tagged() to check for tag storage. + */ + if (tag_storage_enabled() && page_tag_storage_reserved(page)) free_tag_storage(page, order); } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index b7517c4a19c4..986a9544228d 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1361,6 +1361,8 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn, if (!kvm_has_mte(kvm)) return; + WARN_ON_ONCE(tag_storage_enabled() && !page_tag_storage_reserved(pfn_to_page(pfn))); + for (i = 0; i < nr_pages; i++, page++) { if (try_page_mte_tagging(page)) { mte_clear_page_tags(page_address(page)); @@ -1374,6 +1376,39 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma) return vma->vm_flags & VM_MTE_ALLOWED; } +/* + * Called with an elevated reference on the pfn. If successful, the reference + * count is not changed. If it returns an error, the elevated reference is + * dropped. + */ +static int kvm_mte_reserve_tag_storage(kvm_pfn_t pfn) +{ + struct folio *folio; + int ret; + + folio = page_folio(pfn_to_page(pfn)); + + if (page_tag_storage_reserved(folio_page(folio, 0))) + return 0; + + if (page_is_tag_storage(folio_page(folio, 0))) + goto migrate; + + ret = reserve_tag_storage(folio_page(folio, 0), folio_order(folio), + GFP_HIGHUSER_MOVABLE); + if (!ret) + return 0; + +migrate: + replace_folio_with_tagged(folio); + /* + * If migration succeeds, the fault needs to be replayed because 'pfn' + * has been unmapped. If migration fails, KVM will try to reserve tag + * storage again by replaying the fault. 
+ */ + return -EAGAIN; +} + static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, struct kvm_memory_slot *memslot, unsigned long hva, bool fault_is_perm) @@ -1488,6 +1523,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL, write_fault, &writable, NULL); + if (pfn == KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); return 0; @@ -1518,6 +1554,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, if (exec_fault && device) return -ENOEXEC; + if (tag_storage_enabled() && !fault_is_perm && !device && + kvm_has_mte(kvm) && mte_allowed) { + ret = kvm_mte_reserve_tag_storage(pfn); + if (ret) + return ret == -EAGAIN ? 0 : ret; + } + read_lock(&kvm->mmu_lock); pgt = vcpu->arch.hw_mmu->pgt; if (mmu_invalidate_retry(kvm, mmu_seq)) diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 01450ab91a87..5c12232bdf0b 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -974,7 +974,7 @@ void tag_clear_highpage(struct page *page) * Called with an elevated reference on the folio. * Returns with the elevated reference dropped. */ -static int replace_folio_with_tagged(struct folio *folio) +int replace_folio_with_tagged(struct folio *folio) { struct migration_target_control mtc = { .nid = NUMA_NO_NODE, From patchwork Thu Jan 25 16:42:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531350 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0B18C47258 for ; Thu, 25 Jan 2024 16:46:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 89A386B0098; Thu, 25 Jan 2024 11:46:00 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 825AD6B00A0; Thu, 25 Jan 2024 11:46:00 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 69D256B00A2; Thu, 25 Jan 2024 11:46:00 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 53BD56B0098 for ; Thu, 25 Jan 2024 11:46:00 -0500 (EST) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 2A0EA1A0DF4 for ; Thu, 25 Jan 2024 16:46:00 +0000 (UTC) X-FDA: 81718410480.08.AC4E3DA Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf07.hostedemail.com (Postfix) with ESMTP id 5994C4002A for ; Thu, 25 Jan 2024 16:45:58 +0000 (UTC) Authentication-Results: imf07.hostedemail.com; dkim=none; spf=pass (imf07.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1706201158; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=VmI4/L7Bdj1EfzaZ0hfN72ArxrVtIDzRCaBT5xxScCg=; b=iqZpOnGEiqB2CAB5DooEvUdZ4pdPErCQ8ajEDHaYBTywrSkBZguif/OBVr0iLhKyROlIm7 
BhJA5fllhNIDoocPZvYFOZaWJqac5hLs11rStukt4/qcS/RzOxJPASQVarLRyg4fPyJpOF orfEr60mbvD7f07lFEMsHU8MgFCPm58= ARC-Authentication-Results: i=1; imf07.hostedemail.com; dkim=none; spf=pass (imf07.hostedemail.com: domain of alexandru.elisei@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=alexandru.elisei@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1706201158; a=rsa-sha256; cv=none; b=EZZ7/SZUBeyUG45aE4eaOKejNgEJuNL555m+D3Q9ePHVrEpYnvuLBaE4j7zXHuBKrsszBD Cu3/dp2KM2Xic2m9N2XtJsPHQTqb9zZCj3mg0YYV+8Zl13Z5BQiMOuHK3O31I559LB2Fa3 k7C3RZXah/IL5qTnS7qj/lyIN9XaLdQ= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1422C1758; Thu, 25 Jan 2024 08:46:42 -0800 (PST) Received: from e121798.cable.virginm.net (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2B5AC3F5A1; Thu, 25 Jan 2024 08:45:52 -0800 (PST) From: Alexandru Elisei To: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com Cc: pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org Subject: [PATCH RFC v3 33/35] KVM: arm64: mte: Introduce VM_MTE_KVM VMA flag Date: Thu, 25 Jan 2024 16:42:54 +0000 Message-Id: <20240125164256.4147-34-alexandru.elisei@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com> References: <20240125164256.4147-1-alexandru.elisei@arm.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 5994C4002A X-Rspam-User: X-Stat-Signature: rckzcsfgickrcwj6x4kkxnrk493koy79 X-Rspamd-Server: rspam01 X-HE-Tag: 1706201158-354265 X-HE-Meta: U2FsdGVkX1+hszJF/lJQXpBoud9k5qw/9E3glE4MgezIxOwW1yIZtdgnI6pp5Uu0DEPu4iWfSp5GtuUibcErQ28oVDTbQkh/43iwYs666xLlyqrsWx0S7swGAEvMJoqtWiKgWevMHBkh2YdOgaPulLMhBA2DQ/nVLw4+SaoZDkZqNUEcek2cpm3oY04GudbnJfhsj8L8XZHRcAV+dAHvl9Jbwx/nx8HNziL8oTWiVsfYULn9ba2OlYAwTORhLmWp+nj1+Jcl3WxPuK3qOultzDjzG4QS3WW2NZTZmur1jd/7fwEnTwl1oIHmYwuKrQvoe+sp5GpAWsWB5Acf4HmDtmnFkI2otDTLZUkm3iXwvVn7BjI5CFFULOakw+ditU+QQIxV9X6Q2KZDVrYGc2u9zkvzbIyuBbfuKXfwjt7JoIPm36ZQYBBle9HmUcL+LD5xVzA68FmGU2x/u2WHynrQE6gRjvsHUlON8OvBQoLj9R6Y00C555JFozYSKOXTkSy1t2DuVFNV546w+qUcYljQDnDHXGfL4ninjrzBKeVcSvayJYrmnJso7LeDD2VtYP9TOQ4rTplb0G/nDAuXuXVUaUk7qrIwBsnms9RFs+Iar/DGirnwBk3vvKSQ5+5oChFyvY8E3S+Rh0jy1Ocln4xehCEweLFnJnxHn/dCjih0IhtqFVmocvBx8huKtmJozro2Z/EKpIAktFulVFvWibeCL+NQDhYX4BRgykGcLN0NDtfBuou1ineh8rQdmX6AvZQHPxTPidm8uq2tS8uTFQuzIguMn1qQGaqeuUvlBn8fT0MlEY83EPIUVPyrpbYYOGeWEgdHMspKiu04v0iIfDWQDXpy2JU4aBq2TI7SRFvgDnv82IQENGJsY46E5Wjaf6aatD2d7+f036YNYOJ2cSc90wapMP88MdGBLyVXA5ZXeGUYkvaXMs+DE9+mehWyWPYJbrsbBuTCr/Mkhzdlb/B SqBlztS1 
ruHnv5InrKKhUGZNFqaAsl1LTf94+87ruXjh/Xa/iObKvRybD2ZwiTnyjR9DD7u7Sj/tPv4yGFFYbxnss7EeUGbsg4AnwuRJ7DYF7eWilQyrPBzZ1tT8JdUrY8k1rsxWFKt81E+ZFs3eTNOQdyuCEyde5M0Ev2zuUjhV4dwXoLO/uNRdMAFwMSUG300tqgxybcTZe9nwHMeCHSGlp07Ghd6A+Xg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Tag storage pages mapped by the host in a VM with MTE enabled are migrated when they are first accessed by the guest. This introduces latency spikes for memory accesses made by the guest. Tag storage pages can be mapped in the guest memory when the VM_MTE VMA flag is not set. Introduce a new VMA flag, VM_MTE_KVM, to stop tag storage pages from being mapped in a VM with MTE enabled. The flag is different from VM_MTE, because the pages from the VMA won't be mapped as tagged in the host, and host's userspace can continue to access the guest memory as Untagged. The flag's only function is to instruct the page allocator to treat the allocation as tagged, so tag storage pages aren't used. The page allocator will also try to reserve tag storage for the new page, which can speed up stage 2 aborts further if the VMM has accessed the memory before the guest. For example, qemu and kvmtool will benefit from this change because the guest image is copied after the memslot is created. Signed-off-by: Alexandru Elisei --- Changes since rfc v2: * New patch. arch/arm64/kvm/mmu.c | 77 ++++++++++++++++++++++++++++++++++++++++++- arch/arm64/mm/fault.c | 2 +- include/linux/mm.h | 2 ++ 3 files changed, 79 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 986a9544228d..45c57c4b9fe2 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1420,7 +1420,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, unsigned long mmu_seq; struct kvm *kvm = vcpu->kvm; struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; - struct vm_area_struct *vma; + struct vm_area_struct *vma, *old_vma; short vma_shift; gfn_t gfn; kvm_pfn_t pfn; @@ -1428,6 +1428,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, long vma_pagesize, fault_granule; enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R; struct kvm_pgtable *pgt; + bool vma_has_kvm_mte = false; if (fault_is_perm) fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu); @@ -1506,6 +1507,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, gfn = fault_ipa >> PAGE_SHIFT; mte_allowed = kvm_vma_mte_allowed(vma); + vma_has_kvm_mte = !!(vma->vm_flags & VM_MTE_KVM); + old_vma = vma; /* Don't use the VMA after the unlock -- it may have vanished */ vma = NULL; @@ -1521,6 +1524,27 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, mmu_seq = vcpu->kvm->mmu_invalidate_seq; mmap_read_unlock(current->mm); + /* + * If the VMA was created after the memslot, it doesn't have the + * VM_MTE_KVM flag set. + */ + if (unlikely(tag_storage_enabled() && !fault_is_perm && + kvm_has_mte(kvm) && mte_allowed && !vma_has_kvm_mte)) { + mmap_write_lock(current->mm); + vma = vma_lookup(current->mm, hva); + /* The VMA was changed, replay the fault. 
*/ + if (vma != old_vma) { + mmap_write_unlock(current->mm); + return 0; + } + if (!(vma->vm_flags & VM_MTE_KVM)) { + vma_start_write(vma); + vm_flags_reset(vma, vma->vm_flags | VM_MTE_KVM); + } + vma = NULL; + mmap_write_unlock(current->mm); + } + pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL, write_fault, &writable, NULL); @@ -1986,6 +2010,40 @@ int __init kvm_mmu_init(u32 *hyp_va_bits) return err; } +static int kvm_set_clear_kvm_mte_vma(const struct kvm_memory_slot *memslot, bool set) +{ + struct vm_area_struct *vma; + hva_t hva, memslot_end; + int ret = 0; + + hva = memslot->userspace_addr; + memslot_end = hva + (memslot->npages << PAGE_SHIFT); + + mmap_write_lock(current->mm); + + do { + vma = find_vma_intersection(current->mm, hva, memslot_end); + if (!vma) + break; + if (!kvm_vma_mte_allowed(vma)) + continue; + if (set) { + if (!(vma->vm_flags & VM_MTE_KVM)) { + vma_start_write(vma); + vm_flags_reset(vma, vma->vm_flags | VM_MTE_KVM); + } + } else if (vma->vm_flags & VM_MTE_KVM) { + vma_start_write(vma); + vm_flags_reset(vma, vma->vm_flags & ~VM_MTE_KVM); + } + hva = min(memslot_end, vma->vm_end); + } while (hva < memslot_end); + + mmap_write_unlock(current->mm); + + return ret; +} + void kvm_arch_commit_memory_region(struct kvm *kvm, struct kvm_memory_slot *old, const struct kvm_memory_slot *new, @@ -1993,6 +2051,23 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, { bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES; + if (kvm_has_mte(kvm) && change != KVM_MR_FLAGS_ONLY) { + switch (change) { + case KVM_MR_CREATE: + kvm_set_clear_kvm_mte_vma(new, true); + break; + case KVM_MR_DELETE: + kvm_set_clear_kvm_mte_vma(old, false); + break; + case KVM_MR_MOVE: + kvm_set_clear_kvm_mte_vma(old, false); + kvm_set_clear_kvm_mte_vma(new, true); + break; + default: + WARN(true, "Unknown memslot change"); + } + } + /* * At this point memslot has been committed and there is an * allocated dirty_bitmap[], dirty pages will be tracked while the diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 5c12232bdf0b..f4ca3ba8dde7 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -947,7 +947,7 @@ NOKPROBE_SYMBOL(do_debug_exception); */ gfp_t arch_calc_vma_gfp(struct vm_area_struct *vma, gfp_t gfp) { - if (vma->vm_flags & VM_MTE) + if (vma->vm_flags & (VM_MTE |VM_MTE_KVM)) return __GFP_TAGGED; return 0; } diff --git a/include/linux/mm.h b/include/linux/mm.h index f5a97dec5169..924aa7c26ec9 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -375,9 +375,11 @@ extern unsigned int kobjsize(const void *objp); #if defined(CONFIG_ARM64_MTE) # define VM_MTE VM_HIGH_ARCH_0 /* Use Tagged memory for access control */ # define VM_MTE_ALLOWED VM_HIGH_ARCH_1 /* Tagged memory permitted */ +# define VM_MTE_KVM VM_HIGH_ARCH_2 /* VMA is mapped in a virtual machine with MTE */ #else # define VM_MTE VM_NONE # define VM_MTE_ALLOWED VM_NONE +# define VM_MTE_KVM VM_NONE #endif #ifndef VM_GROWSUP From patchwork Thu Jan 25 16:42:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexandru Elisei X-Patchwork-Id: 13531351 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F567C47422 for ; Thu, 25 Jan 2024 16:46:06 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 123136B00A2; Thu, 25 Jan 2024 11:46:06 -0500 (EST) 
From patchwork Thu Jan 25 16:42:55 2024
From: Alexandru Elisei <alexandru.elisei@arm.com>
Subject: [PATCH RFC v3 34/35] arm64: mte: Enable dynamic tag storage management
Date: Thu, 25 Jan 2024 16:42:55 +0000
Message-Id: <20240125164256.4147-35-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

Everything is in place, enable tag storage management.
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 088e30fc6d12..95c153705a2c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2084,7 +2084,7 @@ config ARM64_MTE
 if ARM64_MTE
 
 config ARM64_MTE_TAG_STORAGE
-	bool
+	bool "MTE tag storage management"
 	select ARCH_HAS_FAULT_ON_ACCESS
 	select CONFIG_CMA
 	help
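With the symbol now user-selectable, enabling the feature becomes an ordinary
config choice. A hypothetical fragment for an MTE-capable arm64 build might
look like the following (the option names are taken from the hunk above; note
that Kconfig's select normally takes the bare symbol name, so the existing
"select CONFIG_CMA" line would likely need to read "select CMA" for CMA to
actually be pulled in):

# Assumed .config fragment, not part of the series:
CONFIG_ARM64_MTE=y
CONFIG_ARM64_MTE_TAG_STORAGE=y
# Expected to be selected by ARM64_MTE_TAG_STORAGE:
CONFIG_CMA=y
CONFIG_ARCH_HAS_FAULT_ON_ACCESS=y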
From patchwork Thu Jan 25 16:42:56 2024
From: Alexandru Elisei <alexandru.elisei@arm.com>
Subject: [PATCH RFC v3 35/35] HACK! arm64: dts: Add fake tag storage to fvp-base-revc.dts
Date: Thu, 25 Jan 2024 16:42:56 +0000
Message-Id: <20240125164256.4147-36-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-1-alexandru.elisei@arm.com>
References: <20240125164256.4147-1-alexandru.elisei@arm.com>

Faking a tag storage region for FVP is useful for testing.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
Changes since rfc v2:

* New patch, not intended to be merged.

 arch/arm64/boot/dts/arm/fvp-base-revc.dts | 42 +++++++++++++++++++++--
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/boot/dts/arm/fvp-base-revc.dts b/arch/arm64/boot/dts/arm/fvp-base-revc.dts
index 60472d65a355..e9f44420cb62 100644
--- a/arch/arm64/boot/dts/arm/fvp-base-revc.dts
+++ b/arch/arm64/boot/dts/arm/fvp-base-revc.dts
@@ -165,10 +165,30 @@ C1_L2: l2-cache1 {
 		};
 	};
 
-	memory@80000000 {
+	memory0: memory@80000000 {
 		device_type = "memory";
-		reg = <0x00000000 0x80000000 0 0x80000000>,
-		      <0x00000008 0x80000000 0 0x80000000>;
+		reg = <0x00 0x80000000 0x00 0x80000000>;
+		numa-node-id = <0x00>;
+	};
+
+	/* tags0 */
+	tags_memory0: memory@8f8000000 {
+		device_type = "memory";
+		reg = <0x08 0xf8000000 0x00 0x4000000>;
+		numa-node-id = <0x00>;
+	};
+
+	memory1: memory@880000000 {
+		device_type = "memory";
+		reg = <0x08 0x80000000 0x00 0x78000000>;
+		numa-node-id = <0x01>;
+	};
+
+	/* tags1 */
+	tags_memory1: memory@8fc000000 {
+		device_type = "memory";
+		reg = <0x08 0xfc000000 0x00 0x3c00000>;
+		numa-node-id = <0x01>;
 	};
 
 	reserved-memory {
@@ -183,6 +203,22 @@ vram: vram@18000000 {
 			reg = <0x00000000 0x18000000 0 0x00800000>;
 			no-map;
 		};
+
+		tags0: tag-storage@8f8000000 {
+			compatible = "arm,mte-tag-storage";
+			reg = <0x08 0xf8000000 0x00 0x4000000>;
+			block-size = <0x1000>;
+			tagged-memory = <&memory0>;
+			reusable;
+		};
+
+		tags1: tag-storage@8fc000000 {
+			compatible = "arm,mte-tag-storage";
+			reg = <0x08 0xfc000000 0x00 0x3c00000>;
+			block-size = <0x1000>;
+			tagged-memory = <&memory1>;
+			reusable;
+		};
 	};
 
 	gic: interrupt-controller@2f000000 {
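A quick way to convince oneself that the fake layout is consistent (this
check is illustrative and not part of the patch): MTE stores 4 bits of tag
per 16-byte granule, so a tag storage region must be 1/32 the size of the
memory it covers, and the reg sizes above satisfy that for both node pairs.

/* Illustrative sanity check of the 1/32 tag-to-data ratio in the dts above. */
#include <assert.h>

int main(void)
{
	/* memory0 (0x80000000 bytes) is covered by tags0 (0x4000000 bytes). */
	assert(0x80000000ULL / 32 == 0x4000000ULL);
	/* memory1 (0x78000000 bytes) is covered by tags1 (0x3c00000 bytes). */
	assert(0x78000000ULL / 32 == 0x3c00000ULL);
	return 0;
}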