From patchwork Fri Apr 19 07:43:40 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13635739
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Andrew Morton, Shuah Khan, Joey Gouly,
    Ard Biesheuvel, Mark Rutland, Anshuman Khandual, David Hildenbrand,
    Shivansh Vij
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org
Subject: [PATCH v1 1/5] arm64/mm: Move PTE_PROT_NONE and PMD_PRESENT_INVALID
Date: Fri, 19 Apr 2024 08:43:40 +0100
Message-Id: <20240419074344.2643212-2-ryan.roberts@arm.com>
In-Reply-To: <20240419074344.2643212-1-ryan.roberts@arm.com>
References: <20240419074344.2643212-1-ryan.roberts@arm.com>
Previously PTE_PROT_NONE was occupying bit 58, one of the bits reserved
for SW use when the PTE is valid. This is a waste of those precious SW
bits, since PTE_PROT_NONE can only ever be set when valid is clear.
Instead let's overlay it on what would be a HW bit if valid was set.

We need to be careful about which HW bit to choose, since some of them
must be preserved; when pte_present() is true (as it is for a
PTE_PROT_NONE pte), it is legitimate for the core to call various
accessors, e.g. pte_dirty(), pte_write() etc.
There are also some accessors that are private to the arch which must
continue to be honoured, e.g. pte_user(), pte_user_exec() etc.

So we choose to overlay PTE_UXN; this effectively means that whenever a
pte has PTE_PROT_NONE set, it will always report
pte_user_exec() == false, which is obviously always correct.

As a result of this change, we must shuffle the layout of the
arch-specific swap pte so that PTE_PROT_NONE is always zero and not
overlapping with any other field. Once that is done, there is no way to
keep the `type` field contiguous without conflicting with
PMD_PRESENT_INVALID (bit 59), which must also be 0 for a swap pte. So
let's move PMD_PRESENT_INVALID to bit 60.

In the end, this frees up bit 58 for future use as a proper SW bit
(e.g. soft-dirty or uffd-wp).

Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/pgtable-prot.h |  4 ++--
 arch/arm64/include/asm/pgtable.h      | 16 +++++++++-------
 2 files changed, 11 insertions(+), 9 deletions(-)

--
2.25.1

diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index dd9ee67d1d87..ef952d69fd04 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -18,14 +18,14 @@
 #define PTE_DIRTY		(_AT(pteval_t, 1) << 55)
 #define PTE_SPECIAL		(_AT(pteval_t, 1) << 56)
 #define PTE_DEVMAP		(_AT(pteval_t, 1) << 57)
-#define PTE_PROT_NONE		(_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
+#define PTE_PROT_NONE		(PTE_UXN)		 /* Reuse PTE_UXN; only when !PTE_VALID */
 
 /*
  * This bit indicates that the entry is present i.e. pmd_page()
  * still points to a valid huge page in memory even if the pmd
  * has been invalidated.
  */
-#define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 59) /* only when !PMD_SECT_VALID */
+#define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 60) /* only when !PMD_SECT_VALID */
 
 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index afdd56d26ad7..23aabff4fa6f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1248,20 +1248,22 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
  * Encode and decode a swap entry:
  *	bits 0-1:	present (must be zero)
  *	bits 2:		remember PG_anon_exclusive
- *	bits 3-7:	swap type
- *	bits 8-57:	swap offset
- *	bit 58:		PTE_PROT_NONE (must be zero)
+ *	bits 4-53:	swap offset
+ *	bit 54:		PTE_PROT_NONE (overlays PTE_UXN) (must be zero)
+ *	bits 55-59:	swap type
+ *	bit 60:		PMD_PRESENT_INVALID (must be zero)
  */
-#define __SWP_TYPE_SHIFT	3
+#define __SWP_TYPE_SHIFT	55
 #define __SWP_TYPE_BITS		5
-#define __SWP_OFFSET_BITS	50
 #define __SWP_TYPE_MASK		((1 << __SWP_TYPE_BITS) - 1)
-#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+#define __SWP_OFFSET_SHIFT	4
+#define __SWP_OFFSET_BITS	50
 #define __SWP_OFFSET_MASK	((1UL << __SWP_OFFSET_BITS) - 1)
 
 #define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
 #define __swp_offset(x)		(((x).val >> __SWP_OFFSET_SHIFT) & __SWP_OFFSET_MASK)
-#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+#define __swp_entry(type, offset) ((swp_entry_t) { ((unsigned long)(type) << __SWP_TYPE_SHIFT) | \
+						   ((unsigned long)(offset) << __SWP_OFFSET_SHIFT) })
 
 #define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(swp)	((pte_t) { (swp).val })
From patchwork Fri Apr 19 07:43:41 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13635740
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Andrew Morton, Shuah Khan, Joey Gouly,
    Ard Biesheuvel, Mark Rutland, Anshuman Khandual, David Hildenbrand,
    Shivansh Vij
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org
Subject: [PATCH v1 2/5] arm64/mm: Add uffd write-protect support
Date: Fri, 19 Apr 2024 08:43:41 +0100
Message-Id: <20240419074344.2643212-3-ryan.roberts@arm.com>
In-Reply-To: <20240419074344.2643212-1-ryan.roberts@arm.com>
References: <20240419074344.2643212-1-ryan.roberts@arm.com>
Let's use the newly-free PTE SW bit (58) to add support for uffd-wp.

The standard handlers are implemented for set/test/clear for both pte
and pmd. Additionally we must also track the uffd-wp state as a pte swp
bit, so use a free swap entry pte bit (3).
Signed-off-by: Ryan Roberts
---
 arch/arm64/Kconfig                    |  1 +
 arch/arm64/include/asm/pgtable-prot.h |  8 ++++
 arch/arm64/include/asm/pgtable.h      | 55 +++++++++++++++++++++++++++
 3 files changed, 64 insertions(+)

--
2.25.1

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..763e221f2169 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -255,6 +255,7 @@ config ARM64
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
 	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
+	select HAVE_ARCH_USERFAULTFD_WP if USERFAULTFD
 	select TRACE_IRQFLAGS_SUPPORT
 	select TRACE_IRQFLAGS_NMI_SUPPORT
 	select HAVE_SOFTIRQ_ON_OWN_STACK
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index ef952d69fd04..f1e1f6306e03 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -20,6 +20,14 @@
 #define PTE_DEVMAP		(_AT(pteval_t, 1) << 57)
 #define PTE_PROT_NONE		(PTE_UXN)		 /* Reuse PTE_UXN; only when !PTE_VALID */
 
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+#define PTE_UFFD_WP		(_AT(pteval_t, 1) << 58) /* uffd-wp tracking */
+#define PTE_SWP_UFFD_WP		(_AT(pteval_t, 1) << 3)	 /* only for swp ptes */
+#else
+#define PTE_UFFD_WP		(_AT(pteval_t, 0))
+#define PTE_SWP_UFFD_WP		(_AT(pteval_t, 0))
+#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */
+
 /*
  * This bit indicates that the entry is present i.e. pmd_page()
  * still points to a valid huge page in memory even if the pmd
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 23aabff4fa6f..3f4748741fdb 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -271,6 +271,34 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
 }
 
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+static inline int pte_uffd_wp(pte_t pte)
+{
+	bool wp = !!(pte_val(pte) & PTE_UFFD_WP);
+
+#ifdef CONFIG_DEBUG_VM
+	/*
+	 * Having write bit for wr-protect-marked present ptes is fatal,
+	 * because it means the uffd-wp bit will be ignored and write will
+	 * just go through. See comment in x86 implementation.
+	 */
+	WARN_ON_ONCE(wp && pte_write(pte));
+#endif
+
+	return wp;
+}
+
+static inline pte_t pte_mkuffd_wp(pte_t pte)
+{
+	return pte_wrprotect(set_pte_bit(pte, __pgprot(PTE_UFFD_WP)));
+}
+
+static inline pte_t pte_clear_uffd_wp(pte_t pte)
+{
+	return clear_pte_bit(pte, __pgprot(PTE_UFFD_WP));
+}
+#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */
+
 static inline void __set_pte(pte_t *ptep, pte_t pte)
 {
 	WRITE_ONCE(*ptep, pte);
@@ -463,6 +491,23 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 	return clear_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE));
 }
 
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+static inline pte_t pte_swp_mkuffd_wp(pte_t pte)
+{
+	return set_pte_bit(pte, __pgprot(PTE_SWP_UFFD_WP));
+}
+
+static inline int pte_swp_uffd_wp(pte_t pte)
+{
+	return !!(pte_val(pte) & PTE_SWP_UFFD_WP);
+}
+
+static inline pte_t pte_swp_clear_uffd_wp(pte_t pte)
+{
+	return clear_pte_bit(pte, __pgprot(PTE_SWP_UFFD_WP));
+}
+#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */
+
 #ifdef CONFIG_NUMA_BALANCING
 /*
  * See the comment in include/linux/pgtable.h
@@ -508,6 +553,15 @@ static inline int pmd_trans_huge(pmd_t pmd)
 #define pmd_mkclean(pmd)	pte_pmd(pte_mkclean(pmd_pte(pmd)))
 #define pmd_mkdirty(pmd)	pte_pmd(pte_mkdirty(pmd_pte(pmd)))
 #define pmd_mkyoung(pmd)	pte_pmd(pte_mkyoung(pmd_pte(pmd)))
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+#define pmd_uffd_wp(pmd)	pte_uffd_wp(pmd_pte(pmd))
+#define pmd_mkuffd_wp(pmd)	pte_pmd(pte_mkuffd_wp(pmd_pte(pmd)))
+#define pmd_clear_uffd_wp(pmd)	pte_pmd(pte_clear_uffd_wp(pmd_pte(pmd)))
+#define pmd_swp_uffd_wp(pmd)	pte_swp_uffd_wp(pmd_pte(pmd))
+#define pmd_swp_mkuffd_wp(pmd)	pte_pmd(pte_swp_mkuffd_wp(pmd_pte(pmd)))
+#define pmd_swp_clear_uffd_wp(pmd) \
+	pte_pmd(pte_swp_clear_uffd_wp(pmd_pte(pmd)))
+#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */
 
 static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 {
@@ -1248,6 +1302,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
  * Encode and decode a swap entry:
  *	bits 0-1:	present (must be zero)
  *	bits 2:		remember PG_anon_exclusive
+ *	bit 3:		remember uffd-wp state
  *	bits 4-53:	swap offset
  *	bit 54:		PTE_PROT_NONE (overlays PTE_UXN) (must be zero)
  *	bits 55-59:	swap type
From patchwork Fri Apr 19 07:43:42 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13635741
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Andrew Morton, Shuah Khan, Joey Gouly,
    Ard Biesheuvel, Mark Rutland, Anshuman Khandual, David Hildenbrand,
    Shivansh Vij
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org
Subject: [RFC PATCH v1 3/5] arm64/mm: Add soft-dirty page tracking support
Date: Fri, 19 Apr 2024 08:43:42 +0100
Message-Id: <20240419074344.2643212-4-ryan.roberts@arm.com>
In-Reply-To: <20240419074344.2643212-1-ryan.roberts@arm.com>
References: <20240419074344.2643212-1-ryan.roberts@arm.com>
Use the final remaining PTE SW bit (63) for soft-dirty tracking.

The standard handlers are implemented for set/test/clear for both pte
and pmd. Additionally we must also track the soft-dirty state as a pte
swp bit, so use a free swap entry pte bit (61).

There are a few complexities worth calling out:

- The semantic of soft-dirty calls for having it auto-set by
  pte_mkdirty(). But the arch code would previously call pte_mkdirty()
  for various house-keeping operations, such as gathering dirty bits
  into a pte across a contpte block. These operations must not cause
  soft-dirty to be set. So an internal version, __pte_mkdirty(), has
  been created that does not manipulate soft-dirty, and pte_mkdirty()
  is now a wrapper around it that additionally sets the soft-dirty bit.

- For a region with soft-dirty tracking enabled, tracking works by
  wrprotecting the ptes, causing writes to fault. The fault handler
  calls pte_mkdirty(ptep_get()) (which causes soft-dirty to be set),
  then writes the resulting pte back with ptep_set_access_flags(). So
  the arm64 version of ptep_set_access_flags() now needs to explicitly
  preserve the soft-dirty bit to prevent its loss.

The patch is very loosely based on a similar patch posted by Shivansh
Vij at the link below. The primary motivation for adding soft-dirty
support is to allow Checkpoint-Restore in Userspace (CRIU) to track a
memory page's changes when pre-dumping is enabled, which is important
for live migration.
Link: https://lore.kernel.org/linux-arm-kernel/MW4PR12MB687563EFB56373E8D55DDEABB92B2@MW4PR12MB6875.namprd12.prod.outlook.com/ Signed-off-by: Ryan Roberts --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/pgtable-prot.h | 8 +++++ arch/arm64/include/asm/pgtable.h | 47 +++++++++++++++++++++++++-- arch/arm64/mm/contpte.c | 6 ++-- arch/arm64/mm/fault.c | 3 +- arch/arm64/mm/hugetlbpage.c | 6 ++-- 6 files changed, 61 insertions(+), 10 deletions(-) -- 2.25.1 diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 763e221f2169..3a5e22208e38 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -178,6 +178,7 @@ config ARM64 select HAVE_ARCH_PREL32_RELOCATIONS select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET select HAVE_ARCH_SECCOMP_FILTER + select HAVE_ARCH_SOFT_DIRTY select HAVE_ARCH_STACKLEAK select HAVE_ARCH_THREAD_STRUCT_WHITELIST select HAVE_ARCH_TRACEHOOK diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index f1e1f6306e03..7fce22ed3fda 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -28,6 +28,14 @@ #define PTE_SWP_UFFD_WP (_AT(pteval_t, 0)) #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */ +#ifdef CONFIG_MEM_SOFT_DIRTY +#define PTE_SOFT_DIRTY (_AT(pteval_t, 1) << 63) /* soft-dirty tracking */ +#define PTE_SWP_SOFT_DIRTY (_AT(pteval_t, 1) << 61) /* only for swp ptes */ +#else +#define PTE_SOFT_DIRTY (_AT(pteval_t, 0)) +#define PTE_SWP_SOFT_DIRTY (_AT(pteval_t, 0)) +#endif /* CONFIG_MEM_SOFT_DIRTY */ + /* * This bit indicates that the entry is present i.e. 
pmd_page() * still points to a valid huge page in memory even if the pmd diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 3f4748741fdb..0118e6e0adde 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -114,6 +114,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys) #define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN)) #define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT)) #define pte_devmap(pte) (!!(pte_val(pte) & PTE_DEVMAP)) +#define pte_soft_dirty(pte) (!!(pte_val(pte) & PTE_SOFT_DIRTY)) #define pte_tagged(pte) ((pte_val(pte) & PTE_ATTRINDX_MASK) == \ PTE_ATTRINDX(MT_NORMAL_TAGGED)) @@ -206,7 +207,7 @@ static inline pte_t pte_mkclean(pte_t pte) return pte; } -static inline pte_t pte_mkdirty(pte_t pte) +static inline pte_t __pte_mkdirty(pte_t pte) { pte = set_pte_bit(pte, __pgprot(PTE_DIRTY)); @@ -216,6 +217,11 @@ static inline pte_t pte_mkdirty(pte_t pte) return pte; } +static inline pte_t pte_mkdirty(pte_t pte) +{ + return __pte_mkdirty(set_pte_bit(pte, __pgprot(PTE_SOFT_DIRTY))); +} + static inline pte_t pte_wrprotect(pte_t pte) { /* @@ -299,6 +305,16 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte) } #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */ +static inline pte_t pte_mksoft_dirty(pte_t pte) +{ + return set_pte_bit(pte, __pgprot(PTE_SOFT_DIRTY)); +} + +static inline pte_t pte_clear_soft_dirty(pte_t pte) +{ + return clear_pte_bit(pte, __pgprot(PTE_SOFT_DIRTY)); +} + static inline void __set_pte(pte_t *ptep, pte_t pte) { WRITE_ONCE(*ptep, pte); @@ -508,6 +524,21 @@ static inline pte_t pte_swp_clear_uffd_wp(pte_t pte) } #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */ +static inline pte_t pte_swp_mksoft_dirty(pte_t pte) +{ + return set_pte_bit(pte, __pgprot(PTE_SWP_SOFT_DIRTY)); +} + +static inline bool pte_swp_soft_dirty(pte_t pte) +{ + return !!(pte_val(pte) & PTE_SWP_SOFT_DIRTY); +} + +static inline pte_t pte_swp_clear_soft_dirty(pte_t pte) +{ + return clear_pte_bit(pte, 
__pgprot(PTE_SWP_SOFT_DIRTY)); +} + #ifdef CONFIG_NUMA_BALANCING /* * See the comment in include/linux/pgtable.h @@ -562,6 +593,15 @@ static inline int pmd_trans_huge(pmd_t pmd) #define pmd_swp_clear_uffd_wp(pmd) \ pte_pmd(pte_swp_clear_uffd_wp(pmd_pte(pmd))) #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_WP */ +#define pmd_soft_dirty(pmd) pte_soft_dirty(pmd_pte(pmd)) +#define pmd_mksoft_dirty(pmd) pte_pmd(pte_mksoft_dirty(pmd_pte(pmd))) +#define pmd_clear_soft_dirty(pmd) \ + pte_pmd(pte_clear_soft_dirty(pmd_pte(pmd))) +#define pmd_swp_soft_dirty(pmd) pte_swp_soft_dirty(pmd_pte(pmd)) +#define pmd_swp_mksoft_dirty(pmd) \ + pte_pmd(pte_swp_mksoft_dirty(pmd_pte(pmd))) +#define pmd_swp_clear_soft_dirty(pmd) \ + pte_pmd(pte_swp_clear_soft_dirty(pmd_pte(pmd))) static inline pmd_t pmd_mkinvalid(pmd_t pmd) { @@ -1093,7 +1133,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) * dirtiness again. */ if (pte_sw_dirty(pte)) - pte = pte_mkdirty(pte); + pte = __pte_mkdirty(pte); return pte; } @@ -1228,7 +1268,7 @@ static inline pte_t __get_and_clear_full_ptes(struct mm_struct *mm, addr += PAGE_SIZE; tmp_pte = __ptep_get_and_clear(mm, addr, ptep); if (pte_dirty(tmp_pte)) - pte = pte_mkdirty(pte); + pte = __pte_mkdirty(pte); if (pte_young(tmp_pte)) pte = pte_mkyoung(pte); } @@ -1307,6 +1347,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma, * bit 54: PTE_PROT_NONE (overlays PTE_UXN) (must be zero) * bits 55-59: swap type * bit 60: PMD_PRESENT_INVALID (must be zero) + * bit 61: remember soft-dirty state */ #define __SWP_TYPE_SHIFT 55 #define __SWP_TYPE_BITS 5 diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c index 1b64b4c3f8bf..c6f52fcf5d9a 100644 --- a/arch/arm64/mm/contpte.c +++ b/arch/arm64/mm/contpte.c @@ -62,7 +62,7 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr, pte_t ptent = __ptep_get_and_clear(mm, addr, ptep); if (pte_dirty(ptent)) - pte = pte_mkdirty(pte); + pte = __pte_mkdirty(pte); if (pte_young(ptent)) pte = 
pte_mkyoung(pte); @@ -170,7 +170,7 @@ pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte) pte = __ptep_get(ptep); if (pte_dirty(pte)) - orig_pte = pte_mkdirty(orig_pte); + orig_pte = __pte_mkdirty(orig_pte); if (pte_young(pte)) orig_pte = pte_mkyoung(orig_pte); @@ -227,7 +227,7 @@ pte_t contpte_ptep_get_lockless(pte_t *orig_ptep) goto retry; if (pte_dirty(pte)) - orig_pte = pte_mkdirty(orig_pte); + orig_pte = __pte_mkdirty(orig_pte); if (pte_young(pte)) orig_pte = pte_mkyoung(orig_pte); diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 8251e2fea9c7..678171fd88bd 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -220,7 +220,8 @@ int __ptep_set_access_flags(struct vm_area_struct *vma, return 0; /* only preserve the access flags and write permission */ - pte_val(entry) &= PTE_RDONLY | PTE_AF | PTE_WRITE | PTE_DIRTY; + pte_val(entry) &= PTE_RDONLY | PTE_AF | PTE_WRITE | + PTE_DIRTY | PTE_SOFT_DIRTY; /* * Setting the flags must be done atomically to avoid racing with the diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c index 0f0e10bb0a95..4605eb146a2f 100644 --- a/arch/arm64/mm/hugetlbpage.c +++ b/arch/arm64/mm/hugetlbpage.c @@ -155,7 +155,7 @@ pte_t huge_ptep_get(pte_t *ptep) pte_t pte = __ptep_get(ptep); if (pte_dirty(pte)) - orig_pte = pte_mkdirty(orig_pte); + orig_pte = __pte_mkdirty(orig_pte); if (pte_young(pte)) orig_pte = pte_mkyoung(orig_pte); @@ -189,7 +189,7 @@ static pte_t get_clear_contig(struct mm_struct *mm, * so check them all. 
 	 */
 	if (pte_dirty(pte))
-		orig_pte = pte_mkdirty(orig_pte);
+		orig_pte = __pte_mkdirty(orig_pte);
 
 	if (pte_young(pte))
 		orig_pte = pte_mkyoung(orig_pte);
@@ -464,7 +464,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 
 	/* Make sure we don't lose the dirty or young state */
 	if (pte_dirty(orig_pte))
-		pte = pte_mkdirty(pte);
+		pte = __pte_mkdirty(pte);
 
 	if (pte_young(orig_pte))
 		pte = pte_mkyoung(pte);

From patchwork Fri Apr 19 07:43:43 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13635742
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Andrew Morton, Shuah Khan, Joey Gouly,
 Ard Biesheuvel, Mark Rutland, Anshuman Khandual, David Hildenbrand,
 Shivansh Vij
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org
Subject: [RFC PATCH v1 4/5] selftests/mm: Enable soft-dirty tests on arm64
Date: Fri, 19 Apr 2024 08:43:43 +0100
Message-Id: <20240419074344.2643212-5-ryan.roberts@arm.com>
In-Reply-To: <20240419074344.2643212-1-ryan.roberts@arm.com>
References: <20240419074344.2643212-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Now that arm64 supports soft-dirty tracking, let's enable the tests,
which were previously disabled for arm64 to reduce noise.

This reverts commit f6dd4e223d87 ("selftests/mm: skip soft-dirty tests
on arm64").
Signed-off-by: Ryan Roberts
---
 tools/testing/selftests/mm/Makefile        |  5 +----
 tools/testing/selftests/mm/madv_populate.c | 26 ++--------------------
 tools/testing/selftests/mm/run_vmtests.sh  |  5 +----
 3 files changed, 4 insertions(+), 32 deletions(-)

-- 
2.25.1

diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index eb5f39a2668b..7f1a6ad09534 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -65,6 +65,7 @@ TEST_GEN_FILES += thuge-gen
 TEST_GEN_FILES += transhuge-stress
 TEST_GEN_FILES += uffd-stress
 TEST_GEN_FILES += uffd-unit-tests
+TEST_GEN_FILES += soft-dirty
 TEST_GEN_FILES += split_huge_page_test
 TEST_GEN_FILES += ksm_tests
 TEST_GEN_FILES += ksm_functional_tests
@@ -72,10 +73,6 @@ TEST_GEN_FILES += mdwe_test
 TEST_GEN_FILES += hugetlb_fault_after_madv
 TEST_GEN_FILES += hugetlb_madv_vs_map
 
-ifneq ($(ARCH),arm64)
-TEST_GEN_FILES += soft-dirty
-endif
-
 ifeq ($(ARCH),x86_64)
 CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_32bit_program.c -m32)
 CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_64bit_program.c)
diff --git a/tools/testing/selftests/mm/madv_populate.c b/tools/testing/selftests/mm/madv_populate.c
index 17bcb07f19f3..60547245e479 100644
--- a/tools/testing/selftests/mm/madv_populate.c
+++ b/tools/testing/selftests/mm/madv_populate.c
@@ -264,35 +264,14 @@ static void test_softdirty(void)
 	munmap(addr, SIZE);
 }
 
-static int system_has_softdirty(void)
-{
-	/*
-	 * There is no way to check if the kernel supports soft-dirty, other
-	 * than by writing to a page and seeing if the bit was set. But the
-	 * tests are intended to check that the bit gets set when it should, so
-	 * doing that check would turn a potentially legitimate fail into a
-	 * skip. Fortunately, we know for sure that arm64 does not support
-	 * soft-dirty. So for now, let's just use the arch as a corse guide.
-	 */
-#if defined(__aarch64__)
-	return 0;
-#else
-	return 1;
-#endif
-}
-
 int main(int argc, char **argv)
 {
-	int nr_tests = 16;
 	int err;
 
 	pagesize = getpagesize();
 
-	if (system_has_softdirty())
-		nr_tests += 5;
-
 	ksft_print_header();
-	ksft_set_plan(nr_tests);
+	ksft_set_plan(21);
 
 	sense_support();
 	test_prot_read();
@@ -300,8 +279,7 @@ int main(int argc, char **argv)
 	test_holes();
 	test_populate_read();
 	test_populate_write();
-	if (system_has_softdirty())
-		test_softdirty();
+	test_softdirty();
 
 	err = ksft_get_fail_cnt();
 	if (err)
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index c2c542fe7b17..29806d352c73 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -395,10 +395,7 @@ then
 	CATEGORY="pkey" run_test ./protection_keys_64
 fi
 
-if [ -x ./soft-dirty ]
-then
-	CATEGORY="soft_dirty" run_test ./soft-dirty
-fi
+CATEGORY="soft_dirty" run_test ./soft-dirty
 
 CATEGORY="pagemap" run_test ./pagemap_ioctl

From patchwork Fri Apr 19 07:43:44 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13635743
From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Andrew Morton, Shuah Khan, Joey Gouly,
 Ard Biesheuvel, Mark Rutland, Anshuman Khandual, David Hildenbrand,
 Shivansh Vij
Cc: Ryan Roberts, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org
Subject: [PATCH v1 5/5] selftests/mm: soft-dirty should fail if a testcase fails
Date: Fri, 19 Apr 2024 08:43:44 +0100
Message-Id: <20240419074344.2643212-6-ryan.roberts@arm.com>
In-Reply-To: <20240419074344.2643212-1-ryan.roberts@arm.com>
References: <20240419074344.2643212-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Previously soft-dirty was unconditionally exiting with success, even if
one of its testcases failed. Let's fix that so that failure can be
reported to automated systems properly.

Signed-off-by: Ryan Roberts
Reviewed-by: David Hildenbrand
Reviewed-by: Muhammad Usama Anjum
---
 tools/testing/selftests/mm/soft-dirty.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.25.1

diff --git a/tools/testing/selftests/mm/soft-dirty.c b/tools/testing/selftests/mm/soft-dirty.c
index 7dbfa53d93a0..bdfa5d085f00 100644
--- a/tools/testing/selftests/mm/soft-dirty.c
+++ b/tools/testing/selftests/mm/soft-dirty.c
@@ -209,5 +209,5 @@ int main(int argc, char **argv)
 
 	close(pagemap_fd);
 
-	return ksft_exit_pass();
+	ksft_finished();
 }