From patchwork Mon Mar 18 13:28:46 2024
X-Patchwork-Submitter: "Prasad, Aravinda"
X-Patchwork-Id: 13595370
From: Aravinda Prasad <aravinda.prasad@intel.com>
To: damon@lists.linux.dev, linux-mm@kvack.org, sj@kernel.org,
	linux-kernel@vger.kernel.org
Cc: aravinda.prasad@intel.com, s2322819@ed.ac.uk, sandeep4.kumar@intel.com,
	ying.huang@intel.com, dave.hansen@intel.com, dan.j.williams@intel.com,
	sreenivas.subramoney@intel.com, antti.kervinen@intel.com,
	alexander.kanevskiy@intel.com
Subject: [PATCH v2 1/3] mm/damon: mm infrastructure support
Date: Mon, 18 Mar 2024 18:58:46 +0530
Message-Id: <20240318132848.82686-2-aravinda.prasad@intel.com>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20240318132848.82686-1-aravinda.prasad@intel.com>
References: <20240318132848.82686-1-aravinda.prasad@intel.com>

This patch adds mm infrastructure support to set and test access bits
at different levels of the page table tree. It also adds support to
check whether a given address falls within the PMD/PUD/PGD address range.
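Below is a minimal, illustrative sketch (not part of the patch) of how a
page-table walker might use the helpers introduced here to test and clear
the accessed bit at the upper levels of the tree. The function name
damon_mkold_like_walk() is hypothetical, and locking, huge-page handling,
and TLB concerns are deliberately elided; it only shows the intended call
pattern, assuming the architecture maintains non-leaf accessed bits.

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Hypothetical example walker: returns non-zero if any level was young. */
static int damon_mkold_like_walk(struct vm_area_struct *vma, unsigned long addr)
{
	struct mm_struct *mm = vma->vm_mm;
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	int young = 0;

	pgd = pgd_offset(mm, addr);
	if (pgd_none(*pgd))
		return 0;
	/* Non-leaf accessed bit at PGD level (generic stub returns 0). */
	young |= pgdp_test_and_clear_young(vma, addr, pgd);

	p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d))
		return young;
	young |= p4dp_test_and_clear_young(vma, addr, p4d);

	pud = pud_offset(p4d, addr);
	if (pud_none(*pud))
		return young;
	young |= pudp_test_and_clear_young(vma, addr, pud);

	return young;
}

On architectures that do not provide the new helpers, the generic stubs
added to include/linux/pgtable.h simply return 0, so such a walk degrades
to reporting "not accessed" at the non-leaf levels.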
Signed-off-by: Alan Nair
Signed-off-by: Aravinda Prasad
---
 arch/x86/include/asm/pgtable.h | 20 +++++++++
 arch/x86/mm/pgtable.c          | 28 +++++++++++-
 include/linux/mmu_notifier.h   | 36 ++++++++++++++++
 include/linux/pgtable.h        | 79 ++++++++++++++++++++++++++++++++++
 4 files changed, 161 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7621a5acb13e..b8d505194282 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -164,11 +164,24 @@ static inline bool pud_dirty(pud_t pud)
 	return pud_flags(pud) & _PAGE_DIRTY_BITS;
 }
 
+#define pud_young pud_young
 static inline int pud_young(pud_t pud)
 {
 	return pud_flags(pud) & _PAGE_ACCESSED;
 }
 
+#define p4d_young p4d_young
+static inline int p4d_young(p4d_t p4d)
+{
+	return p4d_flags(p4d) & _PAGE_ACCESSED;
+}
+
+#define pgd_young pgd_young
+static inline int pgd_young(pgd_t pgd)
+{
+	return pgd_flags(pgd) & _PAGE_ACCESSED;
+}
+
 static inline int pte_write(pte_t pte)
 {
 	/*
@@ -1329,10 +1342,17 @@ extern int pudp_set_access_flags(struct vm_area_struct *vma,
 				 pud_t entry, int dirty);
 
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
+#define pudp_test_and_clear_young pudp_test_and_clear_young
+#define p4dp_test_and_clear_young p4dp_test_and_clear_young
+#define pgdp_test_and_clear_young pgdp_test_and_clear_young
 extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 				     unsigned long addr, pmd_t *pmdp);
 extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
 				     unsigned long addr, pud_t *pudp);
+extern int p4dp_test_and_clear_young(struct vm_area_struct *vma,
+				     unsigned long addr, p4d_t *p4dp);
+extern int pgdp_test_and_clear_young(struct vm_area_struct *vma,
+				     unsigned long addr, pgd_t *pgdp);
 
 #define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
 extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ff690ddc2334..9f8e08326b43 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -578,9 +578,7 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 
 	return ret;
 }
-#endif
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 int pudp_test_and_clear_young(struct vm_area_struct *vma,
 			      unsigned long addr, pud_t *pudp)
 {
@@ -594,6 +592,32 @@ int pudp_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG
+int p4dp_test_and_clear_young(struct vm_area_struct *vma,
+			      unsigned long addr, p4d_t *p4dp)
+{
+	int ret = 0;
+
+	if (p4d_young(*p4dp))
+		ret = test_and_clear_bit(_PAGE_BIT_ACCESSED,
+					 (unsigned long *)p4dp);
+
+	return ret;
+}
+
+int pgdp_test_and_clear_young(struct vm_area_struct *vma,
+			      unsigned long addr, pgd_t *pgdp)
+{
+	int ret = 0;
+
+	if (pgd_young(*pgdp))
+		ret = test_and_clear_bit(_PAGE_BIT_ACCESSED,
+					 (unsigned long *)pgdp);
+
+	return ret;
+}
+#endif
+
 int ptep_clear_flush_young(struct vm_area_struct *vma,
 			   unsigned long address, pte_t *ptep)
 {
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index f349e08a9dfe..ec7fc170882e 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -581,6 +581,39 @@ static inline void mmu_notifier_range_init_owner(
 	__young;							\
 })
 
+#define pudp_clear_young_notify(__vma, __address, __pudp)		\
+({									\
+	int __young;							\
+	struct vm_area_struct *___vma = __vma;				\
+	unsigned long ___address = __address;				\
+	__young = pudp_test_and_clear_young(___vma, ___address, __pudp);\
+	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
+					    ___address + PUD_SIZE);	\
+	__young;							\
+})
+
+#define p4dp_clear_young_notify(__vma, __address, __p4dp)		\
+({									\
+	int __young;							\
+	struct vm_area_struct *___vma = __vma;				\
+	unsigned long ___address = __address;				\
+	__young = p4dp_test_and_clear_young(___vma, ___address, __p4dp);\
+	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
+					    ___address + P4D_SIZE);	\
+	__young;							\
+})
+
+#define pgdp_clear_young_notify(__vma, __address, __pgdp)		\
+({									\
+	int __young;							\
+	struct vm_area_struct *___vma = __vma;				\
+	unsigned long ___address = __address;				\
+	__young = pgdp_test_and_clear_young(___vma, ___address, __pgdp);\
+	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
+					    ___address + PGDIR_SIZE);	\
+	__young;							\
+})
+
 /*
  * set_pte_at_notify() sets the pte _after_ running the notifier.
  * This is safe to start by updating the secondary MMUs, because the primary MMU
@@ -690,6 +723,9 @@ static inline void mmu_notifier_subscriptions_destroy(struct mm_struct *mm)
 #define pmdp_clear_flush_young_notify pmdp_clear_flush_young
 #define ptep_clear_young_notify ptep_test_and_clear_young
 #define pmdp_clear_young_notify pmdp_test_and_clear_young
+#define pudp_clear_young_notify pudp_test_and_clear_young
+#define p4dp_clear_young_notify p4dp_test_and_clear_young
+#define pgdp_clear_young_notify pgdp_test_and_clear_young
 #define ptep_clear_flush_notify ptep_clear_flush
 #define pmdp_huge_clear_flush_notify pmdp_huge_clear_flush
 #define pudp_huge_clear_flush_notify pudp_huge_clear_flush
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 85fc7554cd52..09c3e8bb11bf 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -184,6 +184,27 @@ static inline int pmd_young(pmd_t pmd)
 }
 #endif
 
+#ifndef pud_young
+static inline int pud_young(pud_t pud)
+{
+	return 0;
+}
+#endif
+
+#ifndef p4d_young
+static inline int p4d_young(p4d_t p4d)
+{
+	return 0;
+}
+#endif
+
+#ifndef pgd_young
+static inline int pgd_young(pgd_t pgd)
+{
+	return 0;
+}
+#endif
+
 #ifndef pmd_dirty
 static inline int pmd_dirty(pmd_t pmd)
 {
@@ -386,6 +407,33 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG */
 #endif
 
+#ifndef pudp_test_and_clear_young
+static inline int pudp_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pud_t *pudp)
+{
+	return 0;
+}
+#endif
+
+#ifndef p4dp_test_and_clear_young
+static inline int p4dp_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    p4d_t *p4dp)
+{
+	return 0;
+}
+#endif
+
+#ifndef pgdp_test_and_clear_young
+static inline int pgdp_test_and_clear_young(struct vm_area_struct *vma,
+					    unsigned long address,
+					    pgd_t *pgdp)
+{
+	return 0;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
 int ptep_clear_flush_young(struct vm_area_struct *vma,
 			   unsigned long address, pte_t *ptep);
@@ -1090,6 +1138,37 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 #define flush_tlb_fix_spurious_fault(vma, address, ptep) flush_tlb_page(vma, address)
 #endif
 
+/*
+ * When walking page tables, get the address of the current boundary,
+ * or the start address of the range if that comes earlier.
+ */
+
+#define pgd_addr_start(addr, start)					\
+({	unsigned long __boundary = (addr) & PGDIR_MASK;			\
+	(__boundary > start) ? __boundary : (start);			\
+})
+
+#ifndef p4d_addr_start
+#define p4d_addr_start(addr, start)					\
+({	unsigned long __boundary = (addr) & P4D_MASK;			\
+	(__boundary > start) ? __boundary : (start);			\
+})
+#endif
+
+#ifndef pud_addr_start
+#define pud_addr_start(addr, start)					\
+({	unsigned long __boundary = (addr) & PUD_MASK;			\
+	(__boundary > start) ? __boundary : (start);			\
+})
+#endif
+
+#ifndef pmd_addr_start
+#define pmd_addr_start(addr, start)					\
+({	unsigned long __boundary = (addr) & PMD_MASK;			\
+	(__boundary > start) ? __boundary : (start);			\
+})
+#endif
+
 /*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier. Although no