From patchwork Wed Mar 13 04:21:17 2024
X-Patchwork-Submitter: Rohan McLure
X-Patchwork-Id: 13590994
From: Rohan McLure <rmclure@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: Rohan McLure <rmclure@linux.ibm.com>, mpe@ellerman.id.au,
	christophe.leroy@csgroup.eu, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org
Subject: [PATCH v10 12/12] powerpc: mm: Support page table check
Date: Wed, 13 Mar 2024 15:21:17 +1100
Message-ID: <20240313042118.230397-13-rmclure@linux.ibm.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240313042118.230397-1-rmclure@linux.ibm.com>
References: <20240313042118.230397-1-rmclure@linux.ibm.com>
MIME-Version: 1.0

On creation and clearing of a page table mapping, instrument such calls
by invoking page_table_check_pte_set() and page_table_check_pte_clear()
respectively. These calls serve as a sanity check against illegal
mappings.

Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK for all platforms.

See also:

riscv support in commit 3fee229a8eb9 ("riscv/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
arm64 in commit 42b2547137f5 ("arm64/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK")
x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64 support for page
table check")

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
v9: Updated for new API.
    Instrument pmdp_collapse_flush()'s two constituent calls to avoid
    header hell.
v10: Cause p{u,m}dp_huge_get_and_clear() to resemble one another.
---
 arch/powerpc/Kconfig                         |  1 +
 arch/powerpc/include/asm/book3s/32/pgtable.h |  7 ++-
 arch/powerpc/include/asm/book3s/64/pgtable.h | 45 +++++++++++++++-----
 arch/powerpc/mm/book3s64/hash_pgtable.c      |  4 ++
 arch/powerpc/mm/book3s64/pgtable.c           | 11 +++--
 arch/powerpc/mm/book3s64/radix_pgtable.c     |  3 ++
 arch/powerpc/mm/pgtable.c                    |  4 ++
 7 files changed, 61 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b9fc064d38d2..2dfa5ccb25cc 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -166,6 +166,7 @@ config PPC
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC_BOOK3S || PPC_8xx || 40x
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
 	select ARCH_USE_MEMTEST
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 52971ee30717..a97edbc09984 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -201,6 +201,7 @@ void unmap_kernel_page(unsigned long va);
 #ifndef __ASSEMBLY__
 #include <linux/sched.h>
 #include <linux/threads.h>
+#include <linux/page_table_check.h>
 
 /* Bits to mask out from a PGD to get to the PUD page */
 #define PGD_MASKED_BITS		0
@@ -314,7 +315,11 @@ static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 				       pte_t *ptep)
 {
-	return __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+	pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, 0, 0));
+
+	page_table_check_pte_clear(mm, addr, old_pte);
+
+	return old_pte;
 }
 
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index ca765331e21d..4ad88d4ede88 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -145,6 +145,8 @@
 #define PAGE_KERNEL_ROX	__pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
 
 #ifndef __ASSEMBLY__
+#include <linux/page_table_check.h>
+
 /*
  * page table defines
  */
@@ -415,8 +417,11 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pte_t *ptep)
 {
-	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
-	return __pte(old);
+	pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 0));
+
+	page_table_check_pte_clear(mm, addr, old_pte);
+
+	return old_pte;
 }
 
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
@@ -425,11 +430,16 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 					    pte_t *ptep, int full)
 {
 	if (full && radix_enabled()) {
+		pte_t old_pte;
+
 		/*
 		 * We know that this is a full mm pte clear and
 		 * hence can be sure there is no parallel set_pte.
 		 */
-		return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+		old_pte = radix__ptep_get_and_clear_full(mm, addr, ptep, full);
+		page_table_check_pte_clear(mm, addr, old_pte);
+
+		return old_pte;
 	}
 	return ptep_get_and_clear(mm, addr, ptep);
 }
@@ -1334,19 +1344,34 @@ extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long addr, pmd_t *pmdp)
 {
-	if (radix_enabled())
-		return radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
-	return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+	pmd_t old_pmd;
+
+	if (radix_enabled()) {
+		old_pmd = radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
+	} else {
+		old_pmd = hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+	}
+
+	page_table_check_pmd_clear(mm, addr, old_pmd);
+
+	return old_pmd;
 }
 
 #define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR
 static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long addr, pud_t *pudp)
 {
-	if (radix_enabled())
-		return radix__pudp_huge_get_and_clear(mm, addr, pudp);
-	BUG();
-	return *pudp;
+	pud_t old_pud;
+
+	if (radix_enabled()) {
+		old_pud = radix__pudp_huge_get_and_clear(mm, addr, pudp);
+	} else {
+		BUG();
+	}
+
+	page_table_check_pud_clear(mm, addr, old_pud);
+
+	return old_pud;
 }
 
 static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 871472f99a01..f200d55c35d8 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/mm_types.h>
 #include <linux/mm.h>
+#include <linux/page_table_check.h>
 #include <linux/stop_machine.h>
 
 #include <asm/sections.h>
@@ -231,6 +232,9 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
 
 	pmd = *pmdp;
 	pmd_clear(pmdp);
+
+	page_table_check_pmd_clear(vma->vm_mm, address, pmd);
+
 	/*
 	 * Wait for all pending hash_page to finish. This is needed
 	 * in case of subpage collapse. When we collapse normal pages
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 25082ab6018b..fa352da844a9 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -10,6 +10,7 @@
 #include <linux/pkeys.h>
 #include <linux/debugfs.h>
 #include <linux/proc_fs.h>
+#include <linux/page_table_check.h>
 
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
@@ -116,6 +117,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 	WARN_ON(!(pmd_large(pmd)));
 #endif
 	trace_hugepage_set_pmd(addr, pmd_val(pmd));
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
 	return set_pte_at_unchecked(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
 }
 
@@ -133,6 +135,7 @@ void set_pud_at(struct mm_struct *mm, unsigned long addr,
 	WARN_ON(!(pud_large(pud)));
 #endif
 	trace_hugepage_set_pud(addr, pud_val(pud));
+	page_table_check_pud_set(mm, addr, pudp, pud);
 	return set_pte_at_unchecked(mm, addr, pudp_ptep(pudp), pud_pte(pud));
 }
 
@@ -168,11 +171,13 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
 pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 		      pmd_t *pmdp)
 {
-	unsigned long old_pmd;
+	pmd_t old_pmd;
 
-	old_pmd = pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID);
+	old_pmd = __pmd(pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID));
 	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
-	return __pmd(old_pmd);
+	page_table_check_pmd_clear(vma->vm_mm, address, old_pmd);
+
+	return old_pmd;
 }
 
 pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index c661e42bb2f1..1fafb9fb6231 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -14,6 +14,7 @@
 #include <linux/of.h>
 #include <linux/of_fdt.h>
 #include <linux/mm.h>
+#include <linux/page_table_check.h>
 #include <linux/hugetlb.h>
 #include <linux/string_helpers.h>
 #include <linux/memory.h>
@@ -1390,6 +1391,8 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
 	pmd = *pmdp;
 	pmd_clear(pmdp);
 
+	page_table_check_pmd_clear(vma->vm_mm, address, pmd);
+
 	radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
 
 	return pmd;
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 352679cf2684..e89b28a7b313 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -22,6 +22,7 @@
 #include <linux/mm.h>
 #include <linux/percpu.h>
 #include <linux/hardirq.h>
+#include <linux/page_table_check.h>
 #include <linux/hugetlb.h>
 #include <asm/tlbflush.h>
 #include <asm/tlb.h>
@@ -206,6 +207,9 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 	 * and not hw_valid ptes. Hence there is no translation cache flush
 	 * involved that need to be batched.
 	 */
+
+	page_table_check_ptes_set(mm, addr, ptep, pte, nr);
+
 	for (;;) {
 		/*
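
The instrumentation pattern is the same at every site the patch touches:
perform the architecture-specific page table update first, then report
the old entry (for clears) or the new entry (for sets) to the page table
checker, which tracks per-page mapping counts and complains about
illegal states such as double mappings. Below is a minimal sketch of the
clear-side pattern, assuming the three-argument page_table_check_*
signatures used throughout this series; the function name is
illustrative only, not part of the patch:

	/* Illustrative sketch only -- mirrors the pattern applied above. */
	static inline pte_t example_ptep_get_and_clear(struct mm_struct *mm,
						       unsigned long addr,
						       pte_t *ptep)
	{
		/* 1. Atomically tear down the PTE, capturing the old entry. */
		pte_t old_pte = __pte(pte_update(mm, addr, ptep, ~0UL, 0, 0));

		/*
		 * 2. Report the cleared mapping to the checker, which updates
		 *    its per-page accounting and complains if the mapping was
		 *    illegal (e.g. an anonymous page mapped a second time).
		 */
		page_table_check_pte_clear(mm, addr, old_pte);

		return old_pte;
	}

Set-side hooks (set_pmd_at(), set_pud_at(), set_ptes()) follow the
mirror-image ordering: the page_table_check_*_set() call is made before
the new entry is written, as seen in the book3s64/pgtable.c and
mm/pgtable.c hunks above.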