From patchwork Wed Jan 26 17:30:00 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725504
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
 Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba, Quentin Perret,
 Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 01/12] asm-generic/pgalloc: allow arch to override PMD
 alloc/free routines
Date: Wed, 26 Jan 2022 18:30:00 +0100
Message-Id: <20220126173011.3476262-2-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>
References: <20220126173011.3476262-1-ardb@kernel.org>
List-ID: linux-hardening@vger.kernel.org

Extend the existing CPP-macro-based hooks that allow architectures to
specialize the code that allocates and frees pages to be used as page
tables.
Signed-off-by: Ard Biesheuvel
---
 include/asm-generic/pgalloc.h | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 977bea16cf1b..65f31f615d99 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -34,6 +34,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 }
 #endif

+#ifndef __HAVE_ARCH_PTE_FREE_KERNEL
 /**
  * pte_free_kernel - free PTE-level kernel page table page
  * @mm: the mm_struct of the current context
@@ -43,6 +44,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
 	free_page((unsigned long)pte);
 }
+#endif

 /**
  * __pte_alloc_one - allocate a page for PTE-level user page table
@@ -91,6 +93,7 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
  * done with a reference count in struct page.
  */

+#ifndef __HAVE_ARCH_PTE_FREE
 /**
  * pte_free - free PTE-level user page table page
  * @mm: the mm_struct of the current context
@@ -101,11 +104,11 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 	pgtable_pte_page_dtor(pte_page);
 	__free_page(pte_page);
 }
+#endif

 #if CONFIG_PGTABLE_LEVELS > 2

-#ifndef __HAVE_ARCH_PMD_ALLOC_ONE
 /**
  * pmd_alloc_one - allocate a page for PMD-level page table
  * @mm: the mm_struct of the current context
@@ -116,7 +119,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  *
  * Return: pointer to the allocated memory or %NULL on error
  */
-static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+static inline pmd_t *__pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
 	struct page *page;
 	gfp_t gfp = GFP_PGTABLE_USER;
@@ -132,6 +135,12 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 	}
 	return (pmd_t *)page_address(page);
 }
+
+#ifndef __HAVE_ARCH_PMD_ALLOC_ONE
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return __pmd_alloc_one(mm, addr);
+}
 #endif

 #ifndef __HAVE_ARCH_PMD_FREE

From patchwork Wed Jan 26 17:30:01 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725505
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
 Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba, Quentin Perret,
 Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 02/12] arm64: mm: add helpers to remap page tables
 read-only/read-write
Date: Wed, 26 Jan 2022 18:30:01 +0100
Message-Id: <20220126173011.3476262-3-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>
References: <20220126173011.3476262-1-ardb@kernel.org>

Add a couple of helpers to remap a single page read-only or read-write
via its linear address. These will be used for mappings of page table
pages in the linear region. Note that set_memory_ro()/set_memory_rw()
operate on addresses in the vmalloc space only, so they cannot be used
here.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgtable.h |  3 +++
 arch/arm64/mm/pageattr.c         | 14 ++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index c4ba047a82d2..8d3806c68687 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -34,6 +34,9 @@
 #include
 #include

+int set_pgtable_ro(void *addr);
+int set_pgtable_rw(void *addr);
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE

 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index a3bacd79507a..61f4aca08b95 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -153,6 +153,20 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 					__pgprot(PTE_VALID));
 }

+int set_pgtable_ro(void *addr)
+{
+	return __change_memory_common((u64)addr, PAGE_SIZE,
+				      __pgprot(PTE_RDONLY),
+				      __pgprot(PTE_WRITE));
+}
+
+int set_pgtable_rw(void *addr)
+{
+	return __change_memory_common((u64)addr, PAGE_SIZE,
+				      __pgprot(PTE_WRITE),
+				      __pgprot(PTE_RDONLY));
+}
+
 int set_direct_map_invalid_noflush(struct page *page)
 {
 	struct page_change_data data = {

From patchwork Wed Jan 26 17:30:02 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725506
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
 Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba, Quentin Perret,
 Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 03/12] arm64: mm: use a fixmap slot for user page table
 modifications
Date: Wed, 26 Jan 2022 18:30:02 +0100
Message-Id: <20220126173011.3476262-4-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>
References: <20220126173011.3476262-1-ardb@kernel.org>
To prepare for user and kernel page tables being remapped read-only in
the linear region, define a new fixmap slot and use it to apply all page
table descriptor updates that target page tables other than swapper.

Fortunately for us, the fixmap descriptors themselves are always
manipulated via their kernel mapping in .bss, so there is no special
exception required to avoid circular logic here.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig               |  11 +++
 arch/arm64/include/asm/fixmap.h  |   1 +
 arch/arm64/include/asm/pgalloc.h |  28 +++++-
 arch/arm64/include/asm/pgtable.h |  79 +++++++++++++---
 arch/arm64/mm/Makefile           |   2 +
 arch/arm64/mm/fault.c            |   8 +-
 arch/arm64/mm/ro_page_tables.c   | 100 ++++++++++++++++++++
 7 files changed, 209 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6978140edfa4..a3e98286b074 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1311,6 +1311,17 @@ config RODATA_FULL_DEFAULT_ENABLED
 	  This requires the linear region to be mapped down to pages,
 	  which may adversely affect performance in some cases.

+config ARM64_RO_PAGE_TABLES
+	bool "Remap page tables read-only in the kernel VA space"
+	select RODATA_FULL_DEFAULT_ENABLED
+	help
+	  Remap linear mappings of page table pages read-only as long as they
+	  are being used as such, and use a fixmap API to manipulate all page
+	  table descriptors, instead of manipulating them directly via their
+	  writable mappings in the direct map. This is intended as a debug
+	  and/or hardening feature, as it removes the ability for stray writes
+	  to be exploited to bypass permission restrictions.
+
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	help
diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index 4335800201c9..71dfbe0452bb 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -50,6 +50,7 @@ enum fixed_addresses {
 	FIX_EARLYCON_MEM_BASE,
 	FIX_TEXT_POKE0,
+	FIX_TEXT_POKE_PTE,

 #ifdef CONFIG_ACPI_APEI_GHES
 	/* Used for GHES mapping from assorted contexts */
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 237224484d0f..d54ac9f8d6c7 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -30,7 +30,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
 	pudval_t pudval = PUD_TYPE_TABLE;

 	pudval |= (mm == &init_mm) ? PUD_TABLE_UXN : PUD_TABLE_PXN;
-	__pud_populate(pudp, __pa(pmdp), pudval);
+	if (page_tables_are_ro())
+		xchg_ro_pte(mm, (pte_t *)pudp,
+			    __pte(__phys_to_pud_val(__pa(pmdp) | pudval)));
+	else
+		__pud_populate(pudp, __pa(pmdp), pudval);
 }
 #else
 static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
@@ -51,7 +55,11 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
 	p4dval_t p4dval = P4D_TYPE_TABLE;

 	p4dval |= (mm == &init_mm) ? P4D_TABLE_UXN : P4D_TABLE_PXN;
-	__p4d_populate(p4dp, __pa(pudp), p4dval);
+	if (page_tables_are_ro())
+		xchg_ro_pte(mm, (pte_t *)p4dp,
+			    __pte(__phys_to_p4d_val(__pa(pudp) | p4dval)));
+	else
+		__p4d_populate(p4dp, __pa(pudp), p4dval);
 }
 #else
 static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
@@ -76,15 +84,27 @@ static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t ptep,
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
 				       pte_t *ptep)
 {
+	pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
+
 	VM_BUG_ON(mm && mm != &init_mm);
-	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE | PMD_TABLE_UXN);
+	if (page_tables_are_ro())
+		xchg_ro_pte(mm, (pte_t *)pmdp,
+			    __pte(__phys_to_pmd_val(__pa(ptep) | pmdval)));
+	else
+		__pmd_populate(pmdp, __pa(ptep), pmdval);
 }

 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
 				pgtable_t ptep)
 {
+	pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_PXN;
+
 	VM_BUG_ON(mm == &init_mm);
-	__pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE | PMD_TABLE_PXN);
+	if (page_tables_are_ro())
+		xchg_ro_pte(mm, (pte_t *)pmdp,
+			    __pte(__phys_to_pmd_val(page_to_phys(ptep) | pmdval)));
+	else
+		__pmd_populate(pmdp, page_to_phys(ptep), pmdval);
 }
 #endif
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 8d3806c68687..a8daea6b4ac9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -30,6 +30,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -37,6 +38,17 @@
 int set_pgtable_ro(void *addr);
 int set_pgtable_rw(void *addr);

+DECLARE_STATIC_KEY_FALSE(ro_page_tables);
+
+static inline bool page_tables_are_ro(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_RO_PAGE_TABLES) &&
+	       static_branch_unlikely(&ro_page_tables);
+}
+
+pte_t xchg_ro_pte(struct mm_struct *mm, pte_t *ptep, pte_t pte);
+pte_t cmpxchg_ro_pte(struct mm_struct *mm, pte_t *ptep, pte_t old, pte_t new);
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
@@ -89,7 +101,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 	__pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))

 #define pte_none(pte)		(!pte_val(pte))
-#define pte_clear(mm,addr,ptep)	set_pte(ptep, __pte(0))
+#define pte_clear(mm,addr,ptep)	set_pte_at(mm, addr, ptep, __pte(0))
 #define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))

 /*
@@ -257,7 +269,10 @@ static inline pte_t pte_mkdevmap(pte_t pte)

 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	WRITE_ONCE(*ptep, pte);
+	if (page_tables_are_ro())
+		xchg_ro_pte(&init_mm, ptep, pte);
+	else
+		WRITE_ONCE(*ptep, pte);

 	/*
 	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
@@ -343,7 +358,10 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 	__check_racy_pte_update(mm, ptep, pte);

-	set_pte(ptep, pte);
+	if (page_tables_are_ro())
+		xchg_ro_pte(mm, ptep, pte);
+	else
+		set_pte(ptep, pte);
 }

 /*
@@ -579,7 +597,10 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 	}
 #endif /* __PAGETABLE_PMD_FOLDED */

-	WRITE_ONCE(*pmdp, pmd);
+	if (page_tables_are_ro())
+		xchg_ro_pte(&init_mm, (pte_t *)pmdp, pmd_pte(pmd));
+	else
+		WRITE_ONCE(*pmdp, pmd);

 	if (pmd_valid(pmd)) {
 		dsb(ishst);
@@ -589,7 +610,10 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)

 static inline void pmd_clear(pmd_t *pmdp)
 {
-	set_pmd(pmdp, __pmd(0));
+	if (page_tables_are_ro())
+		xchg_ro_pte(NULL, (pte_t *)pmdp, __pte(0));
+	else
+		set_pmd(pmdp, __pmd(0));
 }

 static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
@@ -640,7 +664,10 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
 	}
 #endif /* __PAGETABLE_PUD_FOLDED */

-	WRITE_ONCE(*pudp, pud);
+	if (page_tables_are_ro())
+		xchg_ro_pte(&init_mm, (pte_t *)pudp, pud_pte(pud));
+	else
+		WRITE_ONCE(*pudp, pud);

 	if (pud_valid(pud)) {
 		dsb(ishst);
@@ -650,7 +677,10 @@ static inline void set_pud(pud_t *pudp, pud_t pud)

 static inline void pud_clear(pud_t *pudp)
 {
-	set_pud(pudp, __pud(0));
+	if (page_tables_are_ro())
+		xchg_ro_pte(NULL, (pte_t *)pudp, __pte(0));
+	else
+		set_pud(pudp, __pud(0));
 }

 static inline phys_addr_t pud_page_paddr(pud_t pud)
@@ -704,14 +734,20 @@ static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 		return;
 	}

-	WRITE_ONCE(*p4dp, p4d);
+	if (page_tables_are_ro())
+		xchg_ro_pte(&init_mm, (pte_t *)p4dp, p4d_pte(p4d));
+	else
+		WRITE_ONCE(*p4dp, p4d);
 	dsb(ishst);
 	isb();
 }

 static inline void p4d_clear(p4d_t *p4dp)
 {
-	set_p4d(p4dp, __p4d(0));
+	if (page_tables_are_ro())
+		xchg_ro_pte(NULL, (pte_t *)p4dp, __pte(0));
+	else
+		set_p4d(p4dp, __p4d(0));
 }

 static inline phys_addr_t p4d_page_paddr(p4d_t p4d)
@@ -806,7 +842,7 @@ static inline int pgd_devmap(pgd_t pgd)
  * Atomic pte/pmd modifications.
  */
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int __ptep_test_and_clear_young(pte_t *ptep)
+static inline int __ptep_test_and_clear_young(struct mm_struct *mm, pte_t *ptep)
 {
 	pte_t old_pte, pte;
@@ -814,8 +850,13 @@ static inline int __ptep_test_and_clear_young(pte_t *ptep)
 	do {
 		old_pte = pte;
 		pte = pte_mkold(pte);
-		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
-					       pte_val(old_pte), pte_val(pte));
+
+		if (page_tables_are_ro())
+			pte = cmpxchg_ro_pte(mm, ptep, old_pte, pte);
+		else
+			pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+						       pte_val(old_pte),
+						       pte_val(pte));
 	} while (pte_val(pte) != pte_val(old_pte));

 	return pte_young(pte);
@@ -825,7 +866,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
 					    pte_t *ptep)
 {
-	return __ptep_test_and_clear_young(ptep);
+	return __ptep_test_and_clear_young(vma->vm_mm, ptep);
 }

 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
@@ -863,6 +904,8 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address, pte_t *ptep)
 {
+	if (page_tables_are_ro())
+		return xchg_ro_pte(mm, ptep, __pte(0));
 	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
 }
@@ -888,8 +931,12 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 	do {
 		old_pte = pte;
 		pte = pte_wrprotect(pte);
-		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
-					       pte_val(old_pte), pte_val(pte));
+		if (page_tables_are_ro())
+			pte = cmpxchg_ro_pte(mm, ptep, old_pte, pte);
+		else
+			pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+						       pte_val(old_pte),
+						       pte_val(pte));
 	} while (pte_val(pte) != pte_val(old_pte));
 }
@@ -905,6 +952,8 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
+	if (page_tables_are_ro())
+		return pte_pmd(xchg_ro_pte(vma->vm_mm, (pte_t *)pmdp, pmd_pte(pmd)));
 	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
 }
 #endif
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index ff1e800ba7a1..7750cafd969a 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -14,3 +14,5 @@ KASAN_SANITIZE_physaddr.o	+= n

 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
+
+obj-$(CONFIG_ARM64_RO_PAGE_TABLES) += ro_page_tables.o
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..5a5055c3e1c2 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -220,7 +220,13 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 		pteval ^= PTE_RDONLY;
 		pteval |= pte_val(entry);
 		pteval ^= PTE_RDONLY;
-		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+		if (page_tables_are_ro())
+			pteval = pte_val(cmpxchg_ro_pte(vma->vm_mm, ptep,
+							__pte(old_pteval),
+							__pte(pteval)));
+		else
+			pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval,
+						 pteval);
 	} while (pteval != old_pteval);

 	/* Invalidate a stale read-only entry */
diff --git a/arch/arm64/mm/ro_page_tables.c b/arch/arm64/mm/ro_page_tables.c
new file mode 100644
index 000000000000..f497adfd774d
--- /dev/null
+++ b/arch/arm64/mm/ro_page_tables.c
@@ -0,0 +1,100 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 - Google Inc
+ * Author: Ard Biesheuvel
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+static DEFINE_RAW_SPINLOCK(patch_pte_lock);
+
+DEFINE_STATIC_KEY_FALSE(ro_page_tables);
+
+static bool __initdata ro_page_tables_enabled = true;
+
+static int __init parse_ro_page_tables(char *arg)
+{
+	return strtobool(arg, &ro_page_tables_enabled);
+}
+early_param("ro_page_tables", parse_ro_page_tables);
+
+static bool in_kernel_text_or_rodata(phys_addr_t pa)
+{
+	/*
+	 * This is a minimal check to ensure that the r/o page table patching
+	 * API is not being abused to make changes to the kernel text. This
+	 * should ideally cover module and BPF text/rodata as well, but that
+	 * is less straight-forward and hence more costly.
+	 */
+	return pa >= __pa_symbol(_stext) && pa < __pa_symbol(__init_begin);
+}
+
+pte_t xchg_ro_pte(struct mm_struct *mm, pte_t *ptep, pte_t pte)
+{
+	unsigned long flags;
+	u64 pte_pa;
+	pte_t ret;
+	pte_t *p;
+
+	/* can we use __pa() on ptep?
+	 */
+	if (!virt_addr_valid(ptep)) {
+		/* only linear aliases are remapped r/o anyway */
+		pte_val(ret) = xchg_relaxed(&pte_val(*ptep), pte_val(pte));
+		return ret;
+	}
+
+	pte_pa = __pa(ptep);
+	BUG_ON(in_kernel_text_or_rodata(pte_pa));
+
+	raw_spin_lock_irqsave(&patch_pte_lock, flags);
+	p = (pte_t *)set_fixmap_offset(FIX_TEXT_POKE_PTE, pte_pa);
+	pte_val(ret) = xchg_relaxed(&pte_val(*p), pte_val(pte));
+	clear_fixmap(FIX_TEXT_POKE_PTE);
+	raw_spin_unlock_irqrestore(&patch_pte_lock, flags);
+	return ret;
+}
+
+pte_t cmpxchg_ro_pte(struct mm_struct *mm, pte_t *ptep, pte_t old, pte_t new)
+{
+	unsigned long flags;
+	u64 pte_pa;
+	pte_t ret;
+	pte_t *p;
+
+	BUG_ON(!virt_addr_valid(ptep));
+
+	pte_pa = __pa(ptep);
+	BUG_ON(in_kernel_text_or_rodata(pte_pa));
+
+	raw_spin_lock_irqsave(&patch_pte_lock, flags);
+	p = (pte_t *)set_fixmap_offset(FIX_TEXT_POKE_PTE, pte_pa);
+	pte_val(ret) = cmpxchg_relaxed(&pte_val(*p), pte_val(old), pte_val(new));
+	clear_fixmap(FIX_TEXT_POKE_PTE);
+	raw_spin_unlock_irqrestore(&patch_pte_lock, flags);
+	return ret;
+}
+
+static int __init ro_page_tables_init(void)
+{
+	if (ro_page_tables_enabled) {
+		if (!rodata_full) {
+			pr_err("Failed to enable R/O page table protection, rodata=full is not enabled\n");
+		} else {
+			pr_err("Enabling R/O page table protection\n");
+			static_branch_enable(&ro_page_tables);
+		}
+	}
+	return 0;
+}
+early_initcall(ro_page_tables_init);

From patchwork Wed Jan 26 17:30:03 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725507
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
 Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba, Quentin Perret,
 Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 04/12] arm64: mm: remap PGD pages r/o in the linear
 region after allocation
Date: Wed, 26 Jan 2022 18:30:03 +0100
Message-Id: <20220126173011.3476262-5-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>
References: <20220126173011.3476262-1-ardb@kernel.org>
As the first step in restricting write access to all page tables via
the linear mapping, remap the page at the root PGD level of a user
space page table hierarchy read-only after allocation, so that it can
only be manipulated using the dedicated fixmap based API.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/mmu.c |  7 ++++--
 arch/arm64/mm/pgd.c | 25 ++++++++++++++------
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index acfae9b41cc8..a52c3162beae 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -394,8 +394,11 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
 	BUG_ON(!ptr);

-	/* Ensure the zeroed page is visible to the page table walker */
-	dsb(ishst);
+	if (page_tables_are_ro())
+		set_pgtable_ro(ptr);
+	else
+		/* Ensure the zeroed page is visible to the page table walker */
+		dsb(ishst);
 	return __pa(ptr);
 }

diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index 4a64089e5771..637d6eceeada 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -9,8 +9,10 @@
 #include
 #include
 #include
+#include

 #include
+#include
 #include
 #include
 #include
@@ -20,24 +22,33 @@ static struct kmem_cache *pgd_cache __ro_after_init;

 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	gfp_t gfp = GFP_PGTABLE_USER;
+	pgd_t *pgd;

-	if (PGD_SIZE == PAGE_SIZE)
-		return (pgd_t *)__get_free_page(gfp);
-	else
+	if (PGD_SIZE < PAGE_SIZE && !page_tables_are_ro())
 		return kmem_cache_alloc(pgd_cache, gfp);
+
+	pgd = (pgd_t *)__get_free_page(gfp);
+	if (!pgd)
+		return NULL;
+	if (page_tables_are_ro())
+		set_pgtable_ro(pgd);
+	return pgd;
 }

 void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	if (PGD_SIZE == PAGE_SIZE)
-		free_page((unsigned long)pgd);
-	else
+	if (PGD_SIZE < PAGE_SIZE && !page_tables_are_ro()) {
 		kmem_cache_free(pgd_cache, pgd);
+	} else {
+		if (page_tables_are_ro())
+			set_pgtable_rw(pgd);
+		free_page((unsigned long)pgd);
+	}
 }

 void __init pgtable_cache_init(void)
 {
-	if (PGD_SIZE == PAGE_SIZE)
+	if (PGD_SIZE == PAGE_SIZE || page_tables_are_ro())
 		return;

 #ifdef CONFIG_ARM64_PA_BITS_52

From patchwork Wed Jan 26 17:30:04 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 05/12] arm64: mm: remap PUD pages r/o in linear region
Date: Wed, 26 Jan 2022 18:30:04 +0100
Message-Id: <20220126173011.3476262-6-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

Implement the arch-specific PUD alloc/free helpers by wrapping the
generic code, and remapping the page read-only on allocation and
read-write on free.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgalloc.h |  5 +++++
 arch/arm64/include/asm/tlb.h     |  2 ++
 arch/arm64/mm/mmu.c              | 20 ++++++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index d54ac9f8d6c7..737e9f32b199 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -14,6 +14,8 @@
 #include 
 
 #define __HAVE_ARCH_PGD_FREE
+#define __HAVE_ARCH_PUD_ALLOC_ONE
+#define __HAVE_ARCH_PUD_FREE
 #include 
 
 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
@@ -45,6 +47,9 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
 
 #if CONFIG_PGTABLE_LEVELS > 3
 
+pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr);
+void pud_free(struct mm_struct *mm, pud_t *pud);
+
 static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
 {
 	set_p4d(p4dp, __p4d(__phys_to_p4d_val(pudp) | prot));
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index c995d1f4594f..6557626752fc 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -94,6 +94,8 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
+	if (page_tables_are_ro())
+		set_pgtable_rw(pudp);
 	tlb_remove_table(tlb, virt_to_page(pudp));
 }
 #endif
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a52c3162beae..03d77c4c3570 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1645,3 +1645,23 @@ static int __init prevent_bootmem_remove_init(void)
 }
 early_initcall(prevent_bootmem_remove_init);
 #endif
+
+#ifndef __PAGETABLE_PUD_FOLDED
+pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	pud_t *pud = __pud_alloc_one(mm, addr);
+
+	if (!pud)
+		return NULL;
+	if (page_tables_are_ro())
+		set_pgtable_ro(pud);
+	return pud;
+}
+
+void pud_free(struct mm_struct *mm, pud_t *pud)
+{
+	if (page_tables_are_ro())
+		set_pgtable_rw(pud);
+	free_page((u64)pud);
+}
+#endif

From patchwork Wed Jan 26 17:30:05 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 06/12] arm64: mm: remap PMD pages r/o in linear region
Date: Wed, 26 Jan 2022 18:30:05 +0100
Message-Id: <20220126173011.3476262-7-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

PMD modifications all go through the fixmap update routine, so there is
no longer a need to keep these tables mapped read/write in the linear
region.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgalloc.h |  5 +++++
 arch/arm64/include/asm/tlb.h     |  2 ++
 arch/arm64/mm/mmu.c              | 21 ++++++++++++++++++++
 3 files changed, 28 insertions(+)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 737e9f32b199..63f9ae9e96fe 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -16,12 +16,17 @@
 #define __HAVE_ARCH_PGD_FREE
 #define __HAVE_ARCH_PUD_ALLOC_ONE
 #define __HAVE_ARCH_PUD_FREE
+#define __HAVE_ARCH_PMD_ALLOC_ONE
+#define __HAVE_ARCH_PMD_FREE
 #include 
 
 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
 
 #if CONFIG_PGTABLE_LEVELS > 2
 
+pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr);
+void pmd_free(struct mm_struct *mm, pmd_t *pmd);
+
 static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
 {
 	set_pud(pudp, __pud(__phys_to_pud_val(pmdp) | prot));
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 6557626752fc..0f54fbb59bba 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -85,6 +85,8 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 {
 	struct page *page = virt_to_page(pmdp);
 
+	if (page_tables_are_ro())
+		set_pgtable_rw(pmdp);
 	pgtable_pmd_page_dtor(page);
 	tlb_remove_table(tlb, page);
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 03d77c4c3570..e55d91a5f1ed 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1665,3 +1665,24 @@ void pud_free(struct mm_struct *mm, pud_t *pud)
 	free_page((u64)pud);
 }
 #endif
+
+#ifndef __PAGETABLE_PMD_FOLDED
+pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	pmd_t *pmd = __pmd_alloc_one(mm, addr);
+
+	if (!pmd)
+		return NULL;
+	if (page_tables_are_ro())
+		set_pgtable_ro(pmd);
+	return pmd;
+}
+
+void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	if (page_tables_are_ro())
+		set_pgtable_rw(pmd);
+	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	free_page((u64)pmd);
+}
+#endif

From patchwork Wed Jan 26 17:30:06 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 07/12] arm64: mm: remap PTE level user page tables r/o in the linear region
Date: Wed, 26 Jan 2022 18:30:06 +0100
Message-Id: <20220126173011.3476262-8-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

Now that all PTE manipulations for user space tables go via the fixmap,
we can remap these tables read-only in the linear region so they cannot
be corrupted inadvertently.
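The fixmap approach relies on the same physical page being reachable through two mappings with different permissions: the linear-map alias becomes read-only, while a transient writable alias is used for the actual update. A rough user-space analogy uses two `mmap()` views of one file; `demo_dual_mapping()` is a hypothetical name for illustration, not kernel code, and the kernel's fixmap is a per-CPU kernel mapping rather than a file mapping.

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the same backing store twice: a read-only "linear map" alias and
 * a read-write "fixmap" alias. A write through the writable alias is
 * visible through the read-only one, because both name the same page.
 * Returns 0 on success, -1 on failure. */
int demo_dual_mapping(void)
{
	size_t sz = (size_t)sysconf(_SC_PAGESIZE);
	char path[] = "/tmp/ptdemoXXXXXX";
	int fd = mkstemp(path);
	int ret = -1;

	if (fd < 0)
		return -1;
	unlink(path);			/* backing store vanishes on close */
	if (ftruncate(fd, (off_t)sz) != 0)
		goto out;

	char *ro = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
	char *rw = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (ro == MAP_FAILED || rw == MAP_FAILED)
		goto unmap;

	strcpy(rw, "pte");		/* update via the writable alias */
	ret = (strcmp(ro, "pte") == 0) ? 0 : -1; /* observe via the r/o alias */
unmap:
	if (ro != MAP_FAILED)
		munmap(ro, sz);
	if (rw != MAP_FAILED)
		munmap(rw, sz);
out:
	close(fd);
	return ret;
}
```

Writing through `ro` would fault, which is the protection the series gets from making the linear-map alias of page table pages read-only.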
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgalloc.h |  5 +++++
 arch/arm64/include/asm/tlb.h     |  2 ++
 arch/arm64/mm/mmu.c              | 23 ++++++++++++++++++++
 3 files changed, 30 insertions(+)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 63f9ae9e96fe..18a5bb0c9ee4 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -18,10 +18,15 @@
 #define __HAVE_ARCH_PUD_FREE
 #define __HAVE_ARCH_PMD_ALLOC_ONE
 #define __HAVE_ARCH_PMD_FREE
+#define __HAVE_ARCH_PTE_ALLOC_ONE
+#define __HAVE_ARCH_PTE_FREE
 #include 
 
 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
 
+pgtable_t pte_alloc_one(struct mm_struct *mm);
+void pte_free(struct mm_struct *mm, struct page *pte_page);
+
 #if CONFIG_PGTABLE_LEVELS > 2
 
 pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr);
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 0f54fbb59bba..e69a44160cce 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -75,6 +75,8 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
+	if (page_tables_are_ro())
+		set_pgtable_rw(page_address(pte));
 	pgtable_pte_page_dtor(pte);
 	tlb_remove_table(tlb, pte);
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e55d91a5f1ed..949846654797 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1686,3 +1686,26 @@ void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 	free_page((u64)pmd);
 }
 #endif
+
+pgtable_t pte_alloc_one(struct mm_struct *mm)
+{
+	pgtable_t pgt = __pte_alloc_one(mm, GFP_PGTABLE_USER);
+
+	VM_BUG_ON(mm == &init_mm);
+
+	if (!pgt)
+		return NULL;
+	if (page_tables_are_ro())
+		set_pgtable_ro(page_address(pgt));
+	return pgt;
+}
+
+void pte_free(struct mm_struct *mm, struct page *pte_page)
+{
+	VM_BUG_ON(mm == &init_mm);
+
+	if (page_tables_are_ro())
+		set_pgtable_rw(page_address(pte_page));
+	pgtable_pte_page_dtor(pte_page);
+	__free_page(pte_page);
+}

From patchwork Wed Jan 26 17:30:07 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 08/12] arm64: mm: remap kernel PTE level page tables r/o in the linear region
Date: Wed, 26 Jan 2022 18:30:07 +0100
Message-Id: <20220126173011.3476262-9-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

Now that all kernel page table manipulations are routed through the
fixmap API if r/o page tables are enabled, we can remove write access
from the linear mapping of those pages.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/pgalloc.h |  6 +++++
 arch/arm64/mm/mmu.c              | 24 +++++++++++++++++++-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 18a5bb0c9ee4..073482634e74 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -20,6 +20,9 @@
 #define __HAVE_ARCH_PMD_FREE
 #define __HAVE_ARCH_PTE_ALLOC_ONE
 #define __HAVE_ARCH_PTE_FREE
+#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
+#define __HAVE_ARCH_PTE_FREE_KERNEL
+
 #include 
 
 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
@@ -27,6 +30,9 @@
 pgtable_t pte_alloc_one(struct mm_struct *mm);
 void pte_free(struct mm_struct *mm, struct page *pte_page);
 
+pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
+void pte_free_kernel(struct mm_struct *mm, pte_t *pte);
+
 #if CONFIG_PGTABLE_LEVELS > 2
 
 pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 949846654797..971501535757 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1402,7 +1402,7 @@ int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
 	table = pte_offset_kernel(pmdp, addr);
 	pmd_clear(pmdp);
 	__flush_tlb_kernel_pgtable(addr);
-	pte_free_kernel(NULL, table);
+	pte_free_kernel(&init_mm, table);
 	return 1;
 }
 
@@ -1709,3 +1709,25 @@ void pte_free(struct mm_struct *mm, struct page *pte_page)
 	pgtable_pte_page_dtor(pte_page);
 	__free_page(pte_page);
 }
+
+pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
+{
+	pte_t *pte = __pte_alloc_one_kernel(mm);
+
+	VM_BUG_ON(mm != &init_mm);
+
+	if (!pte)
+		return NULL;
+	if (page_tables_are_ro())
+		set_pgtable_ro(pte);
+	return pte;
+}
+
+void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	VM_BUG_ON(mm != &init_mm);
+
+	if (page_tables_are_ro())
+		set_pgtable_rw(pte);
+	free_page((u64)pte);
+}

From patchwork Wed Jan 26 17:30:08 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 09/12] arm64: mm: remap kernel page tables read-only at end of init
Date: Wed, 26 Jan 2022 18:30:08 +0100
Message-Id: <20220126173011.3476262-10-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

Now that all the handling is in place to deal with read-only page
tables at runtime, do a pass over the kernel page tables at boot to
remap all the page table pages read-only that were allocated early.
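The boot-time pass is a depth-first walk: for each valid table entry, recurse into the next level first, then mark that next-level page read-only, so a table is never write-protected before its children have been visited. A toy user-space model of the same bottom-up traversal follows; `struct toy_entry` and `mark_tables_ro()` are invented stand-ins (a real `pmd_t` encodes the table/block distinction in descriptor bits, not in a separate flag).

```c
#include <stddef.h>

#define TOY_ENTRIES 4

/* Toy descriptor: a next-level pointer plus a "table" flag, standing in
 * for a page table entry whose low bits distinguish table from block
 * mappings. */
struct toy_entry {
	struct toy_entry *next;		/* lower-level table, or NULL */
	int is_table;
};

/* Walk num_entries descriptors at the given level; whenever an entry
 * points to a lower-level table, recurse first and then mark that
 * table, mirroring the bottom-up order of the kernel walk.
 * Returns how many tables were marked. */
int mark_tables_ro(const struct toy_entry *tbl, int level, int num_entries,
		   int max_level)
{
	int marked = 0;

	while (num_entries--) {
		if (tbl->is_table && tbl->next) {
			if (level < max_level)
				marked += mark_tables_ro(tbl->next, level + 1,
							 TOY_ENTRIES, max_level);
			marked++;	/* stands in for set_pgtable_ro(next) */
		}
		tbl++;
	}
	return marked;
}
```

With a three-level toy tree (root, one mid-level table, one leaf table), the walk marks exactly the two lower-level tables; the root itself is handled separately, just as `swapper_pg_dir` lives in the kernel's already-protected rodata.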
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/mm/mmu.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 971501535757..b1212f6d48f2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -559,8 +559,23 @@ static void __init map_mem(pgd_t *pgdp)
 	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
 }
 
+static void mark_pgtables_ro(const pmd_t *pmdp, int level, int num_entries)
+{
+	while (num_entries--) {
+		if (pmd_valid(*pmdp) && pmd_table(*pmdp)) {
+			pmd_t *next = __va(__pmd_to_phys(*pmdp));
+
+			if (level < 2)
+				mark_pgtables_ro(next, level + 1, PTRS_PER_PMD);
+			set_pgtable_ro(next);
+		}
+		pmdp++;
+	}
+}
+
 void mark_rodata_ro(void)
 {
+	int pgd_level = 4 - CONFIG_PGTABLE_LEVELS;
 	unsigned long section_size;
 
 	/*
@@ -571,6 +586,11 @@ void mark_rodata_ro(void)
 	update_mapping_prot(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+	mark_pgtables_ro((pmd_t *)&tramp_pg_dir, pgd_level, PTRS_PER_PGD);
+#endif
+	mark_pgtables_ro((pmd_t *)&swapper_pg_dir, pgd_level, PTRS_PER_PGD);
+
 	debug_checkwx();
 }

From patchwork Wed Jan 26 17:30:09 2022
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 10/12] mm: add default definition of p4d_index()
Date: Wed, 26 Jan 2022 18:30:09 +0100
Message-Id: <20220126173011.3476262-11-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

Implement a default version of p4d_index() similar to how pud/pmd_index
are defined.

Signed-off-by: Ard Biesheuvel
---
 include/linux/pgtable.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index bc8713a76e03..e8aacf6ea207 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -79,6 +79,14 @@ static inline unsigned long pud_index(unsigned long address)
 #define pud_index pud_index
 #endif
 
+#ifndef p4d_index
+static inline unsigned long p4d_index(unsigned long address)
+{
+	return (address >> P4D_SHIFT) & (PTRS_PER_P4D - 1);
+}
+#define p4d_index p4d_index
+#endif
+
 #ifndef pgd_index
 /* Must be a compile-time constant, so implement it as a macro */
 #define pgd_index(a)  (((a) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))

From patchwork Wed Jan 26 17:30:10 2022
From patchwork Wed Jan 26 17:30:10 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725514
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
 Will Deacon, Marc Zyngier, Fuad Tabba, Quentin Perret, Mark Rutland,
 James Morse, Catalin Marinas
Subject: [RFC PATCH 11/12] arm64: efi: use set_pte_at() not set_pte() in
 order to pass mm pointer
Date: Wed, 26 Jan 2022 18:30:10 +0100
Message-Id: <20220126173011.3476262-12-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>

The set_pte() helper does not carry the struct mm pointer, which makes it
difficult for the implementation to reason about the context in which the
set_pte() call is taking place. So switch to set_pte_at() instead.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/efi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index e1be6c429810..e3e50adfae18 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -92,7 +92,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
 		pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
 	if (md->attribute & EFI_MEMORY_XP)
 		pte = set_pte_bit(pte, __pgprot(PTE_PXN));
-	set_pte(ptep, pte);
+	set_pte_at(&efi_mm, addr, ptep, pte);
 	return 0;
 }

From patchwork Wed Jan 26 17:30:11 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725515
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
 Will Deacon, Marc Zyngier, Fuad Tabba, Quentin Perret, Mark Rutland,
 James Morse, Catalin Marinas
Subject: [RFC PATCH 12/12] arm64: hugetlb: use set_pte_at() not set_pte() to
 provide mm pointer
Date: Wed, 26 Jan 2022 18:30:11 +0100
Message-Id: <20220126173011.3476262-13-ardb@kernel.org>
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>
Switch to set_pte_at() so we can provide the mm pointer to the code that
performs the page table update.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/hugetlbpage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index ffb9c229610a..099b28b00f4c 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -252,8 +252,8 @@ void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
 
 	ncontig = num_contig_ptes(sz, &pgsize);
 
-	for (i = 0; i < ncontig; i++, ptep++)
-		set_pte(ptep, pte);
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
+		set_pte_at(mm, addr, ptep, pte);
 }
 
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,