From patchwork Mon Apr 18 03:44:41 2022
X-Patchwork-Id: 12816242
From: Tong Tiangen
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
    Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: Tong Tiangen, Kefeng Wang, Guohanjun
Subject: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
Date: Mon, 18 Apr 2022 03:44:41 +0000
Message-ID: <20220418034444.520928-2-tongtiangen@huawei.com>
In-Reply-To: <20220418034444.520928-1-tongtiangen@huawei.com>
References: <20220418034444.520928-1-tongtiangen@huawei.com>

From: Kefeng Wang

The pxx_user_accessible_page() helpers check architecture-specific PTE
bits, so move them into x86's pgtable.h. Also add default PMD_PAGE_SIZE
and PUD_PAGE_SIZE definitions, in preparation for supporting the page
table check feature on new architectures.

Signed-off-by: Kefeng Wang
Acked-by: Pasha Tatashin
---
 arch/x86/include/asm/pgtable.h | 19 +++++++++++++++++++
 mm/page_table_check.c          | 25 ++++++++-----------------
 2 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b7464f13e416..564abe42b0f7 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1447,6 +1447,25 @@ static inline bool arch_has_hw_pte_young(void)
 	return true;
 }
 
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
+	       (pmd_val(pmd) & _PAGE_USER);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
+	       (pud_val(pud) & _PAGE_USER);
+}
+#endif
+
 #endif	/* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_PGTABLE_H */
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 2458281bff89..145f059d1c4d 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -10,6 +10,14 @@
 #undef pr_fmt
 #define pr_fmt(fmt)	"page_table_check: " fmt
 
+#ifndef PMD_PAGE_SIZE
+#define PMD_PAGE_SIZE	PMD_SIZE
+#endif
+
+#ifndef PUD_PAGE_SIZE
+#define PUD_PAGE_SIZE	PUD_SIZE
+#endif
+
 struct page_table_check {
 	atomic_t anon_map_count;
 	atomic_t file_map_count;
@@ -52,23 +60,6 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
 	return (void *)(page_ext) + page_table_check_ops.offset;
 }
 
-static inline bool pte_user_accessible_page(pte_t pte)
-{
-	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
-}
-
-static inline bool pmd_user_accessible_page(pmd_t pmd)
-{
-	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
-	       (pmd_val(pmd) & _PAGE_USER);
-}
-
-static inline bool pud_user_accessible_page(pud_t pud)
-{
-	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
-	       (pud_val(pud) & _PAGE_USER);
-}
-
 /*
  * An enty is removed from the page table, decrement the counters for that page
  * verify that it is of correct type and counters do not become negative.
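The x86 predicates above reduce to two bit tests per entry, and the new
PMD_PAGE_SIZE/PUD_PAGE_SIZE fallbacks mean an architecture without its own
definitions gets the generic PMD_SIZE/PUD_SIZE, so the checker accounts a
PMD leaf as PMD_PAGE_SIZE >> PAGE_SHIFT base pages (512 with 4K pages and
2M PMDs). A minimal standalone model of the PTE check (plain userspace C,
not kernel code; the model_* naming is ours, the bit positions are the x86
ones, _PAGE_PRESENT at bit 0 and _PAGE_USER at bit 2):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT	(1ULL << 0)	/* entry is valid */
#define _PAGE_USER	(1ULL << 2)	/* user-mode access permitted */

/* Mirrors the x86 pte_user_accessible_page() moved by this patch. */
static bool model_pte_user_accessible(uint64_t pteval)
{
	return (pteval & _PAGE_PRESENT) && (pteval & _PAGE_USER);
}

int main(void)
{
	/* present user mapping: tracked by the checker */
	printf("%d\n", model_pte_user_accessible(_PAGE_PRESENT | _PAGE_USER)); /* 1 */
	/* present kernel-only mapping: ignored */
	printf("%d\n", model_pte_user_accessible(_PAGE_PRESENT));              /* 0 */
	return 0;
}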
Peter Anvin" , Pasha Tatashin , Andrew Morton , Catalin Marinas , Will Deacon , Paul Walmsley , Palmer Dabbelt , Albert Ou CC: , , , , Tong Tiangen , Kefeng Wang , Guohanjun Subject: [PATCH -next v4 2/4] mm: page_table_check: add hooks to public helpers Date: Mon, 18 Apr 2022 03:44:42 +0000 Message-ID: <20220418034444.520928-3-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220418034444.520928-1-tongtiangen@huawei.com> References: <20220418034444.520928-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220417_202614_929970_88E59431 X-CRM114-Status: GOOD ( 10.64 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move ptep_clear() to the include/linux/pgtable.h and add page table check relate hooks to some helpers, it's prepare for support page table check feature on new architecture. Signed-off-by: Tong Tiangen Acked-by: Pasha Tatashin --- arch/x86/include/asm/pgtable.h | 10 ---------- include/linux/pgtable.h | 26 ++++++++++++++++++-------- 2 files changed, 18 insertions(+), 18 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 564abe42b0f7..51cd39858f81 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, return pte; } -#define __HAVE_ARCH_PTEP_CLEAR -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep) -{ - if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK)) - ptep_get_and_clear(mm, addr, ptep); - else - pte_clear(mm, addr, ptep); -} - #define __HAVE_ARCH_PTEP_SET_WRPROTECT static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 49ab8ee2d6d7..10d2d91edf20 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -12,6 +12,7 @@ #include #include #include +#include #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \ defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS @@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void) } #endif -#ifndef __HAVE_ARCH_PTEP_CLEAR -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr, - pte_t *ptep) -{ - pte_clear(mm, addr, ptep); -} -#endif - #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, @@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, { pte_t pte = *ptep; pte_clear(mm, address, ptep); + page_table_check_pte_clear(mm, address, pte); return pte; } #endif +#ifndef __HAVE_ARCH_PTEP_CLEAR +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr, + pte_t *ptep) +{ + if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK)) + ptep_get_and_clear(mm, addr, ptep); + else + pte_clear(mm, addr, ptep); +} +#endif + #ifndef __HAVE_ARCH_PTEP_GET static inline pte_t ptep_get(pte_t *ptep) { @@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, pmd_t *pmdp) { pmd_t pmd = 
*pmdp; + pmd_clear(pmdp); + page_table_check_pmd_clear(mm, address, pmd); + return pmd; } #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */ @@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm, pud_t pud = *pudp; pud_clear(pudp); + page_table_check_pud_clear(mm, address, pud); + return pud; } #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */ From patchwork Mon Apr 18 03:44:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12816245 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A583EC433F5 for ; Mon, 18 Apr 2022 03:27:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=YVN/CUCMCMZWWTSfdhjSUhKDfNyd8+nUtw4bOEruFIY=; b=WFiZx7u3biFUEU eegkQ/ceTuKc8Hq5ACIYgWxXKwsXoz69qFxetvIAVptqTj49mYZhyfZJn8XCs1ji0dB3cjTMEOk4x nJzlUgvojuNKfUIlHzS3EuRt6g1V75vZy1ub7Rl8RsonwAwkI/HT5tD7VIuQI2UEzsQNh8VT6wES2 75OkpQpaFsdl4IPguAFz/N9j95eQ5VA2XImT+O4gLf7xAf9nr5TAhL+j6LUB3nC3YSXrSCkfn3ewK M1jjaDQz/BmJqF8hMkdEGfwN9GVBw2NRFlSnkl8+iBa1GZnLtwCS7HhTYNTyTSZ0z7JZ5VIACO3fg K4YczbpSGrs3m7lqIdRg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ngI2V-00FPpk-LN; Mon, 18 Apr 2022 03:26:59 +0000 Received: from szxga03-in.huawei.com ([45.249.212.189]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ngI1m-00FPVb-FE; Mon, 18 Apr 2022 03:26:16 +0000 Received: from kwepemi100005.china.huawei.com (unknown [172.30.72.57]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4KhXKT07pGzCr5Y; Mon, 18 Apr 2022 11:21:48 +0800 (CST) Received: from kwepemm600017.china.huawei.com (7.193.23.234) by kwepemi100005.china.huawei.com (7.221.188.155) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 18 Apr 2022 11:26:11 +0800 Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 18 Apr 2022 11:26:10 +0800 From: Tong Tiangen To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , , "H. 
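The reason ptep_clear() dispatches on CONFIG_PAGE_TABLE_CHECK is that the
checker has to observe the entry being removed: ptep_get_and_clear() reads
the old PTE back before clearing, so page_table_check_pte_clear() can
decrement the right anon/file counters, while a plain pte_clear() discards
it. A standalone sketch of that control flow (userspace C with stand-in
types and a printf in place of the real hook; not kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;	/* stand-in for the kernel type */

/* Stand-in for page_table_check_pte_clear(): just report the old entry. */
static void check_pte_clear(pte_t old)
{
	printf("checker saw old pte %#llx\n", (unsigned long long)old);
}

static void pte_clear(pte_t *ptep)
{
	*ptep = 0;
}

static pte_t ptep_get_and_clear(pte_t *ptep)
{
	pte_t old = *ptep;

	pte_clear(ptep);
	check_pte_clear(old);	/* the hook sees the entry being removed */
	return old;
}

/* Mirrors the control flow of the generic ptep_clear() added above. */
static void ptep_clear_model(pte_t *ptep, bool page_table_check)
{
	if (page_table_check)
		ptep_get_and_clear(ptep);	/* old value reaches the checker */
	else
		pte_clear(ptep);		/* fast path, no bookkeeping */
}

int main(void)
{
	pte_t pte = 0x5;	/* some present entry */

	ptep_clear_model(&pte, true);
	return 0;
}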
Peter Anvin" , Pasha Tatashin , Andrew Morton , Catalin Marinas , Will Deacon , Paul Walmsley , Palmer Dabbelt , Albert Ou CC: , , , , Tong Tiangen , Kefeng Wang , Guohanjun Subject: [PATCH -next v4 3/4] arm64: mm: add support for page table check Date: Mon, 18 Apr 2022 03:44:43 +0000 Message-ID: <20220418034444.520928-4-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220418034444.520928-1-tongtiangen@huawei.com> References: <20220418034444.520928-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220417_202614_932081_6E2C52C2 X-CRM114-Status: GOOD ( 11.73 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Kefeng Wang As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table check"), add some necessary page table check hooks into routines that modify user page tables. Signed-off-by: Kefeng Wang Signed-off-by: Tong Tiangen Reviewed-by: Pasha Tatashin --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++--- 2 files changed, 61 insertions(+), 5 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index e80fd2372f02..7114d2d5155e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -92,6 +92,7 @@ config ARM64 select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 select ARCH_SUPPORTS_NUMA_BALANCING + select ARCH_SUPPORTS_PAGE_TABLE_CHECK select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT select ARCH_WANT_DEFAULT_BPF_JIT select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 930077f7b572..9f8f97a7cc7c 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -33,6 +33,7 @@ #include #include #include +#include #ifdef CONFIG_TRANSPARENT_HUGEPAGE #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE @@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys) #define pte_young(pte) (!!(pte_val(pte) & PTE_AF)) #define pte_special(pte) (!!(pte_val(pte) & PTE_SPECIAL)) #define pte_write(pte) (!!(pte_val(pte) & PTE_WRITE)) +#define pte_user(pte) (!!(pte_val(pte) & PTE_USER)) #define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN)) #define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT)) #define pte_devmap(pte) (!!(pte_val(pte) & PTE_DEVMAP)) @@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep, __func__, pte_val(old_pte), pte_val(pte)); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, +static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte) { if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte)) @@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pte); } +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte) +{ + page_table_check_pte_set(mm, addr, ptep, pte); + return __set_pte_at(mm, addr, ptep, pte); +} + /* * Huge pte definitions. 
*/ @@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd) #define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd)) #define pmd_young(pmd) pte_young(pmd_pte(pmd)) #define pmd_valid(pmd) pte_valid(pmd_pte(pmd)) +#define pmd_user(pmd) pte_user(pmd_pte(pmd)) +#define pmd_user_exec(pmd) pte_user_exec(pmd_pte(pmd)) #define pmd_cont(pmd) pte_cont(pmd_pte(pmd)) #define pmd_wrprotect(pmd) pte_pmd(pte_wrprotect(pmd_pte(pmd))) #define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd))) @@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd) #define pud_pfn(pud) ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT) #define pfn_pud(pfn,prot) __pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)) -#define set_pmd_at(mm, addr, pmdp, pmd) set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd)) -#define set_pud_at(mm, addr, pudp, pud) set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud)) +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, + pmd_t *pmdp, pmd_t pmd) +{ + page_table_check_pmd_set(mm, addr, pmdp, pmd); + return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd)); +} + +static inline void set_pud_at(struct mm_struct *mm, unsigned long addr, + pud_t *pudp, pud_t pud) +{ + page_table_check_pud_set(mm, addr, pudp, pud); + return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud)); +} #define __p4d_to_phys(p4d) __pte_to_phys(p4d_pte(p4d)) #define __phys_to_p4d_val(phys) __phys_to_pte_val(phys) @@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) #define pud_present(pud) pte_present(pud_pte(pud)) #define pud_leaf(pud) pud_sect(pud) #define pud_valid(pud) pte_valid(pud_pte(pud)) +#define pud_user(pud) pte_user(pud_pte(pud)) + +#ifdef CONFIG_PAGE_TABLE_CHECK +static inline bool pte_user_accessible_page(pte_t pte) +{ + return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte)); +} + +static inline bool pmd_user_accessible_page(pmd_t pmd) +{ + return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd)); +} + +static inline bool pud_user_accessible_page(pud_t pud) +{ + return pud_present(pud) && pud_user(pud); +} +#endif static inline void set_pud(pud_t *pudp, pud_t pud) { @@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma, } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm, + unsigned long address, pte_t *ptep) +{ + return __pte(xchg_relaxed(&pte_val(*ptep), 0)); +} + #define __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, pte_t *ptep) { - return __pte(xchg_relaxed(&pte_val(*ptep), 0)); + pte_t pte = __ptep_get_and_clear(mm, address, ptep); + + page_table_check_pte_clear(mm, address, pte); + + return pte; } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long address, pmd_t *pmdp) { - return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp)); + pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp)); + + page_table_check_pmd_clear(mm, address, pmd); + + return pmd; } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ @@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm, static inline pmd_t pmdp_establish(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp, pmd_t pmd) { + page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd); return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd))); } 
#endif From patchwork Mon Apr 18 03:44:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12816246 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 76358C433EF for ; Mon, 18 Apr 2022 03:27:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=44sHn4bYrKVtkO+GbAv7K/KmqYxi9MN2WONRO9N1f0M=; b=DN2x3vxM8tYcLE DjIxAlIsrYLkNd7zpT7FlyRZLgiRIbx92/54unv2L+SYPNpsSQmYD55VYERIToqxkbXlN58esWyRa fxCyS8wyOEwI5krRkxOVV82sx8V99LVtJ3r1OnGA4i0v8ggUwPBec3Vfb9ZhNP64XXI9ZYmT7++hy Y1IELJM67LZYf03Ci/3nObRmqms3kuRbfgOqJHCDRBzuakkoCCKgx0U5Bgyzz8j8330I8S7486v1M 2psQuZ/k3AS2U+F85yaBQaE1m1BMqzhzuA0x/9HXxWmakdUy47NTfukXDbrO48QlUAzELjytjwQoB 3rxOZf8503qCbOUuezew==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ngI2Y-00FPrS-LW; Mon, 18 Apr 2022 03:27:02 +0000 Received: from szxga02-in.huawei.com ([45.249.212.188]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ngI1q-00FPWv-Tr; Mon, 18 Apr 2022 03:26:21 +0000 Received: from kwepemi100001.china.huawei.com (unknown [172.30.72.57]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4KhXMk2LyhzFq1W; Mon, 18 Apr 2022 11:23:46 +0800 (CST) Received: from kwepemm600017.china.huawei.com (7.193.23.234) by kwepemi100001.china.huawei.com (7.221.188.215) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 18 Apr 2022 11:26:15 +0800 Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 18 Apr 2022 11:26:13 +0800 From: Tong Tiangen To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , , "H. 
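Unlike x86, the arm64 predicates accept execute-only user mappings: a
present pte counts as user accessible if it is either user readable
(PTE_USER set) or user executable (UXN clear), since arm64 supports
PROT_EXEC-only user pages. A standalone model (userspace C, not kernel
code; pte_present() is simplified to the valid bit here, and the bit
positions, PTE_USER at bit 6 and PTE_UXN at bit 54, follow the arm64
descriptor format):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_VALID	(1ULL << 0)	/* descriptor is valid */
#define PTE_USER	(1ULL << 6)	/* AP[1]: unprivileged access */
#define PTE_UXN		(1ULL << 54)	/* unprivileged execute-never */

/* Mirrors the arm64 pte_user_accessible_page() predicate above, with
 * pte_present() reduced to the valid bit for the sake of the model. */
static bool model_pte_user_accessible(uint64_t pte)
{
	bool user = pte & PTE_USER;		/* pte_user()      */
	bool user_exec = !(pte & PTE_UXN);	/* pte_user_exec() */

	return (pte & PTE_VALID) && (user || user_exec);
}

int main(void)
{
	/* execute-only user page: PTE_USER clear, but UXN clear too */
	printf("%d\n", model_pte_user_accessible(PTE_VALID));		/* 1 */
	/* kernel-only page: no user read, user execute forbidden */
	printf("%d\n", model_pte_user_accessible(PTE_VALID | PTE_UXN));	/* 0 */
	return 0;
}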
Peter Anvin" , Pasha Tatashin , Andrew Morton , Catalin Marinas , Will Deacon , Paul Walmsley , Palmer Dabbelt , Albert Ou CC: , , , , Tong Tiangen , Kefeng Wang , Guohanjun Subject: [PATCH -next v4 4/4] riscv: mm: add support for page table check Date: Mon, 18 Apr 2022 03:44:44 +0000 Message-ID: <20220418034444.520928-5-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220418034444.520928-1-tongtiangen@huawei.com> References: <20220418034444.520928-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To kwepemm600017.china.huawei.com (7.193.23.234) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220417_202619_353113_24BB394F X-CRM114-Status: GOOD ( 11.29 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table check"), add some necessary page table check hooks into routines that modify user page tables. Signed-off-by: Tong Tiangen Reviewed-by: Pasha Tatashin --- arch/riscv/Kconfig | 1 + arch/riscv/include/asm/pgtable.h | 77 +++++++++++++++++++++++++++++--- 2 files changed, 72 insertions(+), 6 deletions(-) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 63f7258984f3..66d241cee52c 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -38,6 +38,7 @@ config RISCV select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU select ARCH_SUPPORTS_HUGETLBFS if MMU + select ARCH_SUPPORTS_PAGE_TABLE_CHECK select ARCH_USE_MEMTEST select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU select ARCH_WANT_FRAME_POINTERS diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 046b44225623..6f22d9580658 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -114,6 +114,8 @@ #include #endif /* CONFIG_64BIT */ +#include + #ifdef CONFIG_XIP_KERNEL #define XIP_FIXUP(addr) ({ \ uintptr_t __a = (uintptr_t)(addr); \ @@ -315,6 +317,11 @@ static inline int pte_exec(pte_t pte) return pte_val(pte) & _PAGE_EXEC; } +static inline int pte_user(pte_t pte) +{ + return pte_val(pte) & _PAGE_USER; +} + static inline int pte_huge(pte_t pte) { return pte_present(pte) && (pte_val(pte) & _PAGE_LEAF); @@ -446,7 +453,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) void flush_icache_pte(pte_t pte); -static inline void set_pte_at(struct mm_struct *mm, +static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval) { if (pte_present(pteval) && pte_exec(pteval)) @@ -455,10 +462,17 @@ static inline void set_pte_at(struct mm_struct *mm, set_pte(ptep, pteval); } +static inline void set_pte_at(struct mm_struct *mm, + unsigned long addr, pte_t *ptep, pte_t pteval) +{ + page_table_check_pte_set(mm, addr, ptep, pteval); + __set_pte_at(mm, addr, ptep, pteval); +} + static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { - set_pte_at(mm, addr, ptep, __pte(0)); + __set_pte_at(mm, addr, ptep, __pte(0)); } #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS @@ -475,11 +489,21 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma, return true; } +static inline pte_t __ptep_get_and_clear(struct mm_struct 
*mm, + unsigned long address, pte_t *ptep) +{ + return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0)); +} + #define __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, pte_t *ptep) { - return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0)); + pte_t pte = __ptep_get_and_clear(mm, address, ptep); + + page_table_check_pte_clear(mm, address, pte); + + return pte; } #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG @@ -546,6 +570,13 @@ static inline unsigned long pmd_pfn(pmd_t pmd) return ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT); } +#define __pud_to_phys(pud) (pud_val(pud) >> _PAGE_PFN_SHIFT << PAGE_SHIFT) + +static inline unsigned long pud_pfn(pud_t pud) +{ + return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT); +} + static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) { return pte_pmd(pte_modify(pmd_pte(pmd), newprot)); @@ -567,6 +598,11 @@ static inline int pmd_young(pmd_t pmd) return pte_young(pmd_pte(pmd)); } +static inline int pmd_user(pmd_t pmd) +{ + return pte_user(pmd_pte(pmd)); +} + static inline pmd_t pmd_mkold(pmd_t pmd) { return pte_pmd(pte_mkold(pmd_pte(pmd))); @@ -600,15 +636,39 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd) static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd) { - return set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd)); + page_table_check_pmd_set(mm, addr, pmdp, pmd); + return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd)); +} + +static inline int pud_user(pud_t pud) +{ + return pte_user(pud_pte(pud)); } static inline void set_pud_at(struct mm_struct *mm, unsigned long addr, pud_t *pudp, pud_t pud) { - return set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud)); + page_table_check_pud_set(mm, addr, pudp, pud); + return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud)); +} + +#ifdef CONFIG_PAGE_TABLE_CHECK +static inline bool pte_user_accessible_page(pte_t pte) +{ + return pte_present(pte) && pte_user(pte); } +static inline bool pmd_user_accessible_page(pmd_t pmd) +{ + return pmd_leaf(pmd) && pmd_user(pmd); +} + +static inline bool pud_user_accessible_page(pud_t pud) +{ + return pud_leaf(pud) && pud_user(pud); +} +#endif + #ifdef CONFIG_TRANSPARENT_HUGEPAGE static inline int pmd_trans_huge(pmd_t pmd) { @@ -634,7 +694,11 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma, static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long address, pmd_t *pmdp) { - return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp)); + pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp)); + + page_table_check_pmd_clear(mm, address, pmd); + + return pmd; } #define __HAVE_ARCH_PMDP_SET_WRPROTECT @@ -648,6 +712,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm, static inline pmd_t pmdp_establish(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp, pmd_t pmd) { + page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd); return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd))); } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
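On riscv the user bit is _PAGE_USER (bit 4 of the PTE), and a PMD or PUD
entry is a leaf exactly when any of the R/W/X permission bits is set;
otherwise it points to a next-level table and must not be counted. A
standalone model of pmd_user_accessible_page() (userspace C, not kernel
code; the bit layout follows the RISC-V Sv39/Sv48 PTE format and the
model_* name is ours):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT	(1ULL << 0)	/* V: entry is valid */
#define _PAGE_READ	(1ULL << 1)	/* R */
#define _PAGE_WRITE	(1ULL << 2)	/* W */
#define _PAGE_EXEC	(1ULL << 3)	/* X */
#define _PAGE_USER	(1ULL << 4)	/* U: user-mode access */
#define _PAGE_LEAF	(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)

/* Mirrors the riscv pmd_user_accessible_page() above: pmd_leaf() is
 * present plus at least one permission bit, i.e. the entry maps a page
 * rather than pointing to a next-level table. */
static bool model_pmd_user_accessible(uint64_t pmd)
{
	bool leaf = (pmd & _PAGE_PRESENT) && (pmd & _PAGE_LEAF);

	return leaf && (pmd & _PAGE_USER);
}

int main(void)
{
	/* 2M user mapping: present, readable, user-owned */
	printf("%d\n", model_pmd_user_accessible(_PAGE_PRESENT | _PAGE_READ | _PAGE_USER)); /* 1 */
	/* pointer to a PTE table: R/W/X all clear, so not a leaf */
	printf("%d\n", model_pmd_user_accessible(_PAGE_PRESENT | _PAGE_USER));              /* 0 */
	return 0;
}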