From patchwork Mon Jul 22 15:41:50 2019
X-Patchwork-Id: 11052789
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 01/21] arc: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:50 +0100
Message-Id: <20190722154210.42799-2-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by
the p?d_leaf() functions/macros.

For arc, we only have two levels, so only pmd_leaf() is needed.

CC: Vineet Gupta
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Steven Price
---
 arch/arc/include/asm/pgtable.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 1d87c18a2976..8c425cf796db 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -274,6 +274,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 #define pmd_none(x)		(!pmd_val(x))
 #define pmd_bad(x)		((pmd_val(x) & ~PAGE_MASK))
 #define pmd_present(x)		(pmd_val(x))
+#define pmd_leaf(x)		(pmd_val(x) & _PAGE_HW_SZ)
 #define pmd_clear(xp)		do { pmd_val(*(xp)) = 0; } while (0)
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
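
For illustration, a minimal sketch of the kind of walker loop that
consumes this information; the helpers note_leaf_entry() and
walk_pte_level() are hypothetical and not part of this series:

/*
 * Sketch only: walk one PMD-level table and stop descending when
 * pmd_leaf() reports a final (block) mapping instead of a pointer to
 * a PTE table.
 */
static int walk_pmd_level(pmd_t *pmd_table, unsigned long addr,
                          unsigned long end)
{
        unsigned long next;
        pmd_t *pmd = pmd_table;

        do {
                next = pmd_addr_end(addr, end);
                if (pmd_none(*pmd))
                        continue;               /* nothing mapped here */
                if (pmd_leaf(*pmd)) {
                        /* final mapping: no PTE table below this entry */
                        note_leaf_entry(addr, next, pmd_val(*pmd));
                        continue;
                }
                /* table entry: descend to the PTE level */
                walk_pte_level(pte_offset_kernel(pmd, addr), addr, next);
        } while (pmd++, addr = next, addr != end);

        return 0;
}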

From patchwork Mon Jul 22 15:41:51 2019
X-Patchwork-Id: 11052809
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 02/21] arm: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:51 +0100
Message-Id: <20190722154210.42799-3-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For arm pmd_large() already exists and does what we want. So simply
provide the generic pmd_leaf() name.

CC: Russell King
CC: linux-arm-kernel@lists.infradead.org
Signed-off-by: Steven Price
---
 arch/arm/include/asm/pgtable-2level.h | 1 +
 arch/arm/include/asm/pgtable-3level.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index 51beec41d48c..0d3ea35c97fe 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -189,6 +189,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 }
 
 #define pmd_large(pmd)		(pmd_val(pmd) & 2)
+#define pmd_leaf(pmd)		(pmd_val(pmd) & 2)
 #define pmd_bad(pmd)		(pmd_val(pmd) & 2)
 #define pmd_present(pmd)	(pmd_val(pmd))
 
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 5b18295021a0..ad55ab068dbf 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -134,6 +134,7 @@
 #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_SECT)
 #define pmd_large(pmd)		pmd_sect(pmd)
+#define pmd_leaf(pmd)		pmd_sect(pmd)
 
 #define pud_clear(pudp)			\
	do {				\

From patchwork Mon Jul 22 15:41:52 2019
X-Patchwork-Id: 11052811
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 03/21] arm64: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:52 +0100
Message-Id: <20190722154210.42799-4-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by
the p?d_leaf() functions/macros.

For arm64, we already have p?d_sect() macros which we can reuse for
p?d_leaf().

pud_sect() is defined as a dummy function when CONFIG_PGTABLE_LEVELS < 3
or CONFIG_ARM64_64K_PAGES is defined. However, when the kernel is
configured this way, the architecture does not allow a large page at
this level, and any code using these page walking macros is implicitly
relying on the page size/number of levels being the same as the
kernel's. So it is safe to reuse this for p?d_leaf() as it is an
architectural restriction.

CC: Catalin Marinas
CC: Will Deacon
Signed-off-by: Steven Price
---
 arch/arm64/include/asm/pgtable.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 87a4b2ddc1a1..2c123d59dbff 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -446,6 +446,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
				 PMD_TYPE_TABLE)
 #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
				 PMD_TYPE_SECT)
+#define pmd_leaf(pmd)		pmd_sect(pmd)
 
 #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
 #define pud_sect(pud)		(0)
@@ -528,6 +529,7 @@ static inline void pte_unmap(pte_t *pte) { }
 #define pud_none(pud)		(!pud_val(pud))
 #define pud_bad(pud)		(!(pud_val(pud) & PUD_TABLE_BIT))
 #define pud_present(pud)	pte_present(pud_pte(pud))
+#define pud_leaf(pud)		pud_sect(pud)
 #define pud_valid(pud)		pte_valid(pud_pte(pud))
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
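
To make the configuration constraint above concrete, a simplified
summary of the existing arm64 pud_sect() definitions that pud_leaf()
now reuses (explanatory sketch, not new code):

/*
 * Simplified view of the arm64 pud_sect() definitions:
 *
 *   #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
 *   #define pud_sect(pud)	(0)
 *   #else
 *   #define pud_sect(pud)	((pud_val(pud) & PUD_TYPE_MASK) == \
 *				 PUD_TYPE_SECT)
 *   #endif
 *
 * On configurations that cannot have PUD-level block mappings,
 * pud_leaf() is therefore a constant 0 and any "if (pud_leaf(pud))"
 * branch in a walker is dead code the compiler can drop.
 */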

From patchwork Mon Jul 22 15:41:53 2019
X-Patchwork-Id: 11052815
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 04/21] mips: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:53 +0100
Message-Id: <20190722154210.42799-5-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For mips, we only support large pages on 64 bit. For 64 bit if
_PAGE_HUGE is defined we can simply look for it. When not defined we
can be confident that there are no leaf pages in existence and fall
back on the generic implementation (added in a later patch) which
returns 0.

CC: Ralf Baechle
CC: Paul Burton
CC: James Hogan
CC: linux-mips@vger.kernel.org
Signed-off-by: Steven Price
---
 arch/mips/include/asm/pgtable-64.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 93a9dce31f25..2bdbf8652b5f 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -273,6 +273,10 @@ static inline int pmd_present(pmd_t pmd)
 	return pmd_val(pmd) != (unsigned long) invalid_pte_table;
 }
 
+#ifdef _PAGE_HUGE
+#define pmd_leaf(pmd)	((pmd_val(pmd) & _PAGE_HUGE) != 0)
+#endif
+
 static inline void pmd_clear(pmd_t *pmdp)
 {
 	pmd_val(*pmdp) = ((unsigned long) invalid_pte_table);
@@ -297,6 +301,10 @@ static inline int pud_present(pud_t pud)
 	return pud_val(pud) != (unsigned long) invalid_pmd_table;
 }
 
+#ifdef _PAGE_HUGE
+#define pud_leaf(pud)	((pud_val(pud) & _PAGE_HUGE) != 0)
+#endif
+
 static inline void pud_clear(pud_t *pudp)
 {
 	pud_val(*pudp) = ((unsigned long) invalid_pmd_table);
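
How the conditional definition above dovetails with the generic
fallback added later in this series can be summarised as follows
(explanatory sketch only, combining snippets from this series):

/*
 * arch/mips/include/asm/pgtable-64.h (this patch):
 *
 *     #ifdef _PAGE_HUGE
 *     #define pmd_leaf(pmd)	((pmd_val(pmd) & _PAGE_HUGE) != 0)
 *     #endif
 *
 * include/asm-generic/pgtable.h (patch 10 of this series):
 *
 *     #ifndef pmd_leaf
 *     #define pmd_leaf(x)	0	<- used when _PAGE_HUGE is absent
 *     #endif
 *
 * A kernel built without huge page support therefore sees a constant 0
 * and the leaf handling in generic walkers compiles away.
 */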

From patchwork Mon Jul 22 15:41:54 2019
X-Patchwork-Id: 11052819
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 05/21] powerpc: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:54 +0100
Message-Id: <20190722154210.42799-6-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For powerpc pmd_large() already exists and does what we want, so hoist
it out of the CONFIG_TRANSPARENT_HUGEPAGE condition and implement the
other levels. Macros are used to provide the generic p?d_leaf() names.

CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: linuxppc-dev@lists.ozlabs.org
CC: kvm-ppc@vger.kernel.org
Signed-off-by: Steven Price
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8308f32e9782..84270666355c 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -921,6 +921,12 @@ static inline int pud_present(pud_t pud)
 	return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
 }
 
+#define pud_leaf	pud_large
+static inline int pud_large(pud_t pud)
+{
+	return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
+}
+
 extern struct page *pud_page(pud_t pud);
 extern struct page *pmd_page(pmd_t pmd);
 static inline pte_t pud_pte(pud_t pud)
@@ -964,6 +970,12 @@ static inline int pgd_present(pgd_t pgd)
 	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
 }
 
+#define pgd_leaf	pgd_large
+static inline int pgd_large(pgd_t pgd)
+{
+	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
+}
+
 static inline pte_t pgd_pte(pgd_t pgd)
 {
 	return __pte_raw(pgd_raw(pgd));
@@ -1131,6 +1143,15 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
 	return pte_access_permitted(pmd_pte(pmd), write);
 }
 
+#define pmd_leaf	pmd_large
+/*
+ * returns true for pmd migration entries, THP, devmap, hugetlb
+ */
+static inline int pmd_large(pmd_t pmd)
+{
+	return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
 extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
@@ -1157,15 +1178,6 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
 	return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
 }
 
-/*
- * returns true for pmd migration entries, THP, devmap, hugetlb
- * But compile time dependent on THP config
- */
-static inline int pmd_large(pmd_t pmd)
-{
-	return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
-}
-
 static inline pmd_t pmd_mknotpresent(pmd_t pmd)
 {
 	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);

From patchwork Mon Jul 22 15:41:55 2019
X-Patchwork-Id: 11052821
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 06/21] riscv: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:55 +0100
Message-Id: <20190722154210.42799-7-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For riscv a page is a leaf page when it has a read, write or execute
bit set on it.

CC: Palmer Dabbelt
CC: Albert Ou
CC: linux-riscv@lists.infradead.org
Signed-off-by: Steven Price
---
 arch/riscv/include/asm/pgtable-64.h | 7 +++++++
 arch/riscv/include/asm/pgtable.h    | 7 +++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 74630989006d..e88a8e8acbdf 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -43,6 +43,13 @@ static inline int pud_bad(pud_t pud)
 	return !pud_present(pud);
 }
 
+#define pud_leaf	pud_leaf
+static inline int pud_leaf(pud_t pud)
+{
+	return pud_present(pud)
+		&& (pud_val(pud) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
 	*pudp = pud;
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index a364aba23d55..f6523155111a 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -105,6 +105,13 @@ static inline int pmd_bad(pmd_t pmd)
 	return !pmd_present(pmd);
 }
 
+#define pmd_leaf	pmd_leaf
+static inline int pmd_leaf(pmd_t pmd)
+{
+	return pmd_present(pmd)
+		&& (pmd_val(pmd) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 	*pmdp = pmd;
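
The definition follows the RISC-V privileged specification: a valid
entry with R, W and X all clear is a pointer to the next level of the
table, while an entry with any of those bits set is a final (leaf)
mapping. Expressed as a stand-alone sketch (hypothetical helper, same
logic as the new pmd_leaf()/pud_leaf()):

/* Sketch only: mirrors the rule the new helpers encode. */
static bool riscv_entry_is_leaf(unsigned long entry_val)
{
	const unsigned long rwx = _PAGE_READ | _PAGE_WRITE | _PAGE_EXEC;

	if (!(entry_val & _PAGE_PRESENT))
		return false;			/* not a valid entry */
	return (entry_val & rwx) != 0;		/* leaf iff any of R/W/X set */
}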

From patchwork Mon Jul 22 15:41:56 2019
X-Patchwork-Id: 11052827
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 07/21] s390: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:56 +0100
Message-Id: <20190722154210.42799-8-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For s390, pud_large() and pmd_large() are already implemented as static
inline functions. Add a macro to provide the p?d_leaf names for the
generic code to use.

CC: Martin Schwidefsky
CC: Heiko Carstens
CC: linux-s390@vger.kernel.org
Signed-off-by: Steven Price
---
 arch/s390/include/asm/pgtable.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 9b274fcaacb6..f99a5f546e5e 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -674,6 +674,7 @@ static inline int pud_none(pud_t pud)
 	return pud_val(pud) == _REGION3_ENTRY_EMPTY;
 }
 
+#define pud_leaf	pud_large
 static inline int pud_large(pud_t pud)
 {
 	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) != _REGION_ENTRY_TYPE_R3)
@@ -691,6 +692,7 @@ static inline unsigned long pud_pfn(pud_t pud)
 	return (pud_val(pud) & origin_mask) >> PAGE_SHIFT;
 }
 
+#define pmd_leaf	pmd_large
 static inline int pmd_large(pmd_t pmd)
 {
 	return (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0;
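
The reason a #define is used alongside the existing inline functions is
that the generic fallback later in this series is guarded by #ifndef,
which can only detect a macro, not an inline function. Roughly
(explanatory sketch combining snippets from this series):

/*
 * arch/s390/include/asm/pgtable.h (this patch):
 *
 *     #define pmd_leaf	pmd_large
 *     static inline int pmd_large(pmd_t pmd) { ... }
 *
 * include/asm-generic/pgtable.h (patch 10 of this series):
 *
 *     #ifndef pmd_leaf	<- already defined above, so the stub is skipped
 *     #define pmd_leaf(x)	0
 *     #endif
 */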

From patchwork Mon Jul 22 15:41:57 2019
X-Patchwork-Id: 11052839
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 08/21] sparc: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:57 +0100
Message-Id: <20190722154210.42799-9-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For sparc 64 bit, pmd_large() and pud_large() are already provided, so
add macros to provide the p?d_leaf names required by the generic code.
Miller" CC: sparclinux@vger.kernel.org Signed-off-by: Steven Price --- arch/sparc/include/asm/pgtable_64.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 1599de730532..a78b968ae3fa 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -683,6 +683,7 @@ static inline unsigned long pte_special(pte_t pte) return pte_val(pte) & _PAGE_SPECIAL; } +#define pmd_leaf pmd_large static inline unsigned long pmd_large(pmd_t pmd) { pte_t pte = __pte(pmd_val(pmd)); @@ -867,6 +868,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud) /* only used by the stubbed out hugetlb gup code, should never be called */ #define pgd_page(pgd) NULL +#define pud_leaf pud_large static inline unsigned long pud_large(pud_t pud) { pte_t pte = __pte(pud_val(pud)); From patchwork Mon Jul 22 15:41:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052841 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 05539746 for ; Mon, 22 Jul 2019 15:47:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E5C40212E8 for ; Mon, 22 Jul 2019 15:47:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D8C9E28414; Mon, 22 Jul 2019 15:47:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 81FBF212E8 for ; Mon, 22 Jul 2019 15:47:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=KMhdTX7IuNt6oRe1P2As6GsCnWlF1MxCRLBgzi5D8cY=; b=J2m8slCtCwiiCp MFRSw6rwAPidSl7gT9IXvN+J7kWNAPzxHHqri6hKh5NweRU9lqZx1U+aEEbh/wMCfVun5FCWcHMHG 5pekCtuC+jkHzoMtpQ4FRWbESN8oF/lOj5jnkIPKGAWuXyPk2xMalZnLuWg8vj2OolI9eSmQe4gR+ sLrlSUZtYmgN8s+2GX4w5B8yQCa43zI+AEVrbtXvefvKjaRbcdE8GLze4I1YSipW8uIKqgdY1bkDu iBaQBtOjzdFygROmV67K2J2ErY0CPq6wkdRB3Jf5vIBNGGXo/8ysex7LOFZBCgSL4LQtAQR3K0WwI HrEe2CX6IcnUnn2kr6mg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaWy-00065e-P6; Mon, 22 Jul 2019 15:47:17 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaSh-0001w8-G4 for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:42:53 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CE1BE15A2; Mon, 22 Jul 2019 08:42:50 -0700 (PDT) Received: from 

From patchwork Mon Jul 22 15:41:58 2019
X-Patchwork-Id: 11052841
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 09/21] x86: mm: Add p?d_leaf() definitions
Date: Mon, 22 Jul 2019 16:41:58 +0100
Message-Id: <20190722154210.42799-10-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_leaf() functions/macros.

For x86 we already have p?d_large() functions, so simply add macros to
provide the generic p?d_leaf() names for the generic code.

Signed-off-by: Steven Price
---
 arch/x86/include/asm/pgtable.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 0bc530c4eb13..6986a451619e 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -239,6 +239,7 @@ static inline unsigned long pgd_pfn(pgd_t pgd)
 	return (pgd_val(pgd) & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
+#define p4d_leaf	p4d_large
 static inline int p4d_large(p4d_t p4d)
 {
 	/* No 512 GiB pages yet */
@@ -247,6 +248,7 @@ static inline int p4d_large(p4d_t p4d)
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 
+#define pmd_leaf	pmd_large
 static inline int pmd_large(pmd_t pte)
 {
 	return pmd_flags(pte) & _PAGE_PSE;
@@ -874,6 +876,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
 	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
 }
 
+#define pud_leaf	pud_large
 static inline int pud_large(pud_t pud)
 {
 	return (pud_val(pud) & (_PAGE_PSE | _PAGE_PRESENT)) ==
@@ -885,6 +888,7 @@ static inline int pud_bad(pud_t pud)
 	return (pud_flags(pud) & ~(_KERNPG_TABLE | _PAGE_USER)) != 0;
 }
 #else
+#define pud_leaf	pud_large
 static inline int pud_large(pud_t pud)
 {
 	return 0;
@@ -1233,6 +1237,7 @@ static inline bool pgdp_maps_userspace(void *__ptr)
 	return (((ptr & ~PAGE_MASK) / sizeof(pgd_t)) < PGD_KERNEL_START);
 }
 
+#define pgd_leaf	pgd_large
 static inline int pgd_large(pgd_t pgd) { return 0; }
 
 #ifdef CONFIG_PAGE_TABLE_ISOLATION

From patchwork Mon Jul 22 15:41:59 2019
X-Patchwork-Id: 11052845
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v9 10/21] mm: Add generic p?d_leaf() macros
Date: Mon, 22 Jul 2019 16:41:59 +0100
Message-Id: <20190722154210.42799-11-steven.price@arm.com>
In-Reply-To: <20190722154210.42799-1-steven.price@arm.com>
References: <20190722154210.42799-1-steven.price@arm.com>

Exposing the pud/pgd levels of the page tables to walk_page_range()
means we may come across the exotic large mappings that come with large
areas of contiguous memory (such as the kernel's linear map).

For architectures that don't provide all p?d_leaf() macros, provide
generic do-nothing defaults that are suitable where there cannot be
leaf pages at that level.

Signed-off-by: Steven Price
Acked-by: Mark Rutland
---
 include/asm-generic/pgtable.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 75d9d68a6de7..46275896ca66 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1188,4 +1188,23 @@ static inline bool arch_has_pfn_modify_check(void)
 #define mm_pmd_folded(mm)	__is_defined(__PAGETABLE_PMD_FOLDED)
 #endif
 
+/*
+ * p?d_leaf() - true if this entry is a final mapping to a physical address.
+ * This differs from p?d_huge() by the fact that they are always available (if
+ * the architecture supports large pages at the appropriate level) even
+ * if CONFIG_HUGETLB_PAGE is not defined.
+ */ +#ifndef pgd_leaf +#define pgd_leaf(x) 0 +#endif +#ifndef p4d_leaf +#define p4d_leaf(x) 0 +#endif +#ifndef pud_leaf +#define pud_leaf(x) 0 +#endif +#ifndef pmd_leaf +#define pmd_leaf(x) 0 +#endif + #endif /* _ASM_GENERIC_PGTABLE_H */ From patchwork Mon Jul 22 15:42:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052849 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 480E31398 for ; Mon, 22 Jul 2019 15:48:05 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 35F5628414 for ; Mon, 22 Jul 2019 15:48:05 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 290EF28438; Mon, 22 Jul 2019 15:48:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id B523028414 for ; Mon, 22 Jul 2019 15:48:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=9WPAgBPLEqMBKGo+FVdLG3JN1CJ9uvuKu+2cSudm6H0=; b=Z/w2uvks1DiW9c TEj0IKLF5MLF1q6YVFs9iHa1bNmm+4ir2pfbUVxK0F00o4owLL7ER33lrFruYPK0Y8g3q0YlxbldF 8Yzh+8a5sjNwgw4VHUWyjPxQXOyaGCrSmb5PHgFlZZmzv1QUR5lT5IC9S0xLp88I/5/wXRMAyId4/ W1fkL8gj1Q3eTCvhHc0MZTBzuOVriv5RTIjJAW/dFx1jHKQWWwzaskUsfQRc/RAbUW7dBlx5Qfkrm C/4BspE0z3piAmlotdrtTESP5EbkHr7iA9vzLnQ2nNgztKMGiN5LKXpR5oNqA2p8lbHJE6OCNic67 3c6+Bi8vw6oZlXg4EC+g==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaXj-0006Zj-Tq; Mon, 22 Jul 2019 15:48:04 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaSm-0001y4-J3 for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:42:59 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 65B0B1596; Mon, 22 Jul 2019 08:42:56 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CECB03F694; Mon, 22 Jul 2019 08:42:53 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 11/21] mm: pagewalk: Add p4d_entry() and pgd_entry() Date: Mon, 22 Jul 2019 16:42:00 +0100 Message-Id: <20190722154210.42799-12-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) 
MR-646709E3 X-CRM114-CacheID: sfid-20190722_084256_826275_4610104E X-CRM114-Status: GOOD ( 14.60 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410 ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were no users. We're about to add users so reintroduce them, along with p4d_entry() as we now have 5 levels of tables. Note that commit a00cc7d9dd93d66a ("mm, x86: add support for PUD-sized transparent hugepages") already re-added pud_entry() but with different semantics to the other callbacks. Since there have never been upstream users of this, revert the semantics back to match the other callbacks. This means pud_entry() is called for all entries, not just transparent huge pages. Signed-off-by: Steven Price --- include/linux/mm.h | 15 +++++++++------ mm/pagewalk.c | 27 ++++++++++++++++----------- 2 files changed, 25 insertions(+), 17 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 0334ca97c584..b22799129128 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1432,15 +1432,14 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma, /** * mm_walk - callbacks for walk_page_range - * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry - * this handler should only handle pud_trans_huge() puds. - * the pmd_entry or pte_entry callbacks will be used for - * regular PUDs. - * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry + * @pgd_entry: if set, called for each non-empty PGD (top-level) entry + * @p4d_entry: if set, called for each non-empty P4D entry + * @pud_entry: if set, called for each non-empty PUD entry + * @pmd_entry: if set, called for each non-empty PMD entry * this handler is required to be able to handle * pmd_trans_huge() pmds. They may simply choose to * split_huge_page() instead of handling it explicitly. 
- * @pte_entry: if set, called for each non-empty PTE (4th-level) entry + * @pte_entry: if set, called for each non-empty PTE (lowest-level) entry * @pte_hole: if set, called for each hole at all levels * @hugetlb_entry: if set, called for each hugetlb entry * @test_walk: caller specific callback function to determine whether @@ -1455,6 +1454,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma, * (see the comment on walk_page_range() for more details) */ struct mm_walk { + int (*pgd_entry)(pgd_t *pgd, unsigned long addr, + unsigned long next, struct mm_walk *walk); + int (*p4d_entry)(p4d_t *p4d, unsigned long addr, + unsigned long next, struct mm_walk *walk); int (*pud_entry)(pud_t *pud, unsigned long addr, unsigned long next, struct mm_walk *walk); int (*pmd_entry)(pmd_t *pmd, unsigned long addr, diff --git a/mm/pagewalk.c b/mm/pagewalk.c index c3084ff2569d..98373a9f88b8 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, } if (walk->pud_entry) { - spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma); - - if (ptl) { - err = walk->pud_entry(pud, addr, next, walk); - spin_unlock(ptl); - if (err) - break; - continue; - } + err = walk->pud_entry(pud, addr, next, walk); + if (err) + break; } split_huge_pud(walk->vma, pud, addr); @@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, break; continue; } - if (walk->pmd_entry || walk->pte_entry) + if (walk->p4d_entry) { + err = walk->p4d_entry(p4d, addr, next, walk); + if (err) + break; + } + if (walk->pud_entry || walk->pmd_entry || walk->pte_entry) err = walk_pud_range(p4d, addr, next, walk); if (err) break; @@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end, break; continue; } - if (walk->pmd_entry || walk->pte_entry) + if (walk->pgd_entry) { + err = walk->pgd_entry(pgd, addr, next, walk); + if (err) + break; + } + if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry || + walk->pte_entry) err = walk_p4d_range(pgd, addr, next, walk); if (err) break; From patchwork Mon Jul 22 15:42:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052857 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2FE2913AC for ; Mon, 22 Jul 2019 15:48:33 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1C95328414 for ; Mon, 22 Jul 2019 15:48:33 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1023B28438; Mon, 22 Jul 2019 15:48:33 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id B49AD28433 for ; Mon, 22 Jul 2019 15:48:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: 
Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=3dhep0PYQwQ8Cm4dvDqvS3J4gXolUttRNkJ6LZYeims=; b=th89YFNLGfMa3p JHthDSVrjLezK7BA0sWQJGSmwzSyyqavFfePNpk3LBTvB5It7qUYn1TN3fWS7aiCpXLEMG8Et1pPD FWeNBzAWQv6TWseswC92auvx87V6jp0WBQ5H9LmxsBoEtd2Aeij0+LMXRVg8ujsZDgcHZRdCnRH8c eZCPGHBUbPvCq741FfIRxdukLZDK5CgMkBMV7hFM1Eg366huc+4rbOfTaA2LzJkHbdC9rUQaiqj0A KwaCgAVWWOQIltHE/e31Dl7JuaWVYKA1CDJUCT754IihdUszU5fJjGLArHmOrXgW/f03g3lHxVNnD PmJxA5UjIlKszMexNL5w==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaY7-0006pS-HI; Mon, 22 Jul 2019 15:48:27 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaSp-000239-Qi for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:01 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 30DF515A2; Mon, 22 Jul 2019 08:42:59 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9B72F3F694; Mon, 22 Jul 2019 08:42:56 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 12/21] mm: pagewalk: Allow walking without vma Date: Mon, 22 Jul 2019 16:42:01 +0100 Message-Id: <20190722154210.42799-13-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084300_234096_0457F3BD X-CRM114-Status: GOOD ( 13.68 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Since 48684a65b4e3: "mm: pagewalk: fix misbehavior of walk_page_range for vma(VM_PFNMAP)", page_table_walk() will report any kernel area as a hole, because it lacks a vma. This means each arch has re-implemented page table walking when needed, for example in the per-arch ptdump walker. Remove the requirement to have a vma except when trying to split huge pages. 
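As a rough illustration (not part of this patch), once the vma requirement is gone a caller can point a walk at the kernel's own page tables via init_mm, using the pre-existing walk_page_range(start, end, &walk) calling convention and the pmd_leaf() macro added earlier in the series. The helper names below are made up and error handling is omitted:

	/* Count the leaf PMDs in a range of kernel address space. */
	static int count_leaf_pmd(pmd_t *pmd, unsigned long addr,
				  unsigned long next, struct mm_walk *walk)
	{
		unsigned long *count = walk->private;

		if (pmd_leaf(*pmd))
			(*count)++;
		return 0;
	}

	static unsigned long count_kernel_leaf_pmds(unsigned long start,
						    unsigned long end)
	{
		unsigned long count = 0;
		struct mm_walk walk = {
			.pmd_entry	= count_leaf_pmd,
			.mm		= &init_mm,
			.private	= &count,
		};

		down_read(&init_mm.mmap_sem);
		walk_page_range(start, end, &walk);
		up_read(&init_mm.mmap_sem);

		return count;
	}

Because pmd_leaf() reports large mappings as final entries, such a walker counts them rather than trying to descend into them, even though there is no vma covering the range.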
Signed-off-by: Steven Price --- mm/pagewalk.c | 25 +++++++++++++++++-------- 1 file changed, 17 insertions(+), 8 deletions(-) diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 98373a9f88b8..1cbef99e9258 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -36,7 +36,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, do { again: next = pmd_addr_end(addr, end); - if (pmd_none(*pmd) || !walk->vma) { + if (pmd_none(*pmd)) { if (walk->pte_hole) err = walk->pte_hole(addr, next, walk); if (err) @@ -59,9 +59,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, if (!walk->pte_entry) continue; - split_huge_pmd(walk->vma, pmd, addr); - if (pmd_trans_unstable(pmd)) - goto again; + if (walk->vma) { + split_huge_pmd(walk->vma, pmd, addr); + if (pmd_trans_unstable(pmd)) + goto again; + } else if (pmd_leaf(*pmd)) { + continue; + } + err = walk_pte_range(pmd, addr, next, walk); if (err) break; @@ -81,7 +86,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, do { again: next = pud_addr_end(addr, end); - if (pud_none(*pud) || !walk->vma) { + if (pud_none(*pud)) { if (walk->pte_hole) err = walk->pte_hole(addr, next, walk); if (err) @@ -95,9 +100,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, break; } - split_huge_pud(walk->vma, pud, addr); - if (pud_none(*pud)) - goto again; + if (walk->vma) { + split_huge_pud(walk->vma, pud, addr); + if (pud_none(*pud)) + goto again; + } else if (pud_leaf(*pud)) { + continue; + } if (walk->pmd_entry || walk->pte_entry) err = walk_pmd_range(pud, addr, next, walk); From patchwork Mon Jul 22 15:42:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052861 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 24CBC1399 for ; Mon, 22 Jul 2019 15:49:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0F15A2847E for ; Mon, 22 Jul 2019 15:49:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 00F0428488; Mon, 22 Jul 2019 15:49:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 9A29B2846D for ; Mon, 22 Jul 2019 15:49:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=McceTMtouWYetLvseVM9Of0HYUoDOydSMP1tE0Nia1o=; b=cbYuOF9OaHnka5 HM7rmIRjYN6wjz7/JN2NODqVprEvGSvtG4h9zxoF7pR470WdjakWKs+X0oNI0AFbDf6c8wsUZtde7 UGfJvVXi2lNCMsTw09yeZSfTbN6iRJU7sOrAGUJ6YmXoe4iQ4rCWiiSV9HTihRWh5NBlobsoeEQCp 
ZoOKUxlesytqJIgnCVbCsJOjb0MHhQkKI3FdbOdmAXLiaJGaEQXCC9jHNzgCurh+ORZ88v6OnWNGM ZexZv++2hTx/Q8hwVA75YyBeYPVaZ26XXB65l5CCIS1IajbMGAlHFmzykePp+qKHo0MmjPF9zcaUN qbySJgZIeE4RK6V/amEw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaYv-0007Hp-19; Mon, 22 Jul 2019 15:49:17 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaSs-00025o-Eg for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:09 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EFE9E1509; Mon, 22 Jul 2019 08:43:01 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 66A7A3F694; Mon, 22 Jul 2019 08:42:59 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 13/21] mm: pagewalk: Add test_p?d callbacks Date: Mon, 22 Jul 2019 16:42:02 +0100 Message-Id: <20190722154210.42799-14-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084302_685231_CFB77B3D X-CRM114-Status: GOOD ( 15.20 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP It is useful to be able to skip parts of the page table tree even when walking without VMAs. Add test_p?d callbacks similar to test_walk but which are called just before a table at that level is walked. If the callback returns non-zero then the entire table is skipped. Signed-off-by: Steven Price --- include/linux/mm.h | 11 +++++++++++ mm/pagewalk.c | 24 ++++++++++++++++++++++++ 2 files changed, 35 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index b22799129128..325a1ca6f820 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1447,6 +1447,11 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma, * value means "do page table walk over the current vma," * and a negative one means "abort current page table walk * right now." 1 means "skip the current vma." + * @test_pmd: similar to test_walk(), but called for every pmd. + * @test_pud: similar to test_walk(), but called for every pud. + * @test_p4d: similar to test_walk(), but called for every p4d. + * Returning 0 means walk this part of the page tables, + * returning 1 means to skip this range. 
* @mm: mm_struct representing the target process of page table walk * @vma: vma currently walked (NULL if walking outside vmas) * @private: private data for callbacks' usage @@ -1471,6 +1476,12 @@ struct mm_walk { struct mm_walk *walk); int (*test_walk)(unsigned long addr, unsigned long next, struct mm_walk *walk); + int (*test_pmd)(unsigned long addr, unsigned long next, + pmd_t *pmd_start, struct mm_walk *walk); + int (*test_pud)(unsigned long addr, unsigned long next, + pud_t *pud_start, struct mm_walk *walk); + int (*test_p4d)(unsigned long addr, unsigned long next, + p4d_t *p4d_start, struct mm_walk *walk); struct mm_struct *mm; struct vm_area_struct *vma; void *private; diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 1cbef99e9258..6bea79b95be3 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -32,6 +32,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, unsigned long next; int err = 0; + if (walk->test_pmd) { + err = walk->test_pmd(addr, end, pmd_offset(pud, 0UL), walk); + if (err < 0) + return err; + if (err > 0) + return 0; + } + pmd = pmd_offset(pud, addr); do { again: @@ -82,6 +90,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, unsigned long next; int err = 0; + if (walk->test_pud) { + err = walk->test_pud(addr, end, pud_offset(p4d, 0UL), walk); + if (err < 0) + return err; + if (err > 0) + return 0; + } + pud = pud_offset(p4d, addr); do { again: @@ -124,6 +140,14 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, unsigned long next; int err = 0; + if (walk->test_p4d) { + err = walk->test_p4d(addr, end, p4d_offset(pgd, 0UL), walk); + if (err < 0) + return err; + if (err > 0) + return 0; + } + p4d = p4d_offset(pgd, addr); do { next = p4d_addr_end(addr, end); From patchwork Mon Jul 22 15:42:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052859 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0EAE41398 for ; Mon, 22 Jul 2019 15:49:05 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F0852202DB for ; Mon, 22 Jul 2019 15:49:04 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E465128450; Mon, 22 Jul 2019 15:49:04 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 9667F28438 for ; Mon, 22 Jul 2019 15:49:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=hmL4qufPeuDrtD5BPPrn3CjUSsczKc4JjakH+2iHnmo=; b=CfSOcOSr2CiO55 
ftYEMGalSEb/hfWuH0z00Ak30iqjozTWuRbU1rOuyE8rsKgGx93dp4hSpaiKv28UdgoY7IKwPtQDv kieoD6wGtd6QodbsTX5dbYNh3DpKjZCd0gGzSv0o32M6KWVImDYqMY3rf8eizXFL1LW+zWyGrrF1X wx03mDTnaiwGFanonq4wn9N7zP1NQuFBG8k1Ya7Dzmf98fPp7KVL/LECyXJtUs9FmBtmoKrLwkaGa WQP3xnewcxXqFe7BrWsO4Rto2JWV8tvP19mqOQXl9RydNxcKTPFCo55S+XBJmacwvVWzAvhk2fb2t lUXj6rsrt6D//g4zvxNQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaYb-00074Q-BE; Mon, 22 Jul 2019 15:48:57 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaSv-000281-4T for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:09 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C90AE15BF; Mon, 22 Jul 2019 08:43:04 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3192C3F694; Mon, 22 Jul 2019 08:43:02 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 14/21] x86: mm: Don't display pages which aren't present in debugfs Date: Mon, 22 Jul 2019 16:42:03 +0100 Message-Id: <20190722154210.42799-15-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084305_374928_05DEC269 X-CRM114-Status: GOOD ( 13.08 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP For the /sys/kernel/debug/page_tables/ files, rather than outputing a mostly empty line when a block of memory isn't present just skip the line. This keeps the output shorter and will help with a future change switching to using the generic page walk code as we no longer care about the 'level' that the page table holes are at. 
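The rule the hunks below implement, pulled out as a standalone sketch (this is a hypothetical helper, not the real note_page(), which carries more state than shown):

	/*
	 * A finished region only produces a line of output, and only counts
	 * against the per-marker line limit, if the entry that formed it was
	 * actually present.
	 */
	static bool region_should_be_printed(pgprotval_t cur,
					     const struct pg_state *st)
	{
		if (!(cur & _PAGE_PRESENT))
			return false;	/* not-present gap: stay silent */

		return !st->marker->max_lines ||
		       st->lines < st->marker->max_lines;
	}

A non-present range still terminates the region being accumulated; it just no longer produces a mostly empty line of output.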
Signed-off-by: Steven Price --- arch/x86/mm/dump_pagetables.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index ab67822fd2f4..95728027dd3b 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -301,8 +301,8 @@ static void note_page(struct seq_file *m, struct pg_state *st, /* * Now print the actual finished series */ - if (!st->marker->max_lines || - st->lines < st->marker->max_lines) { + if ((cur & _PAGE_PRESENT) && (!st->marker->max_lines || + st->lines < st->marker->max_lines)) { pt_dump_seq_printf(m, st->to_dmesg, "0x%0*lx-0x%0*lx ", width, st->start_address, @@ -318,7 +318,8 @@ static void note_page(struct seq_file *m, struct pg_state *st, printk_prot(m, st->current_prot, st->level, st->to_dmesg); } - st->lines++; + if (cur & _PAGE_PRESENT) + st->lines++; /* * We print markers for special areas of address space, From patchwork Mon Jul 22 15:42:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052865 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4FC121398 for ; Mon, 22 Jul 2019 15:49:39 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3CF3A28438 for ; Mon, 22 Jul 2019 15:49:39 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 306172846D; Mon, 22 Jul 2019 15:49:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 9C1AA28438 for ; Mon, 22 Jul 2019 15:49:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=QcqHpyeV0QMm3BhpDOyZMVw7F1qpEZzUw8IFhB3YoWs=; b=AFwsQZR9Jmd1Zj SoY4z6X7fMgwG0gt2DseDsiN6gRy31ABvp+SNWa8iAYgkixm0EoJriXwPvK45iHxCNHy25jy9bRk3 6YuCto8jbCVifNVsBdnio8N3XsCXWoFz9MAlJiF+8v2GoxODpEy3M0vy/LgrFmHfJcZIXC2KdxVHN G968IH5LJ4mJi/+jCqrTX8IP7VmPE+qnh2fZpHHDe4Hcw+o6Ub/VQuj3Z6GuG6J3Fpp5J9YaXfIDV BlyF30sye0b92P2x6oxUeLIMOeM6NmFuE2swqowbFX7ROd4iAmL/A3oJQ3vahHGsJmfeA7Se7aBqT /5yzqeV3WX/YTxvMF9Tg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaZB-0007V8-H3; Mon, 22 Jul 2019 15:49:33 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaSy-0002AC-4e for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:10 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com 
(Postfix) with ESMTP id B403715A2; Mon, 22 Jul 2019 08:43:07 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0B07B3F694; Mon, 22 Jul 2019 08:43:04 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 15/21] x86: mm: Point to struct seq_file from struct pg_state Date: Mon, 22 Jul 2019 16:42:04 +0100 Message-Id: <20190722154210.42799-16-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084308_616907_0C511F7B X-CRM114-Status: GOOD ( 14.83 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP mm/dump_pagetables.c passes both struct seq_file and struct pg_state down the chain of walk_*_level() functions to be passed to note_page(). Instead place the struct seq_file in struct pg_state and access it from struct pg_state (which is private to this file) in note_page(). Signed-off-by: Steven Price --- arch/x86/mm/dump_pagetables.c | 69 ++++++++++++++++++----------------- 1 file changed, 35 insertions(+), 34 deletions(-) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index 95728027dd3b..fe21b57f629f 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -36,6 +36,7 @@ struct pg_state { bool to_dmesg; bool check_wx; unsigned long wx_pages; + struct seq_file *seq; }; struct addr_marker { @@ -265,11 +266,12 @@ static void note_wx(struct pg_state *st) * of PTE entries; the next one is different so we need to * print what we collected so far. 
*/ -static void note_page(struct seq_file *m, struct pg_state *st, - pgprot_t new_prot, pgprotval_t new_eff, int level) +static void note_page(struct pg_state *st, pgprot_t new_prot, + pgprotval_t new_eff, int level) { pgprotval_t prot, cur, eff; static const char units[] = "BKMGTPE"; + struct seq_file *m = st->seq; /* * If we have a "break" in the series, we need to flush the state that @@ -355,8 +357,8 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2) ((prot1 | prot2) & _PAGE_NX); } -static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, - pgprotval_t eff_in, unsigned long P) +static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in, + unsigned long P) { int i; pte_t *pte; @@ -367,7 +369,7 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, pte = pte_offset_map(&addr, st->current_address); prot = pte_flags(*pte); eff = effective_prot(eff_in, prot); - note_page(m, st, __pgprot(prot), eff, 5); + note_page(st, __pgprot(prot), eff, 5); pte_unmap(pte); } } @@ -380,22 +382,20 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, * us dozens of seconds (minutes for 5-level config) while checking for * W+X mapping or reading kernel_page_tables debugfs file. */ -static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, - void *pt) +static inline bool kasan_page_table(struct pg_state *st, void *pt) { if (__pa(pt) == __pa(kasan_early_shadow_pmd) || (pgtable_l5_enabled() && __pa(pt) == __pa(kasan_early_shadow_p4d)) || __pa(pt) == __pa(kasan_early_shadow_pud)) { pgprotval_t prot = pte_flags(kasan_early_shadow_pte[0]); - note_page(m, st, __pgprot(prot), 0, 5); + note_page(st, __pgprot(prot), 0, 5); return true; } return false; } #else -static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, - void *pt) +static inline bool kasan_page_table(struct pg_state *st, void *pt) { return false; } @@ -403,7 +403,7 @@ static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, #if PTRS_PER_PMD > 1 -static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, +static void walk_pmd_level(struct pg_state *st, pud_t addr, pgprotval_t eff_in, unsigned long P) { int i; @@ -417,27 +417,27 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, prot = pmd_flags(*start); eff = effective_prot(eff_in, prot); if (pmd_large(*start) || !pmd_present(*start)) { - note_page(m, st, __pgprot(prot), eff, 4); - } else if (!kasan_page_table(m, st, pmd_start)) { - walk_pte_level(m, st, *start, eff, + note_page(st, __pgprot(prot), eff, 4); + } else if (!kasan_page_table(st, pmd_start)) { + walk_pte_level(st, *start, eff, P + i * PMD_LEVEL_MULT); } } else - note_page(m, st, __pgprot(0), 0, 4); + note_page(st, __pgprot(0), 0, 4); start++; } } #else -#define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p) +#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p) #define pud_large(a) pmd_large(__pmd(pud_val(a))) #define pud_none(a) pmd_none(__pmd(pud_val(a))) #endif #if PTRS_PER_PUD > 1 -static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr, - pgprotval_t eff_in, unsigned long P) +static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in, + unsigned long P) { int i; pud_t *start, *pud_start; @@ -451,33 +451,33 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr, prot = pud_flags(*start); eff = 
effective_prot(eff_in, prot); if (pud_large(*start) || !pud_present(*start)) { - note_page(m, st, __pgprot(prot), eff, 3); - } else if (!kasan_page_table(m, st, pud_start)) { - walk_pmd_level(m, st, *start, eff, + note_page(st, __pgprot(prot), eff, 3); + } else if (!kasan_page_table(st, pud_start)) { + walk_pmd_level(st, *start, eff, P + i * PUD_LEVEL_MULT); } } else - note_page(m, st, __pgprot(0), 0, 3); + note_page(st, __pgprot(0), 0, 3); start++; } } #else -#define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p) +#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p) #define p4d_large(a) pud_large(__pud(p4d_val(a))) #define p4d_none(a) pud_none(__pud(p4d_val(a))) #endif -static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr, - pgprotval_t eff_in, unsigned long P) +static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in, + unsigned long P) { int i; p4d_t *start, *p4d_start; pgprotval_t prot, eff; if (PTRS_PER_P4D == 1) - return walk_pud_level(m, st, __p4d(pgd_val(addr)), eff_in, P); + return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P); p4d_start = start = (p4d_t *)pgd_page_vaddr(addr); @@ -487,13 +487,13 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr, prot = p4d_flags(*start); eff = effective_prot(eff_in, prot); if (p4d_large(*start) || !p4d_present(*start)) { - note_page(m, st, __pgprot(prot), eff, 2); - } else if (!kasan_page_table(m, st, p4d_start)) { - walk_pud_level(m, st, *start, eff, + note_page(st, __pgprot(prot), eff, 2); + } else if (!kasan_page_table(st, p4d_start)) { + walk_pud_level(st, *start, eff, P + i * P4D_LEVEL_MULT); } } else - note_page(m, st, __pgprot(0), 0, 2); + note_page(st, __pgprot(0), 0, 2); start++; } @@ -530,6 +530,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, } st.check_wx = checkwx; + st.seq = m; if (checkwx) st.wx_pages = 0; @@ -543,13 +544,13 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, eff = prot; #endif if (pgd_large(*start) || !pgd_present(*start)) { - note_page(m, &st, __pgprot(prot), eff, 1); + note_page(&st, __pgprot(prot), eff, 1); } else { - walk_p4d_level(m, &st, *start, eff, + walk_p4d_level(&st, *start, eff, i * PGD_LEVEL_MULT); } } else - note_page(m, &st, __pgprot(0), 0, 1); + note_page(&st, __pgprot(0), 0, 1); cond_resched(); start++; @@ -557,7 +558,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, /* Flush out the last page */ st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT); - note_page(m, &st, __pgprot(0), 0, 0); + note_page(&st, __pgprot(0), 0, 0); if (!checkwx) return; if (st.wx_pages) From patchwork Mon Jul 22 15:42:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052869 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 805AC1399 for ; Mon, 22 Jul 2019 15:50:01 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6C35428438 for ; Mon, 22 Jul 2019 15:50:01 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5ECD42846D; Mon, 22 Jul 2019 15:50:01 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, 
score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 05DB028438 for ; Mon, 22 Jul 2019 15:50:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=vBPaEimjLWQizW9pgnbbK2P5xKlVJxqLkOzH8XAhpM4=; b=QoNc/WA7vrkN2X xap1EML88j8HQCXW7ZqeWols0lO/5HhLHZ95eGQSikbx7XwcKErDDOTPjQBTLTOY2YmfcZehZvwIS qskORpunzGGlpeV7jxASuajYAT/q7KBxmgITFLUIA5qGrz/FSL0IrkYt2c/7zPNs56JGHO/i9BIUL eTYujHQ+z9czyyoE7rN+k77objqNzzHwsz8XQ8KjOragbjOVJ3PEhmBfAB8yNOhRffCVlszLRZz1A 3wed6AVPIucm2zOpmIBCZJluLFd+FjDuy1UCvp4uC7fWktB9aH04sXLFpmk5+FW+d/7Rx1mO4GNfI zZEX7pH9OBcdPp6sw93A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaZb-0007jL-Q8; Mon, 22 Jul 2019 15:50:00 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaT1-0002Cx-3J for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:15 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7F0791509; Mon, 22 Jul 2019 08:43:10 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E99833F694; Mon, 22 Jul 2019 08:43:07 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 16/21] x86: mm+efi: Convert ptdump_walk_pgd_level() to take a mm_struct Date: Mon, 22 Jul 2019 16:42:05 +0100 Message-Id: <20190722154210.42799-17-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084311_888641_AE3F44D5 X-CRM114-Status: GOOD ( 12.78 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP To enable x86 to use the generic walk_page_range() function, the callers of ptdump_walk_pgd_level() need to pass an mm_struct rather than the raw pgd_t pointer. 
Luckily since commit 7e904a91bf60 ("efi: Use efi_mm in x86 as well as ARM") we now have an mm_struct for EFI on x86. Signed-off-by: Steven Price --- arch/x86/include/asm/pgtable.h | 2 +- arch/x86/mm/dump_pagetables.c | 4 ++-- arch/x86/platform/efi/efi_32.c | 2 +- arch/x86/platform/efi/efi_64.c | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 6986a451619e..1a2b469f6e75 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -29,7 +29,7 @@ extern pgd_t early_top_pgt[PTRS_PER_PGD]; int __init __early_make_pgtable(unsigned long address, pmdval_t pmd); -void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd); +void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm); void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user); void ptdump_walk_pgd_level_checkwx(void); void ptdump_walk_user_pgd_level_checkwx(void); diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index fe21b57f629f..6f0d1296dee1 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -568,9 +568,9 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, pr_info("x86/mm: Checked W+X mappings: passed, no W+X pages found.\n"); } -void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd) +void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm) { - ptdump_walk_pgd_level_core(m, pgd, false, true); + ptdump_walk_pgd_level_core(m, mm->pgd, false, true); } void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user) diff --git a/arch/x86/platform/efi/efi_32.c b/arch/x86/platform/efi/efi_32.c index 9959657127f4..9175ceaa6e72 100644 --- a/arch/x86/platform/efi/efi_32.c +++ b/arch/x86/platform/efi/efi_32.c @@ -49,7 +49,7 @@ void efi_sync_low_kernel_mappings(void) {} void __init efi_dump_pagetable(void) { #ifdef CONFIG_EFI_PGT_DUMP - ptdump_walk_pgd_level(NULL, swapper_pg_dir); + ptdump_walk_pgd_level(NULL, init_mm); #endif } diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c index 08ce8177c3af..47a4c6c70648 100644 --- a/arch/x86/platform/efi/efi_64.c +++ b/arch/x86/platform/efi/efi_64.c @@ -614,9 +614,9 @@ void __init efi_dump_pagetable(void) { #ifdef CONFIG_EFI_PGT_DUMP if (efi_enabled(EFI_OLD_MEMMAP)) - ptdump_walk_pgd_level(NULL, swapper_pg_dir); + ptdump_walk_pgd_level(NULL, init_mm); else - ptdump_walk_pgd_level(NULL, efi_mm.pgd); + ptdump_walk_pgd_level(NULL, efi_mm); #endif } From patchwork Mon Jul 22 15:42:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052871 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 28A001399 for ; Mon, 22 Jul 2019 15:50:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 14CBD28438 for ; Mon, 22 Jul 2019 15:50:29 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 086902846D; Mon, 22 Jul 2019 15:50:29 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from 
bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 8873828438 for ; Mon, 22 Jul 2019 15:50:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Zf7ZMe6gv/zv+sXenQrC7vdcs7v2qST5y3Zj9wHsUeg=; b=SnTuwa5lcrx/wX Zp5dDsqDELnDlmIQTkBWUf3uiNBKUhAkiY2L/lygS0kcQXoBGalUxDCFTq4PG8/8mtBlevYoaOJiE lWFcuPqKXUcsWmwRSWH3qgJnLrBCmHWEu3DiLOfIYR/H9ccLioY+EyuOM3HiaWUHaF9CrCBK1hZAQ 57D+5ztDLoKf+KR4gUxzDYni/hLIItTZq5tJz5hX6L6s8DBG5V99EwiYGpFSpsuGJH1tFPj4VztmD pSbK3MOQcEuclnWNzBCYoG9AZvp3yro78Sgp9pyKev0hCUWb/Otcz/iHlMzOR61SC/YoGwHtI2e7Q dt676BxACYzzb0T/7OLQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaZz-0000iW-Fi; Mon, 22 Jul 2019 15:50:23 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaT4-0002FE-2z for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:15 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4B3A328; Mon, 22 Jul 2019 08:43:13 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B4F913F694; Mon, 22 Jul 2019 08:43:10 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 17/21] x86: mm: Convert ptdump_walk_pgd_level_debugfs() to take an mm_struct Date: Mon, 22 Jul 2019 16:42:06 +0100 Message-Id: <20190722154210.42799-18-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084314_256005_8428B791 X-CRM114-Status: GOOD ( 14.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP To enable x86 to use the generic walk_page_range() function, the callers of ptdump_walk_pgd_level_debugfs() need to pass in the mm_struct. This means that ptdump_walk_pgd_level_core() is now always passed a valid pgd, so drop the support for pgd==NULL. 
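For callers the conversion is mechanical; illustratively (the real updates are in the diff below):

	/* before: hand over the raw page table pointer */
	ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, false);

	/* after: hand over the mm, the dumper finds mm->pgd itself */
	ptdump_walk_pgd_level_debugfs(m, current->mm, false);

Handing over the whole mm_struct rather than just the pgd is what later lets the dumper use the generic walk_page_range(), which needs an mm for walk->mm and its mmap_sem, not just a pgd pointer.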
Signed-off-by: Steven Price --- arch/x86/include/asm/pgtable.h | 3 ++- arch/x86/mm/debug_pagetables.c | 8 ++++---- arch/x86/mm/dump_pagetables.c | 14 ++++++-------- 3 files changed, 12 insertions(+), 13 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 1a2b469f6e75..1b255987712e 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -30,7 +30,8 @@ extern pgd_t early_top_pgt[PTRS_PER_PGD]; int __init __early_make_pgtable(unsigned long address, pmdval_t pmd); void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm); -void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user); +void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm, + bool user); void ptdump_walk_pgd_level_checkwx(void); void ptdump_walk_user_pgd_level_checkwx(void); diff --git a/arch/x86/mm/debug_pagetables.c b/arch/x86/mm/debug_pagetables.c index 39001a401eff..d0efec713c6c 100644 --- a/arch/x86/mm/debug_pagetables.c +++ b/arch/x86/mm/debug_pagetables.c @@ -7,7 +7,7 @@ static int ptdump_show(struct seq_file *m, void *v) { - ptdump_walk_pgd_level_debugfs(m, NULL, false); + ptdump_walk_pgd_level_debugfs(m, &init_mm, false); return 0; } @@ -17,7 +17,7 @@ static int ptdump_curknl_show(struct seq_file *m, void *v) { if (current->mm->pgd) { down_read(&current->mm->mmap_sem); - ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, false); + ptdump_walk_pgd_level_debugfs(m, current->mm, false); up_read(&current->mm->mmap_sem); } return 0; @@ -30,7 +30,7 @@ static int ptdump_curusr_show(struct seq_file *m, void *v) { if (current->mm->pgd) { down_read(&current->mm->mmap_sem); - ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, true); + ptdump_walk_pgd_level_debugfs(m, current->mm, true); up_read(&current->mm->mmap_sem); } return 0; @@ -43,7 +43,7 @@ DEFINE_SHOW_ATTRIBUTE(ptdump_curusr); static int ptdump_efi_show(struct seq_file *m, void *v) { if (efi_mm.pgd) - ptdump_walk_pgd_level_debugfs(m, efi_mm.pgd, false); + ptdump_walk_pgd_level_debugfs(m, &efi_mm, false); return 0; } diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index 6f0d1296dee1..bcaf27b637e0 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -519,16 +519,12 @@ static inline bool is_hypervisor_range(int idx) static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, bool checkwx, bool dmesg) { - pgd_t *start = INIT_PGD; + pgd_t *start = pgd; pgprotval_t prot, eff; int i; struct pg_state st = {}; - if (pgd) { - start = pgd; - st.to_dmesg = dmesg; - } - + st.to_dmesg = dmesg; st.check_wx = checkwx; st.seq = m; if (checkwx) @@ -573,8 +569,10 @@ void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm) ptdump_walk_pgd_level_core(m, mm->pgd, false, true); } -void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user) +void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm, + bool user) { + pgd_t *pgd = mm->pgd; #ifdef CONFIG_PAGE_TABLE_ISOLATION if (user && boot_cpu_has(X86_FEATURE_PTI)) pgd = kernel_to_user_pgdp(pgd); @@ -600,7 +598,7 @@ void ptdump_walk_user_pgd_level_checkwx(void) void ptdump_walk_pgd_level_checkwx(void) { - ptdump_walk_pgd_level_core(NULL, NULL, true, false); + ptdump_walk_pgd_level_core(NULL, INIT_PGD, true, false); } static int __init pt_dump_init(void) From patchwork Mon Jul 22 15:42:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id:
11052873 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4156B1398 for ; Mon, 22 Jul 2019 15:50:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2D64C27FB3 for ; Mon, 22 Jul 2019 15:50:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2123028438; Mon, 22 Jul 2019 15:50:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id B87B427FB3 for ; Mon, 22 Jul 2019 15:50:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=uJDzszkWe4kwzjuLHUHZKwIaUPGXpI7O2UdFdMZKA5E=; b=o+/M3qn9sDoXBS 4om4OY7qybzWqrT80GEl6R/aZnDP8CZoanUpEzT/Md2iCuGMJAFLZOns0J1LegIdepBS8S0Fqxcn5 QO5L65ATc/k/+c0G9W/3vCaPa63Hozgx/jDR5FumFAVMVZ3xS8xW+tlUUz/+IJUy2iTntvCMtEWIu d6gnaj+bI7sJKzUrAJhQA6YK/ThN5F8obYvUyjO3uue/37ieTBdBOpb1r7VMjIlrukrVfoFsTbHsy oa3CbIhL0OWwUGi1r4kssOQFwcdfCrOo07SX1uKAg7oUSekMspMbW4W4oG4b2A5navgzzObFaPbjA 1bSn+mxwatJ8i1gPyuuA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaaK-0000vd-TM; Mon, 22 Jul 2019 15:50:45 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hpaT6-0002H3-DG for linux-arm-kernel@lists.infradead.org; Mon, 22 Jul 2019 15:43:18 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 16C1515A2; Mon, 22 Jul 2019 08:43:16 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.133]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8186F3F694; Mon, 22 Jul 2019 08:43:13 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 18/21] x86: mm: Convert ptdump_walk_pgd_level_core() to take an mm_struct Date: Mon, 22 Jul 2019 16:42:07 +0100 Message-Id: <20190722154210.42799-19-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190722_084316_716536_2D0C718B X-CRM114-Status: GOOD ( 15.26 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , 
linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP An mm_struct is needed to enable x86 to use of the generic walk_page_range() function. In the case of walking the user page tables (when CONFIG_PAGE_TABLE_ISOLATION is enabled), it is necessary to create a fake_mm structure because there isn't an mm_struct with a pointer to the pgd of the user page tables. This fake_mm structure is initialised with the minimum necessary for the generic page walk code. Signed-off-by: Steven Price --- arch/x86/mm/dump_pagetables.c | 36 ++++++++++++++++++++--------------- 1 file changed, 21 insertions(+), 15 deletions(-) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index bcaf27b637e0..546e28a7785c 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -107,8 +107,6 @@ static struct addr_marker address_markers[] = { [END_OF_SPACE_NR] = { -1, NULL } }; -#define INIT_PGD ((pgd_t *) &init_top_pgt) - #else /* CONFIG_X86_64 */ enum address_markers_idx { @@ -143,8 +141,6 @@ static struct addr_marker address_markers[] = { [END_OF_SPACE_NR] = { -1, NULL } }; -#define INIT_PGD (swapper_pg_dir) - #endif /* !CONFIG_X86_64 */ /* Multipliers for offsets within the PTEs */ @@ -516,10 +512,10 @@ static inline bool is_hypervisor_range(int idx) #endif } -static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, +static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm, bool checkwx, bool dmesg) { - pgd_t *start = pgd; + pgd_t *start = mm->pgd; pgprotval_t prot, eff; int i; struct pg_state st = {}; @@ -566,39 +562,49 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm) { - ptdump_walk_pgd_level_core(m, mm->pgd, false, true); + ptdump_walk_pgd_level_core(m, mm, false, true); } +#ifdef CONFIG_PAGE_TABLE_ISOLATION +static void ptdump_walk_pgd_level_user_core(struct seq_file *m, + struct mm_struct *mm, + bool checkwx, bool dmesg) +{ + struct mm_struct fake_mm = { + .pgd = kernel_to_user_pgdp(mm->pgd) + }; + init_rwsem(&fake_mm.mmap_sem); + ptdump_walk_pgd_level_core(m, &fake_mm, checkwx, dmesg); +} +#endif + void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm, bool user) { - pgd_t *pgd = mm->pgd; #ifdef CONFIG_PAGE_TABLE_ISOLATION if (user && boot_cpu_has(X86_FEATURE_PTI)) - pgd = kernel_to_user_pgdp(pgd); + ptdump_walk_pgd_level_user_core(m, mm, false, false); + else #endif - ptdump_walk_pgd_level_core(m, pgd, false, false); + ptdump_walk_pgd_level_core(m, mm, false, false); } EXPORT_SYMBOL_GPL(ptdump_walk_pgd_level_debugfs); void ptdump_walk_user_pgd_level_checkwx(void) { #ifdef CONFIG_PAGE_TABLE_ISOLATION - pgd_t *pgd = INIT_PGD; - if (!(__supported_pte_mask & _PAGE_NX) || !boot_cpu_has(X86_FEATURE_PTI)) return; pr_info("x86/mm: Checking user space page tables\n"); - pgd = kernel_to_user_pgdp(pgd); - ptdump_walk_pgd_level_core(NULL, pgd, true, false); + ptdump_walk_pgd_level_user_core(NULL, &init_mm, true, false); #endif } void ptdump_walk_pgd_level_checkwx(void) { - ptdump_walk_pgd_level_core(NULL, INIT_PGD, true, false); + ptdump_walk_pgd_level_core(NULL, 
&init_mm, true, false); } static int __init pt_dump_init(void) From patchwork Mon Jul 22 15:42:08 2019 X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052877 From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 19/21] mm: Add generic ptdump Date: Mon, 22 Jul 2019 16:42:08 +0100 Message-Id: <20190722154210.42799-20-steven.price@arm.com> In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Add a generic version of page table dumping that architectures can opt-in to Signed-off-by: Steven Price --- include/linux/ptdump.h | 19 +++++ mm/Kconfig.debug | 21 ++++++ mm/Makefile | 1 + mm/ptdump.c | 161 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 202 insertions(+) create mode 100644 include/linux/ptdump.h create mode 100644 mm/ptdump.c diff --git a/include/linux/ptdump.h b/include/linux/ptdump.h new file mode 100644 index 000000000000..eb8e78154be3 --- /dev/null +++ b/include/linux/ptdump.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_PTDUMP_H +#define _LINUX_PTDUMP_H + +struct ptdump_range { + unsigned long start; + unsigned long end; +}; + +struct ptdump_state { + void (*note_page)(struct ptdump_state *st, unsigned long addr, + int level, unsigned long val); + const struct ptdump_range *range; +}; + +void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm); + +#endif /* _LINUX_PTDUMP_H */ diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug index 82b6a20898bd..7ad939b7140f 100644 --- a/mm/Kconfig.debug +++ b/mm/Kconfig.debug @@ -115,3 +115,24 @@ config DEBUG_RODATA_TEST depends on STRICT_KERNEL_RWX ---help--- This option enables a testcase for the setting rodata read-only. + +config GENERIC_PTDUMP + bool + +config PTDUMP_CORE + bool + +config PTDUMP_DEBUGFS + bool "Export kernel pagetable layout to userspace via debugfs" + depends on DEBUG_KERNEL + depends on DEBUG_FS + depends on GENERIC_PTDUMP + select PTDUMP_CORE + help + Say Y here if you want to show the kernel pagetable layout in a + debugfs file. This information is only useful for kernel developers + who are working in architecture specific areas of the kernel. + It is probably not a good idea to enable this feature in a production + kernel. + + If in doubt, say N. 
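As an aside on the interface added in include/linux/ptdump.h above: the generic walker reports entries through the note_page() callback using the level numbering defined by mm/ptdump.c further down (1 for pgd through 5 for pte, -1 for a hole, and a final call with level 0 so the callback can flush any pending state). A minimal sketch of a callback honouring that contract follows; the my_ names are illustrative rather than part of the patch, and the fragment assumes the declarations from the new header plus the usual kernel container_of() helper.

struct my_count_state {
	struct ptdump_state ptdump;
	unsigned long counts[6];	/* indexed by level, entries 1..5 used */
};

static void my_count_note_page(struct ptdump_state *pt_st, unsigned long addr,
			       int level, unsigned long val)
{
	struct my_count_state *st =
		container_of(pt_st, struct my_count_state, ptdump);

	if (level <= 0)		/* -1 is a hole, 0 is the final flush call */
		return;

	if (val)		/* only count populated entries */
		st->counts[level]++;
}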
diff --git a/mm/Makefile b/mm/Makefile index 338e528ad436..750a4c12d5da 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -104,3 +104,4 @@ obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o obj-$(CONFIG_PERCPU_STATS) += percpu-stats.o obj-$(CONFIG_HMM_MIRROR) += hmm.o obj-$(CONFIG_MEMFD_CREATE) += memfd.o +obj-$(CONFIG_PTDUMP_CORE) += ptdump.o diff --git a/mm/ptdump.c b/mm/ptdump.c new file mode 100644 index 000000000000..39befc9088b8 --- /dev/null +++ b/mm/ptdump.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include + +static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + pgd_t val = READ_ONCE(*pgd); + + if (pgd_leaf(val)) + st->note_page(st, addr, 1, pgd_val(val)); + + return 0; +} + +static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + p4d_t val = READ_ONCE(*p4d); + + if (p4d_leaf(val)) + st->note_page(st, addr, 2, p4d_val(val)); + + return 0; +} + +static int ptdump_pud_entry(pud_t *pud, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + pud_t val = READ_ONCE(*pud); + + if (pud_leaf(val)) + st->note_page(st, addr, 3, pud_val(val)); + + return 0; +} + +static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + pmd_t val = READ_ONCE(*pmd); + + if (pmd_leaf(val)) + st->note_page(st, addr, 4, pmd_val(val)); + + return 0; +} + +static int ptdump_pte_entry(pte_t *pte, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + + st->note_page(st, addr, 5, pte_val(READ_ONCE(*pte))); + + return 0; +} + +#ifdef CONFIG_KASAN +/* + * This is an optimization for KASAN=y case. Since all kasan page tables + * eventually point to the kasan_early_shadow_page we could call note_page() + * right away without walking through lower level page tables. This saves + * us dozens of seconds (minutes for 5-level config) while checking for + * W+X mapping or reading kernel_page_tables debugfs file. 
+ */ +static inline bool kasan_page_table(struct ptdump_state *st, void *pt, + unsigned long addr) +{ + if (__pa(pt) == __pa(kasan_early_shadow_pmd) || +#ifdef CONFIG_X86 + (pgtable_l5_enabled() && + __pa(pt) == __pa(kasan_early_shadow_p4d)) || +#endif + __pa(pt) == __pa(kasan_early_shadow_pud)) { + st->note_page(st, addr, 5, pte_val(kasan_early_shadow_pte[0])); + return true; + } + return false; +} +#else +static inline bool kasan_page_table(struct ptdump_state *st, void *pt, + unsigned long addr) +{ + return false; +} +#endif + +static int ptdump_test_p4d(unsigned long addr, unsigned long next, + p4d_t *p4d, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + + if (kasan_page_table(st, p4d, addr)) + return 1; + return 0; +} + +static int ptdump_test_pud(unsigned long addr, unsigned long next, + pud_t *pud, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + + if (kasan_page_table(st, pud, addr)) + return 1; + return 0; +} + +static int ptdump_test_pmd(unsigned long addr, unsigned long next, + pmd_t *pmd, struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + + if (kasan_page_table(st, pmd, addr)) + return 1; + return 0; +} + +static int ptdump_hole(unsigned long addr, unsigned long next, + struct mm_walk *walk) +{ + struct ptdump_state *st = walk->private; + + st->note_page(st, addr, -1, 0); + + return 0; +} + +void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm) +{ + struct mm_walk walk = { + .mm = mm, + .pgd_entry = ptdump_pgd_entry, + .p4d_entry = ptdump_p4d_entry, + .pud_entry = ptdump_pud_entry, + .pmd_entry = ptdump_pmd_entry, + .pte_entry = ptdump_pte_entry, + .test_p4d = ptdump_test_p4d, + .test_pud = ptdump_test_pud, + .test_pmd = ptdump_test_pmd, + .pte_hole = ptdump_hole, + .private = st + }; + const struct ptdump_range *range = st->range; + + down_read(&mm->mmap_sem); + while (range->start != range->end) { + walk_page_range(range->start, range->end, &walk); + range++; + } + up_read(&mm->mmap_sem); + + /* Flush out the last page */ + st->note_page(st, 0, 0, 0); +} From patchwork Mon Jul 22 15:42:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052881 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 25CEF1399 for ; Mon, 22 Jul 2019 15:51:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 10A0B202DB for ; Mon, 22 Jul 2019 15:51:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 041322857F; Mon, 22 Jul 2019 15:51:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 2298527FC0 for ; Mon, 22 Jul 2019 15:51:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: 
From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 20/21] x86: mm: Convert dump_pagetables to use walk_page_range Date: Mon, 22 Jul 2019 16:42:09 +0100 Message-Id: <20190722154210.42799-21-steven.price@arm.com> In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Make use of the new functionality in walk_page_range to remove the arch page walking code and use the generic code to walk the page tables. The effective permissions are passed down the chain using new fields in struct pg_state. 
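To make that concrete, here is a small, self-contained illustration of how the effective permission is folded down the levels, mirroring the effective_prot() and prot_levels[] logic in the diff below. The flag values are stand-ins rather than the real x86 bit definitions, and this is plain userspace C, not part of the patch:

#include <stdio.h>

typedef unsigned long pgprotval_t;

/* Illustrative flag values only, not the real x86 bit positions. */
#define _PAGE_RW   0x002UL
#define _PAGE_USER 0x004UL
#define _PAGE_NX   0x100UL

static pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
{
	/* user/writable only if both levels grant it; NX if either sets it */
	return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) |
	       ((prot1 | prot2) & _PAGE_NX);
}

int main(void)
{
	/* flags seen at each level on the way down: pgd, p4d, pud, pmd leaf */
	pgprotval_t flags[4] = {
		_PAGE_USER | _PAGE_RW,	/* level 1: pgd */
		_PAGE_USER | _PAGE_RW,	/* level 2: p4d */
		_PAGE_USER | _PAGE_RW,	/* level 3: pud */
		_PAGE_USER | _PAGE_NX	/* level 4: pmd leaf, read-only, no-exec */
	};
	pgprotval_t prot_levels[4];

	for (int level = 1; level <= 4; level++) {
		pgprotval_t new_prot = flags[level - 1];
		pgprotval_t new_eff = (level > 1) ?
			effective_prot(prot_levels[level - 2], new_prot) :
			new_prot;

		prot_levels[level - 1] = new_eff;
		printf("level %d: effective prot %#lx\n", level, new_eff);
	}
	return 0;
}

The net effect is that a mapping is only reported as writable or user-accessible if every level on the path grants it, while NX set at any level sticks; in the example the pmd leaf ends up as user, non-writable, non-executable.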
The KASAN optimisation is implemented by including test_p?d callbacks which can decide to skip an entire tree of entries Signed-off-by: Steven Price --- arch/x86/Kconfig | 1 + arch/x86/Kconfig.debug | 20 +-- arch/x86/mm/Makefile | 4 +- arch/x86/mm/dump_pagetables.c | 285 +++++++--------------------------- 4 files changed, 64 insertions(+), 246 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 222855cc0158..76beeec13c27 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -119,6 +119,7 @@ config X86 select GENERIC_IRQ_RESERVATION_MODE select GENERIC_IRQ_SHOW select GENERIC_PENDING_IRQ if SMP + select GENERIC_PTDUMP select GENERIC_SMP_IDLE_THREAD select GENERIC_STRNCPY_FROM_USER select GENERIC_STRNLEN_USER diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug index 71c92db47c41..ca4ee374e685 100644 --- a/arch/x86/Kconfig.debug +++ b/arch/x86/Kconfig.debug @@ -62,26 +62,10 @@ config EARLY_PRINTK_USB_XDBC config MCSAFE_TEST def_bool n -config X86_PTDUMP_CORE - def_bool n - -config X86_PTDUMP - tristate "Export kernel pagetable layout to userspace via debugfs" - depends on DEBUG_KERNEL - select DEBUG_FS - select X86_PTDUMP_CORE - ---help--- - Say Y here if you want to show the kernel pagetable layout in a - debugfs file. This information is only useful for kernel developers - who are working in architecture specific areas of the kernel. - It is probably not a good idea to enable this feature in a production - kernel. - If in doubt, say "N" - config EFI_PGT_DUMP bool "Dump the EFI pagetable" depends on EFI - select X86_PTDUMP_CORE + select PTDUMP_CORE ---help--- Enable this if you want to dump the EFI page table before enabling virtual mode. This can be used to debug miscellaneous @@ -90,7 +74,7 @@ config EFI_PGT_DUMP config DEBUG_WX bool "Warn on W+X mappings at boot" - select X86_PTDUMP_CORE + select PTDUMP_CORE ---help--- Generate a warning if any W+X mappings are found at boot. diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 84373dc9b341..66cf0ea5c2be 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -28,8 +28,8 @@ obj-$(CONFIG_X86_PAT) += pat_rbtree.o obj-$(CONFIG_X86_32) += pgtable_32.o iomap_32.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o -obj-$(CONFIG_X86_PTDUMP_CORE) += dump_pagetables.o -obj-$(CONFIG_X86_PTDUMP) += debug_pagetables.o +obj-$(CONFIG_PTDUMP_CORE) += dump_pagetables.o +obj-$(CONFIG_PTDUMP_DEBUGFS) += debug_pagetables.o obj-$(CONFIG_HIGHMEM) += highmem_32.o diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index 546e28a7785c..61b5feff12c7 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -26,11 +27,12 @@ * when a "break" in the continuity is found. 
*/ struct pg_state { + struct ptdump_state ptdump; int level; - pgprot_t current_prot; + pgprotval_t current_prot; pgprotval_t effective_prot; + pgprotval_t prot_levels[5]; unsigned long start_address; - unsigned long current_address; const struct addr_marker *marker; unsigned long lines; bool to_dmesg; @@ -171,9 +173,8 @@ static struct addr_marker address_markers[] = { /* * Print a readable form of a pgprot_t to the seq_file */ -static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg) +static void printk_prot(struct seq_file *m, pgprotval_t pr, int level, bool dmsg) { - pgprotval_t pr = pgprot_val(prot); static const char * const level_name[] = { "cr3", "pgd", "p4d", "pud", "pmd", "pte" }; @@ -220,24 +221,11 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg) pt_dump_cont_printf(m, dmsg, "%s\n", level_name[level]); } -/* - * On 64 bits, sign-extend the 48 bit address to 64 bit - */ -static unsigned long normalize_addr(unsigned long u) -{ - int shift; - if (!IS_ENABLED(CONFIG_X86_64)) - return u; - - shift = 64 - (__VIRTUAL_MASK_SHIFT + 1); - return (signed long)(u << shift) >> shift; -} - -static void note_wx(struct pg_state *st) +static void note_wx(struct pg_state *st, unsigned long addr) { unsigned long npages; - npages = (st->current_address - st->start_address) / PAGE_SIZE; + npages = (addr - st->start_address) / PAGE_SIZE; #ifdef CONFIG_PCI_BIOS /* @@ -245,7 +233,7 @@ static void note_wx(struct pg_state *st) * Inform about it, but avoid the warning. */ if (pcibios_enabled && st->start_address >= PAGE_OFFSET + BIOS_BEGIN && - st->current_address <= PAGE_OFFSET + BIOS_END) { + addr <= PAGE_OFFSET + BIOS_END) { pr_warn_once("x86/mm: PCI BIOS W+X mapping %lu pages\n", npages); return; } @@ -257,25 +245,44 @@ static void note_wx(struct pg_state *st) (void *)st->start_address); } +static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2) +{ + return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) | + ((prot1 | prot2) & _PAGE_NX); +} + /* * This function gets called on a break in a continuous series * of PTE entries; the next one is different so we need to * print what we collected so far. */ -static void note_page(struct pg_state *st, pgprot_t new_prot, - pgprotval_t new_eff, int level) +static void note_page(struct ptdump_state *pt_st, unsigned long addr, int level, + unsigned long val) { - pgprotval_t prot, cur, eff; + struct pg_state *st = container_of(pt_st, struct pg_state, ptdump); + pgprotval_t new_prot, new_eff; + pgprotval_t cur, eff; static const char units[] = "BKMGTPE"; struct seq_file *m = st->seq; + new_prot = val & PTE_FLAGS_MASK; + + if (level > 1) { + new_eff = effective_prot(st->prot_levels[level - 2], + new_prot); + } else { + new_eff = new_prot; + } + + if (level > 0) + st->prot_levels[level-1] = new_eff; + /* * If we have a "break" in the series, we need to flush the state that * we have now. "break" is either changing perms, levels or * address space marker. 
*/ - prot = pgprot_val(new_prot); - cur = pgprot_val(st->current_prot); + cur = st->current_prot; eff = st->effective_prot; if (!st->level) { @@ -287,14 +294,14 @@ static void note_page(struct pg_state *st, pgprot_t new_prot, st->lines = 0; pt_dump_seq_printf(m, st->to_dmesg, "---[ %s ]---\n", st->marker->name); - } else if (prot != cur || new_eff != eff || level != st->level || - st->current_address >= st->marker[1].start_address) { + } else if (new_prot != cur || new_eff != eff || level != st->level || + addr >= st->marker[1].start_address) { const char *unit = units; unsigned long delta; int width = sizeof(unsigned long) * 2; if (st->check_wx && (eff & _PAGE_RW) && !(eff & _PAGE_NX)) - note_wx(st); + note_wx(st, addr); /* * Now print the actual finished series @@ -304,9 +311,9 @@ static void note_page(struct pg_state *st, pgprot_t new_prot, pt_dump_seq_printf(m, st->to_dmesg, "0x%0*lx-0x%0*lx ", width, st->start_address, - width, st->current_address); + width, addr); - delta = st->current_address - st->start_address; + delta = addr - st->start_address; while (!(delta & 1023) && unit[1]) { delta >>= 10; unit++; @@ -324,7 +331,7 @@ static void note_page(struct pg_state *st, pgprot_t new_prot, * such as the start of vmalloc space etc. * This helps in the interpretation. */ - if (st->current_address >= st->marker[1].start_address) { + if (addr >= st->marker[1].start_address) { if (st->marker->max_lines && st->lines > st->marker->max_lines) { unsigned long nskip = @@ -340,217 +347,43 @@ static void note_page(struct pg_state *st, pgprot_t new_prot, st->marker->name); } - st->start_address = st->current_address; + st->start_address = addr; st->current_prot = new_prot; st->effective_prot = new_eff; st->level = level; } } -static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2) -{ - return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) | - ((prot1 | prot2) & _PAGE_NX); -} - -static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in, - unsigned long P) -{ - int i; - pte_t *pte; - pgprotval_t prot, eff; - - for (i = 0; i < PTRS_PER_PTE; i++) { - st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT); - pte = pte_offset_map(&addr, st->current_address); - prot = pte_flags(*pte); - eff = effective_prot(eff_in, prot); - note_page(st, __pgprot(prot), eff, 5); - pte_unmap(pte); - } -} -#ifdef CONFIG_KASAN - -/* - * This is an optimization for KASAN=y case. Since all kasan page tables - * eventually point to the kasan_early_shadow_page we could call note_page() - * right away without walking through lower level page tables. This saves - * us dozens of seconds (minutes for 5-level config) while checking for - * W+X mapping or reading kernel_page_tables debugfs file. 
- */ -static inline bool kasan_page_table(struct pg_state *st, void *pt) -{ - if (__pa(pt) == __pa(kasan_early_shadow_pmd) || - (pgtable_l5_enabled() && - __pa(pt) == __pa(kasan_early_shadow_p4d)) || - __pa(pt) == __pa(kasan_early_shadow_pud)) { - pgprotval_t prot = pte_flags(kasan_early_shadow_pte[0]); - note_page(st, __pgprot(prot), 0, 5); - return true; - } - return false; -} -#else -static inline bool kasan_page_table(struct pg_state *st, void *pt) -{ - return false; -} -#endif - -#if PTRS_PER_PMD > 1 - -static void walk_pmd_level(struct pg_state *st, pud_t addr, - pgprotval_t eff_in, unsigned long P) -{ - int i; - pmd_t *start, *pmd_start; - pgprotval_t prot, eff; - - pmd_start = start = (pmd_t *)pud_page_vaddr(addr); - for (i = 0; i < PTRS_PER_PMD; i++) { - st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT); - if (!pmd_none(*start)) { - prot = pmd_flags(*start); - eff = effective_prot(eff_in, prot); - if (pmd_large(*start) || !pmd_present(*start)) { - note_page(st, __pgprot(prot), eff, 4); - } else if (!kasan_page_table(st, pmd_start)) { - walk_pte_level(st, *start, eff, - P + i * PMD_LEVEL_MULT); - } - } else - note_page(st, __pgprot(0), 0, 4); - start++; - } -} - -#else -#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p) -#define pud_large(a) pmd_large(__pmd(pud_val(a))) -#define pud_none(a) pmd_none(__pmd(pud_val(a))) -#endif - -#if PTRS_PER_PUD > 1 - -static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in, - unsigned long P) -{ - int i; - pud_t *start, *pud_start; - pgprotval_t prot, eff; - - pud_start = start = (pud_t *)p4d_page_vaddr(addr); - - for (i = 0; i < PTRS_PER_PUD; i++) { - st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT); - if (!pud_none(*start)) { - prot = pud_flags(*start); - eff = effective_prot(eff_in, prot); - if (pud_large(*start) || !pud_present(*start)) { - note_page(st, __pgprot(prot), eff, 3); - } else if (!kasan_page_table(st, pud_start)) { - walk_pmd_level(st, *start, eff, - P + i * PUD_LEVEL_MULT); - } - } else - note_page(st, __pgprot(0), 0, 3); - - start++; - } -} - -#else -#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p) -#define p4d_large(a) pud_large(__pud(p4d_val(a))) -#define p4d_none(a) pud_none(__pud(p4d_val(a))) -#endif - -static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in, - unsigned long P) -{ - int i; - p4d_t *start, *p4d_start; - pgprotval_t prot, eff; - - if (PTRS_PER_P4D == 1) - return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P); - - p4d_start = start = (p4d_t *)pgd_page_vaddr(addr); - - for (i = 0; i < PTRS_PER_P4D; i++) { - st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT); - if (!p4d_none(*start)) { - prot = p4d_flags(*start); - eff = effective_prot(eff_in, prot); - if (p4d_large(*start) || !p4d_present(*start)) { - note_page(st, __pgprot(prot), eff, 2); - } else if (!kasan_page_table(st, p4d_start)) { - walk_pud_level(st, *start, eff, - P + i * P4D_LEVEL_MULT); - } - } else - note_page(st, __pgprot(0), 0, 2); - - start++; - } -} +static const struct ptdump_range ptdump_ranges[] = { +#ifdef CONFIG_X86_64 -#define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a)))) -#define pgd_none(a) (pgtable_l5_enabled() ? 
pgd_none(a) : p4d_none(__p4d(pgd_val(a)))) +#define normalize_addr_shift (64 - (__VIRTUAL_MASK_SHIFT + 1)) +#define normalize_addr(u) ((signed long)(u << normalize_addr_shift) \ + >> normalize_addr_shift) -static inline bool is_hypervisor_range(int idx) -{ -#ifdef CONFIG_X86_64 - /* - * A hole in the beginning of kernel address space reserved - * for a hypervisor. - */ - return (idx >= pgd_index(GUARD_HOLE_BASE_ADDR)) && - (idx < pgd_index(GUARD_HOLE_END_ADDR)); + {0, PTRS_PER_PGD * PGD_LEVEL_MULT / 2}, + {normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2), ~0UL}, #else - return false; + {0, ~0UL}, #endif -} + {0, 0} +}; static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm, bool checkwx, bool dmesg) { - pgd_t *start = mm->pgd; - pgprotval_t prot, eff; - int i; - struct pg_state st = {}; - - st.to_dmesg = dmesg; - st.check_wx = checkwx; - st.seq = m; - if (checkwx) - st.wx_pages = 0; - - for (i = 0; i < PTRS_PER_PGD; i++) { - st.current_address = normalize_addr(i * PGD_LEVEL_MULT); - if (!pgd_none(*start) && !is_hypervisor_range(i)) { - prot = pgd_flags(*start); -#ifdef CONFIG_X86_PAE - eff = _PAGE_USER | _PAGE_RW; -#else - eff = prot; -#endif - if (pgd_large(*start) || !pgd_present(*start)) { - note_page(&st, __pgprot(prot), eff, 1); - } else { - walk_p4d_level(&st, *start, eff, - i * PGD_LEVEL_MULT); - } - } else - note_page(&st, __pgprot(0), 0, 1); + struct pg_state st = { + .ptdump = { + .note_page = note_page, + .range = ptdump_ranges + }, + .to_dmesg = dmesg, + .check_wx = checkwx, + .seq = m + }; - cond_resched(); - start++; - } + ptdump_walk_pgd(&st.ptdump, mm); - /* Flush out the last page */ - st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT); - note_page(&st, __pgprot(0), 0, 0); if (!checkwx) return; if (st.wx_pages) From patchwork Mon Jul 22 15:42:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 11052883 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 473191399 for ; Mon, 22 Jul 2019 15:51:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 33882202DB for ; Mon, 22 Jul 2019 15:51:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2705328514; Mon, 22 Jul 2019 15:51:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 76E62202DB for ; Mon, 22 Jul 2019 15:51:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=aHcr4wzAknklMyL4SVlH2JiMAlur3sx6F4/c0Q8uNcg=; 
From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v9 21/21] arm64: mm: Convert mm/dump.c to use walk_page_range() Date: Mon, 22 Jul 2019 16:42:10 +0100 Message-Id: <20190722154210.42799-22-steven.price@arm.com> In-Reply-To: <20190722154210.42799-1-steven.price@arm.com> References: <20190722154210.42799-1-steven.price@arm.com> Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , Will Deacon , Andrew Morton , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Now walk_page_range() can walk kernel page tables, we can switch the arm64 ptdump code over to using it, simplifying the code. 
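For orientation, the conversion below boils down to the following pattern on the arch side: embed a struct ptdump_state in the arch's own dump state, supply a note_page() callback, describe the virtual address ranges to cover, and hand the walk off to ptdump_walk_pgd(). This is a condensed, illustrative sketch rather than the actual arm64 code; the my_ names and the choice of PAGE_OFFSET as a starting point are assumptions for the example:

#include <linux/ptdump.h>

struct my_ptdump_state {
	struct ptdump_state ptdump;
	/* arch-specific bookkeeping (current prot, start address, ...) */
};

static void my_note_page(struct ptdump_state *pt_st, unsigned long addr,
			 int level, unsigned long val)
{
	/* decode val, compare with the previously seen entry and print a
	 * line when the mapping changes; level < 1 means hole or flush */
}

static void my_dump_kernel_tables(void)
{
	static const struct ptdump_range ranges[] = {
		{ PAGE_OFFSET, ~0UL },	/* kernel portion of the address space */
		{ 0, 0 }		/* terminator expected by the walker */
	};
	struct my_ptdump_state st = {
		.ptdump = {
			.note_page = my_note_page,
			.range = ranges,
		},
	};

	ptdump_walk_pgd(&st.ptdump, &init_mm);
}

ptdump_walk_pgd() takes mmap_sem for read, walks each range until it reaches the {0, 0} terminator, and finishes with a level 0 call to note_page() so the callback can flush its last record.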
Signed-off-by: Steven Price --- arch/arm64/Kconfig | 1 + arch/arm64/Kconfig.debug | 19 +---- arch/arm64/include/asm/ptdump.h | 8 +- arch/arm64/mm/Makefile | 4 +- arch/arm64/mm/dump.c | 117 ++++++++++------------------- arch/arm64/mm/ptdump_debugfs.c | 2 +- drivers/firmware/efi/arm-runtime.c | 2 +- 7 files changed, 48 insertions(+), 105 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 3adcec05b1f6..5a32c87f37c6 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -105,6 +105,7 @@ config ARM64 select GENERIC_IRQ_SHOW select GENERIC_IRQ_SHOW_LEVEL select GENERIC_PCI_IOMAP + select GENERIC_PTDUMP select GENERIC_SCHED_CLOCK select GENERIC_SMP_IDLE_THREAD select GENERIC_STRNCPY_FROM_USER diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug index cf09010d825f..1c906d932d6b 100644 --- a/arch/arm64/Kconfig.debug +++ b/arch/arm64/Kconfig.debug @@ -1,22 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only -config ARM64_PTDUMP_CORE - def_bool n - -config ARM64_PTDUMP_DEBUGFS - bool "Export kernel pagetable layout to userspace via debugfs" - depends on DEBUG_KERNEL - select ARM64_PTDUMP_CORE - select DEBUG_FS - help - Say Y here if you want to show the kernel pagetable layout in a - debugfs file. This information is only useful for kernel developers - who are working in architecture specific areas of the kernel. - It is probably not a good idea to enable this feature in a production - kernel. - - If in doubt, say N. - config PID_IN_CONTEXTIDR bool "Write the current PID to the CONTEXTIDR register" help @@ -42,7 +25,7 @@ config ARM64_RANDOMIZE_TEXT_OFFSET config DEBUG_WX bool "Warn on W+X mappings at boot" - select ARM64_PTDUMP_CORE + select PTDUMP_CORE ---help--- Generate a warning if any W+X mappings are found at boot. diff --git a/arch/arm64/include/asm/ptdump.h b/arch/arm64/include/asm/ptdump.h index 0b8e7269ec82..38187f74e089 100644 --- a/arch/arm64/include/asm/ptdump.h +++ b/arch/arm64/include/asm/ptdump.h @@ -5,7 +5,7 @@ #ifndef __ASM_PTDUMP_H #define __ASM_PTDUMP_H -#ifdef CONFIG_ARM64_PTDUMP_CORE +#ifdef CONFIG_PTDUMP_CORE #include #include @@ -21,15 +21,15 @@ struct ptdump_info { unsigned long base_addr; }; -void ptdump_walk_pgd(struct seq_file *s, struct ptdump_info *info); -#ifdef CONFIG_ARM64_PTDUMP_DEBUGFS +void ptdump_walk(struct seq_file *s, struct ptdump_info *info); +#ifdef CONFIG_PTDUMP_DEBUGFS void ptdump_debugfs_register(struct ptdump_info *info, const char *name); #else static inline void ptdump_debugfs_register(struct ptdump_info *info, const char *name) { } #endif void ptdump_check_wx(void); -#endif /* CONFIG_ARM64_PTDUMP_CORE */ +#endif /* CONFIG_PTDUMP_CORE */ #ifdef CONFIG_DEBUG_WX #define debug_checkwx() ptdump_check_wx() diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 849c1df3d214..d91030f0ffee 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -4,8 +4,8 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ ioremap.o mmap.o pgd.o mmu.o \ context.o proc.o pageattr.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o -obj-$(CONFIG_ARM64_PTDUMP_CORE) += dump.o -obj-$(CONFIG_ARM64_PTDUMP_DEBUGFS) += ptdump_debugfs.o +obj-$(CONFIG_PTDUMP_CORE) += dump.o +obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o KASAN_SANITIZE_physaddr.o += n diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c index 82b3a7fdb4a6..5cc71ad567b4 100644 --- a/arch/arm64/mm/dump.c +++ b/arch/arm64/mm/dump.c @@ -15,6 +15,7 @@ #include #include #include +#include #include 
#include @@ -65,10 +66,11 @@ static const struct addr_marker address_markers[] = { * dumps out a description of the range. */ struct pg_state { + struct ptdump_state ptdump; struct seq_file *seq; const struct addr_marker *marker; unsigned long start_address; - unsigned level; + int level; u64 current_prot; bool check_wx; unsigned long wx_pages; @@ -168,6 +170,10 @@ static struct pg_level pg_level[] = { .name = "PGD", .bits = pte_bits, .num = ARRAY_SIZE(pte_bits), + }, { /* p4d */ + .name = "P4D", + .bits = pte_bits, + .num = ARRAY_SIZE(pte_bits), }, { /* pud */ .name = (CONFIG_PGTABLE_LEVELS > 3) ? "PUD" : "PGD", .bits = pte_bits, @@ -230,11 +236,15 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr) st->wx_pages += (addr - st->start_address) / PAGE_SIZE; } -static void note_page(struct pg_state *st, unsigned long addr, unsigned level, - u64 val) +static void note_page(struct ptdump_state *pt_st, unsigned long addr, int level, + unsigned long val) { + struct pg_state *st = container_of(pt_st, struct pg_state, ptdump); static const char units[] = "KMGTPE"; - u64 prot = val & pg_level[level].mask; + u64 prot = 0; + + if (level >= 0) + prot = val & pg_level[level].mask; if (!st->level) { st->level = level; @@ -282,85 +292,27 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level, } -static void walk_pte(struct pg_state *st, pmd_t *pmdp, unsigned long start, - unsigned long end) -{ - unsigned long addr = start; - pte_t *ptep = pte_offset_kernel(pmdp, start); - - do { - note_page(st, addr, 4, READ_ONCE(pte_val(*ptep))); - } while (ptep++, addr += PAGE_SIZE, addr != end); -} - -static void walk_pmd(struct pg_state *st, pud_t *pudp, unsigned long start, - unsigned long end) -{ - unsigned long next, addr = start; - pmd_t *pmdp = pmd_offset(pudp, start); - - do { - pmd_t pmd = READ_ONCE(*pmdp); - next = pmd_addr_end(addr, end); - - if (pmd_none(pmd) || pmd_sect(pmd)) { - note_page(st, addr, 3, pmd_val(pmd)); - } else { - BUG_ON(pmd_bad(pmd)); - walk_pte(st, pmdp, addr, next); - } - } while (pmdp++, addr = next, addr != end); -} - -static void walk_pud(struct pg_state *st, pgd_t *pgdp, unsigned long start, - unsigned long end) +void ptdump_walk(struct seq_file *s, struct ptdump_info *info) { - unsigned long next, addr = start; - pud_t *pudp = pud_offset(pgdp, start); - - do { - pud_t pud = READ_ONCE(*pudp); - next = pud_addr_end(addr, end); - - if (pud_none(pud) || pud_sect(pud)) { - note_page(st, addr, 2, pud_val(pud)); - } else { - BUG_ON(pud_bad(pud)); - walk_pmd(st, pudp, addr, next); - } - } while (pudp++, addr = next, addr != end); -} + unsigned long end = ~0UL; + struct pg_state st; -static void walk_pgd(struct pg_state *st, struct mm_struct *mm, - unsigned long start) -{ - unsigned long end = (start < TASK_SIZE_64) ? 
TASK_SIZE_64 : 0; - unsigned long next, addr = start; - pgd_t *pgdp = pgd_offset(mm, start); - - do { - pgd_t pgd = READ_ONCE(*pgdp); - next = pgd_addr_end(addr, end); - - if (pgd_none(pgd)) { - note_page(st, addr, 1, pgd_val(pgd)); - } else { - BUG_ON(pgd_bad(pgd)); - walk_pud(st, pgdp, addr, next); - } - } while (pgdp++, addr = next, addr != end); -} + if (info->base_addr < TASK_SIZE_64) + end = TASK_SIZE_64; -void ptdump_walk_pgd(struct seq_file *m, struct ptdump_info *info) -{ - struct pg_state st = { - .seq = m, + st = (struct pg_state){ + .seq = s, .marker = info->markers, + .ptdump = { + .note_page = note_page, + .range = (struct ptdump_range[]){ + {info->base_addr, end}, + {0, 0} + } + } }; - walk_pgd(&st, info->mm, info->base_addr); - - note_page(&st, 0, 0, 0); + ptdump_walk_pgd(&st.ptdump, info->mm); } static void ptdump_initialize(void) @@ -388,10 +340,17 @@ void ptdump_check_wx(void) { -1, NULL}, }, .check_wx = true, + .ptdump = { + .note_page = note_page, + .range = (struct ptdump_range[]) { + {VA_START, ~0UL}, + {0, 0} + } + } }; - walk_pgd(&st, &init_mm, VA_START); - note_page(&st, 0, 0, 0); + ptdump_walk_pgd(&st.ptdump, &init_mm); + if (st.wx_pages || st.uxn_pages) pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n", st.wx_pages, st.uxn_pages); diff --git a/arch/arm64/mm/ptdump_debugfs.c b/arch/arm64/mm/ptdump_debugfs.c index 064163f25592..1f2eae3e988b 100644 --- a/arch/arm64/mm/ptdump_debugfs.c +++ b/arch/arm64/mm/ptdump_debugfs.c @@ -7,7 +7,7 @@ static int ptdump_show(struct seq_file *m, void *v) { struct ptdump_info *info = m->private; - ptdump_walk_pgd(m, info); + ptdump_walk(m, info); return 0; } DEFINE_SHOW_ATTRIBUTE(ptdump); diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c index e2ac5fa5531b..1283685f9c20 100644 --- a/drivers/firmware/efi/arm-runtime.c +++ b/drivers/firmware/efi/arm-runtime.c @@ -27,7 +27,7 @@ extern u64 efi_system_table; -#ifdef CONFIG_ARM64_PTDUMP_DEBUGFS +#if defined(CONFIG_PTDUMP_DEBUGFS) && defined(CONFIG_ARM64) #include static struct ptdump_info efi_ptdump_info = {