From patchwork Thu Mar 28 15:20:45 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 01/20] arc: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:45 +0000
Message-Id: <20190328152104.23106-2-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by
the p?d_large() functions/macros.

For arc, we only have two levels, so only pmd_large() is needed.

CC: Vineet Gupta
CC: linux-snps-arc@lists.infradead.org
Signed-off-by: Steven Price
Acked-by: Vineet Gupta
---
 arch/arc/include/asm/pgtable.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index cf4be70d5892..0edd27bc7018 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -277,6 +277,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 #define pmd_none(x)		(!pmd_val(x))
 #define pmd_bad(x)		((pmd_val(x) & ~PAGE_MASK))
 #define pmd_present(x)		(pmd_val(x))
+#define pmd_large(x)		(pmd_val(x) & _PAGE_HW_SZ)
 #define pmd_clear(xp)		do { pmd_val(*(xp)) = 0; } while (0)
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
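
For context, a minimal sketch of how a page-table walker might consume the
new helper (illustrative only, not part of the posted patch; walk_one_pmd(),
visit_leaf() and walk_pte_level() are hypothetical names, while pmd_none(),
pmd_large() and pmd_val() are the real accessors):

/*
 * Sketch: decide at PMD level whether to report a leaf or descend.
 * pmd_large() returning non-zero means the entry maps memory directly,
 * so there is no lower-level table to walk.
 */
static void walk_one_pmd(pmd_t *pmdp, unsigned long addr)
{
        pmd_t pmd = *pmdp;

        if (pmd_none(pmd))
                return;                         /* nothing mapped here */

        if (pmd_large(pmd)) {
                visit_leaf(addr, pmd_val(pmd)); /* final mapping */
                return;
        }

        walk_pte_level(pmdp, addr);             /* table entry: descend */
}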
From patchwork Thu Mar 28 15:20:46 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 02/20] arm64: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:46 +0000
Message-Id: <20190328152104.23106-3-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by
the p?d_large() functions/macros.

For arm64, we already have p?d_sect() macros which we can reuse for
p?d_large().
CC: Catalin Marinas
CC: Will Deacon
Signed-off-by: Steven Price
---
 arch/arm64/include/asm/pgtable.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index de70c1eabf33..6eef345dbaf4 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -428,6 +428,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 				 PMD_TYPE_TABLE)
 #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
 				 PMD_TYPE_SECT)
+#define pmd_large(pmd)		pmd_sect(pmd)
 
 #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
 #define pud_sect(pud)		(0)
@@ -511,6 +512,7 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
 #define pud_none(pud)		(!pud_val(pud))
 #define pud_bad(pud)		(!(pud_val(pud) & PUD_TABLE_BIT))
 #define pud_present(pud)	pte_present(pud_pte(pud))
+#define pud_large(pud)		pud_sect(pud)
 #define pud_valid(pud)		pte_valid(pud_pte(pud))
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
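
For readers less familiar with the arm64 descriptor format, the reason the
existing p?d_sect() macros already express the "leaf" property is that table
and block descriptors are distinguished by the low type bits. The values
below restate (in simplified form) what arch/arm64/include/asm/pgtable-hwdef.h
defines, purely for context:

/* Low two bits of a PMD/PUD descriptor encode its type. */
#define PMD_TYPE_MASK   (3UL << 0)
#define PMD_TYPE_TABLE  (3UL << 0)      /* points to a next-level table */
#define PMD_TYPE_SECT   (1UL << 0)      /* block ("section") mapping: a leaf */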
From patchwork Thu Mar 28 15:20:47 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 03/20] mips: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:47 +0000
Message-Id: <20190328152104.23106-4-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For mips, we only support large pages on 64 bit. For 64 bit if
_PAGE_HUGE is defined we can simply look for it. When not defined we can
be confident that there are no large pages in existence and fall back on
the generic implementation (added in a later patch) which returns 0.
CC: Ralf Baechle
CC: Paul Burton
CC: James Hogan
CC: linux-mips@vger.kernel.org
Signed-off-by: Steven Price
Acked-by: Paul Burton
---
 arch/mips/include/asm/pgtable-64.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 93a9dce31f25..42162877ac62 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -273,6 +273,10 @@ static inline int pmd_present(pmd_t pmd)
 	return pmd_val(pmd) != (unsigned long) invalid_pte_table;
 }
 
+#ifdef _PAGE_HUGE
+#define pmd_large(pmd)	((pmd_val(pmd) & _PAGE_HUGE) != 0)
+#endif
+
 static inline void pmd_clear(pmd_t *pmdp)
 {
 	pmd_val(*pmdp) = ((unsigned long) invalid_pte_table);
@@ -297,6 +301,10 @@ static inline int pud_present(pud_t pud)
 	return pud_val(pud) != (unsigned long) invalid_pmd_table;
 }
 
+#ifdef _PAGE_HUGE
+#define pud_large(pud)	((pud_val(pud) & _PAGE_HUGE) != 0)
+#endif
+
 static inline void pud_clear(pud_t *pudp)
 {
 	pud_val(*pudp) = ((unsigned long) invalid_pmd_table);
From patchwork Thu Mar 28 15:20:48 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 04/20] powerpc: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:48 +0000
Message-Id: <20190328152104.23106-5-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For powerpc pmd_large() was already implemented, so hoist it out of the
CONFIG_TRANSPARENT_HUGEPAGE condition and implement the other levels.
CC: Benjamin Herrenschmidt
CC: Paul Mackerras
CC: Michael Ellerman
CC: linuxppc-dev@lists.ozlabs.org
CC: kvm-ppc@vger.kernel.org
Signed-off-by: Steven Price
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 581f91be9dd4..f6d1ac8b832e 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -897,6 +897,12 @@ static inline int pud_present(pud_t pud)
 	return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
 }
 
+#define pud_large	pud_large
+static inline int pud_large(pud_t pud)
+{
+	return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
+}
+
 extern struct page *pud_page(pud_t pud);
 extern struct page *pmd_page(pmd_t pmd);
 static inline pte_t pud_pte(pud_t pud)
@@ -940,6 +946,12 @@ static inline int pgd_present(pgd_t pgd)
 	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
 }
 
+#define pgd_large	pgd_large
+static inline int pgd_large(pgd_t pgd)
+{
+	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
+}
+
 static inline pte_t pgd_pte(pgd_t pgd)
 {
 	return __pte_raw(pgd_raw(pgd));
@@ -1093,6 +1105,15 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
 	return pte_access_permitted(pmd_pte(pmd), write);
 }
 
+#define pmd_large	pmd_large
+/*
+ * returns true for pmd migration entries, THP, devmap, hugetlb
+ */
+static inline int pmd_large(pmd_t pmd)
+{
+	return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
 extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
@@ -1119,15 +1140,6 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
 	return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
 }
 
-/*
- * returns true for pmd migration entries, THP, devmap, hugetlb
- * But compile time dependent on THP config
- */
-static inline int pmd_large(pmd_t pmd)
-{
-	return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
-}
-
 static inline pmd_t pmd_mknotpresent(pmd_t pmd)
 {
 	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
From patchwork Thu Mar 28 15:20:49 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 05/20] KVM: PPC: Book3S HV: Remove pmd_is_leaf()
Date: Thu, 28 Mar 2019 15:20:49 +0000
Message-Id: <20190328152104.23106-6-steven.price@arm.com>

Since pmd_large() is now always available, pmd_is_leaf() is redundant.
Replace all uses with calls to pmd_large().
CC: Benjamin Herrenschmidt
CC: Michael Ellerman
CC: Paul Mackerras
CC: kvm-ppc@vger.kernel.org
CC: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Steven Price
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index f55ef071883f..1b57b4e3f819 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -363,12 +363,6 @@ static void kvmppc_pte_free(pte_t *ptep)
 	kmem_cache_free(kvm_pte_cache, ptep);
 }
 
-/* Like pmd_huge() and pmd_large(), but works regardless of config options */
-static inline int pmd_is_leaf(pmd_t pmd)
-{
-	return !!(pmd_val(pmd) & _PAGE_PTE);
-}
-
 static pmd_t *kvmppc_pmd_alloc(void)
 {
 	return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
@@ -460,7 +454,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
 	for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
 		if (!pmd_present(*p))
 			continue;
-		if (pmd_is_leaf(*p)) {
+		if (pmd_large(*p)) {
 			if (full) {
 				pmd_clear(p);
 			} else {
@@ -593,7 +587,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
 	else if (level <= 1)
 		new_pmd = kvmppc_pmd_alloc();
 
-	if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
+	if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_large(*pmd)))
 		new_ptep = kvmppc_pte_alloc();
 
 	/* Check if we might have been invalidated; let the guest retry if so */
@@ -662,7 +656,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
 		new_pmd = NULL;
 	}
 	pmd = pmd_offset(pud, gpa);
-	if (pmd_is_leaf(*pmd)) {
+	if (pmd_large(*pmd)) {
 		unsigned long lgpa = gpa & PMD_MASK;
 
 		/* Check if we raced and someone else has set the same thing */
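
It may help to see the two definitions side by side (reproduced from the hunk
above and from the powerpc patch earlier in this series); on Book3S-64
pmd_val() is essentially the CPU-endian view of pmd_raw(), so both test the
same _PAGE_PTE bit and the substitution is behaviour-preserving:

/* Helper removed by this patch: */
static inline int pmd_is_leaf(pmd_t pmd)
{
        return !!(pmd_val(pmd) & _PAGE_PTE);
}

/* Definition it is replaced by (added in patch 04/20): */
static inline int pmd_large(pmd_t pmd)
{
        return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
}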
From patchwork Thu Mar 28 15:20:50 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 06/20] riscv: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:50 +0000
Message-Id: <20190328152104.23106-7-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For riscv a page is large when it has a read, write or execute bit
set on it.
CC: Palmer Dabbelt
CC: Albert Ou
CC: linux-riscv@lists.infradead.org
Signed-off-by: Steven Price
---
 arch/riscv/include/asm/pgtable-64.h | 7 +++++++
 arch/riscv/include/asm/pgtable.h    | 7 +++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 7aa0ea9bd8bb..73747d9d7c66 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -51,6 +51,13 @@ static inline int pud_bad(pud_t pud)
 	return !pud_present(pud);
 }
 
+#define pud_large	pud_large
+static inline int pud_large(pud_t pud)
+{
+	return pud_present(pud)
+		&& (pud_val(pud) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
 	*pudp = pud;
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 1141364d990e..9570883c79e7 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -111,6 +111,13 @@ static inline int pmd_bad(pmd_t pmd)
 	return !pmd_present(pmd);
 }
 
+#define pmd_large	pmd_large
+static inline int pmd_large(pmd_t pmd)
+{
+	return pmd_present(pmd)
+		&& (pmd_val(pmd) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
+}
+
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 	*pmdp = pmd;
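
For context (not part of the patch): in the RISC-V page-table format an entry
that merely points to a next-level table has the R, W and X permission bits
all clear, so a present entry with any of them set can only be a leaf
mapping. A hypothetical restatement of the test added above:

/* Leaf if any permission bit is set; table pointers have R = W = X = 0. */
static inline int riscv_entry_is_leaf(unsigned long entry)
{
        return (entry & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)) != 0;
}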
From patchwork Thu Mar 28 15:20:51 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 07/20] s390: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:51 +0000
Message-Id: <20190328152104.23106-8-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For s390, pud_large() and pmd_large() are already implemented as static
inline functions. Add a #define so we don't pick up the generic version
introduced in a later patch.
CC: Martin Schwidefsky
CC: Heiko Carstens
CC: linux-s390@vger.kernel.org
Signed-off-by: Steven Price
---
 arch/s390/include/asm/pgtable.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 76dc344edb8c..3ad4c69e1f2d 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -679,6 +679,7 @@ static inline int pud_none(pud_t pud)
 	return pud_val(pud) == _REGION3_ENTRY_EMPTY;
 }
 
+#define pud_large	pud_large
 static inline int pud_large(pud_t pud)
 {
 	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) != _REGION_ENTRY_TYPE_R3)
@@ -696,6 +697,7 @@ static inline unsigned long pud_pfn(pud_t pud)
 	return (pud_val(pud) & origin_mask) >> PAGE_SHIFT;
 }
 
+#define pmd_large	pmd_large
 static inline int pmd_large(pmd_t pmd)
 {
 	return (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0;
From patchwork Thu Mar 28 15:20:52 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 08/20] sparc: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:52 +0000
Message-Id: <20190328152104.23106-9-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For sparc 64 bit, pmd_large() and pud_large() are already provided, so
add #defines to prevent the generic versions (added in a later patch)
from being used.

CC: "David S. Miller"
CC: sparclinux@vger.kernel.org
Signed-off-by: Steven Price
Acked-by: David S. Miller
---
 arch/sparc/include/asm/pgtable_64.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 1393a8ac596b..f502e937c8fe 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -713,6 +713,7 @@ static inline unsigned long pte_special(pte_t pte)
 	return pte_val(pte) & _PAGE_SPECIAL;
 }
 
+#define pmd_large	pmd_large
 static inline unsigned long pmd_large(pmd_t pmd)
 {
 	pte_t pte = __pte(pmd_val(pmd));
@@ -894,6 +895,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
 #define pgd_present(pgd)	(pgd_val(pgd) != 0U)
 #define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0UL)
 
+#define pud_large	pud_large
 static inline unsigned long pud_large(pud_t pud)
 {
 	pte_t pte = __pte(pud_val(pud));
From patchwork Thu Mar 28 15:20:53 2019
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 09/20] x86: mm: Add p?d_large() definitions
Date: Thu, 28 Mar 2019 15:20:53 +0000
Message-Id: <20190328152104.23106-10-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For x86 we already have static inline functions, so simply add #defines
to prevent the generic versions (added in a later patch) from being
picked up.

We also need to add corresponding #undefs in dump_pagetables.c. This
code will be removed when x86 is switched over to using the generic
pagewalk code in a later patch.
Signed-off-by: Steven Price
---
 arch/x86/include/asm/pgtable.h | 5 +++++
 arch/x86/mm/dump_pagetables.c  | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2779ace16d23..0dd04cf6ebeb 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -222,6 +222,7 @@ static inline unsigned long pgd_pfn(pgd_t pgd)
 	return (pgd_val(pgd) & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
+#define p4d_large p4d_large
 static inline int p4d_large(p4d_t p4d)
 {
 	/* No 512 GiB pages yet */
@@ -230,6 +231,7 @@ static inline int p4d_large(p4d_t p4d)
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 
+#define pmd_large pmd_large
 static inline int pmd_large(pmd_t pte)
 {
 	return pmd_flags(pte) & _PAGE_PSE;
@@ -857,6 +859,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
 	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
 }
 
+#define pud_large pud_large
 static inline int pud_large(pud_t pud)
 {
 	return (pud_val(pud) & (_PAGE_PSE | _PAGE_PRESENT)) ==
@@ -868,6 +871,7 @@ static inline int pud_bad(pud_t pud)
 	return (pud_flags(pud) & ~(_KERNPG_TABLE | _PAGE_USER)) != 0;
 }
 #else
+#define pud_large pud_large
 static inline int pud_large(pud_t pud)
 {
 	return 0;
@@ -1213,6 +1217,7 @@ static inline bool pgdp_maps_userspace(void *__ptr)
 	return (((ptr & ~PAGE_MASK) / sizeof(pgd_t)) < PGD_KERNEL_START);
 }
 
+#define pgd_large pgd_large
 static inline int pgd_large(pgd_t pgd) { return 0; }
 
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index ee8f8ab46941..ca270fb00805 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -432,6 +432,7 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
 #else
 #define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p)
 
+#undef pud_large
 #define pud_large(a)	pmd_large(__pmd(pud_val(a)))
 #define pud_none(a)	pmd_none(__pmd(pud_val(a)))
 #endif
@@ -467,6 +468,7 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
 #else
 #define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p)
 
+#undef p4d_large
 #define p4d_large(a)	pud_large(__pud(p4d_val(a)))
 #define p4d_none(a)	pud_none(__pud(p4d_val(a)))
 #endif
@@ -501,6 +503,7 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
 	}
 }
 
+#undef pgd_large
 #define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a))))
 #define pgd_none(a)  (pgtable_l5_enabled() ? pgd_none(a) : p4d_none(__p4d(pgd_val(a))))
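
A short aside on why the #undef lines are needed (explanatory sketch only,
not additional patch content): the header now accompanies each inline helper
with an object-like macro of the same name, while dump_pagetables.c overrides
some of those names with function-like macros for configurations with folded
levels. Redefining an existing macro with a different body draws a
preprocessor redefinition warning, so the old definition must be removed
first:

/* In asm/pgtable.h (after this patch): mark the name as defined so the
 * generic fallback added in the next patch is skipped. */
#define pud_large pud_large
static inline int pud_large(pud_t pud) { /* ... */ }

/* In dump_pagetables.c, with the PUD level folded: replace it with a
 * function-like macro; the #undef avoids redefining a live macro. */
#undef pud_large
#define pud_large(a)    pmd_large(__pmd(pud_val(a)))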
From patchwork Thu Mar 28 15:20:54 2019
linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:22:35 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 958871684; Thu, 28 Mar 2019 08:22:32 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4EB6A3F557; Thu, 28 Mar 2019 08:22:29 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 10/20] mm: Add generic p?d_large() macros Date: Thu, 28 Mar 2019 15:20:54 +0000 Message-Id: <20190328152104.23106-11-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_152233_627157_C8EB54D9 X-CRM114-Status: GOOD ( 10.29 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Exposing the pud/pgd levels of the page tables to walk_page_range() means we may come across the exotic large mappings that come with large areas of contiguous memory (such as the kernel's linear map). For architectures that don't provide p?d_large() macros, provide generic does nothing defaults. Signed-off-by: Steven Price --- include/asm-generic/pgtable.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index fa782fba51ee..9c5d0f73db67 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -1186,4 +1186,23 @@ static inline bool arch_has_pfn_modify_check(void) #define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED) #endif +/* + * p?d_large() - true if this entry is a final mapping to a physical address. + * This differs from p?d_huge() by the fact that they are always available (if + * the architecture supports large pages at the appropriate level) even + * if CONFIG_HUGETLB_PAGE is not defined. 
+ */ +#ifndef pgd_large +#define pgd_large(x) 0 +#endif +#ifndef p4d_large +#define p4d_large(x) 0 +#endif +#ifndef pud_large +#define pud_large(x) 0 +#endif +#ifndef pmd_large +#define pmd_large(x) 0 +#endif + #endif /* _ASM_GENERIC_PGTABLE_H */ From patchwork Thu Mar 28 15:20:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875233 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8F6271708 for ; Thu, 28 Mar 2019 15:39:01 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 71C9C28B08 for ; Thu, 28 Mar 2019 15:39:01 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 65E4428BE1; Thu, 28 Mar 2019 15:39:01 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id EE6AB28B08 for ; Thu, 28 Mar 2019 15:39:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=5IcVW+fLtvpnk9V6KAVYPi12fnFmA7qp/9p3wikACvY=; b=PMU91EAUa+KQVl nzjS5zSnR8hcHlRQn7mTNzDc2oU7nVtXvKmSMl9afjTnJ5oHAZvSMcQhdGtmR9jul5Q4GRKWRFc7a csIzPq31wi13h7mYcDhBZxC8mT/2k7se4IISXN5/8LbsF2CP63Ms171tKXhOfUdDclMUe50B8qoY+ 29nquA2MGULSBx7ja27kpOSHF8XtfIRp8pw7BT9MHXfly702ul1j21qxVXc0HD8y9ZDlqxen9W/g3 s9xJkOJAtvu9qxLSrDXzXZ950t7E/RsGjNnhHzV1AqgXJ/aCRLVMu3+O/6D9yZne1I/pmz0elKI06 M0/vMnmwka2PQFCvbRjQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9X7H-0007HO-8P; Thu, 28 Mar 2019 15:38:55 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9X74-0006fT-PW for linux-arm-kernel@bombadil.infradead.org; Thu, 28 Mar 2019 15:38:42 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=ahKSq6BT07JiMm6WJgk15VRtm9UPqW7p0TN43pTC2F8=; b=mhl7b8EPlI1t3yf74Q3Glmz88Y EsSCVaCP9XILdwxle8oETqOG9SDUEazJg4VNYOmOdbQmmpuUl3pyzYpO0pawMbiniZdJRM8uYDfDc ap3NNDAh/CMpUS2sG6AJL/b8wCN0pocHzjGNltaio/p8ShW2CYYlYSHSE5ktPtwpvZvgk8Ln7eWai 0HgNDemIWAkQJsJ320IdvW2uOsY2EXtcbffjHyG6SWQjSG0eKeG3FPovekNEBRQDrqhcwF1wGwLuD 
loIyuZ4C4Duy8Hh9JmiZIWUbEd7mp4AHrHQLpmaNvQ3KEPkNAgLP5/vaanL+Dc+vSPvoi0M+cwckv DIhZQJXg==; Received: from foss.arm.com ([217.140.101.70]) by casper.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9WrU-0001fv-F7 for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:22:41 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 28EF7169E; Thu, 28 Mar 2019 08:22:36 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D662E3F557; Thu, 28 Mar 2019 08:22:32 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 11/20] mm: pagewalk: Add p4d_entry() and pgd_entry() Date: Thu, 28 Mar 2019 15:20:55 +0000 Message-Id: <20190328152104.23106-12-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_152236_925044_321B1A08 X-CRM114-Status: GOOD ( 15.99 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410 ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were no users. We're about to add users so reintroduce them, along with p4d_entry() as we now have 5 levels of tables. Note that commit a00cc7d9dd93d66a ("mm, x86: add support for PUD-sized transparent hugepages") already re-added pud_entry() but with different semantics to the other callbacks. Since there have never been upstream users of this, revert the semantics back to match the other callbacks. This means pud_entry() is called for all entries, not just transparent huge pages. Signed-off-by: Steven Price --- include/linux/mm.h | 15 +++++++++------ mm/pagewalk.c | 27 ++++++++++++++++----------- 2 files changed, 25 insertions(+), 17 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 76769749b5a5..f6de08c116e6 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1367,15 +1367,14 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma, /** * mm_walk - callbacks for walk_page_range - * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry - * this handler should only handle pud_trans_huge() puds. - * the pmd_entry or pte_entry callbacks will be used for - * regular PUDs. 
- * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry + * @pgd_entry: if set, called for each non-empty PGD (top-level) entry + * @p4d_entry: if set, called for each non-empty P4D entry + * @pud_entry: if set, called for each non-empty PUD entry + * @pmd_entry: if set, called for each non-empty PMD entry * this handler is required to be able to handle * pmd_trans_huge() pmds. They may simply choose to * split_huge_page() instead of handling it explicitly. - * @pte_entry: if set, called for each non-empty PTE (4th-level) entry + * @pte_entry: if set, called for each non-empty PTE (lowest-level) entry * @pte_hole: if set, called for each hole at all levels * @hugetlb_entry: if set, called for each hugetlb entry * @test_walk: caller specific callback function to determine whether @@ -1390,6 +1389,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma, * (see the comment on walk_page_range() for more details) */ struct mm_walk { + int (*pgd_entry)(pgd_t *pgd, unsigned long addr, + unsigned long next, struct mm_walk *walk); + int (*p4d_entry)(p4d_t *p4d, unsigned long addr, + unsigned long next, struct mm_walk *walk); int (*pud_entry)(pud_t *pud, unsigned long addr, unsigned long next, struct mm_walk *walk); int (*pmd_entry)(pmd_t *pmd, unsigned long addr, diff --git a/mm/pagewalk.c b/mm/pagewalk.c index c3084ff2569d..98373a9f88b8 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, } if (walk->pud_entry) { - spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma); - - if (ptl) { - err = walk->pud_entry(pud, addr, next, walk); - spin_unlock(ptl); - if (err) - break; - continue; - } + err = walk->pud_entry(pud, addr, next, walk); + if (err) + break; } split_huge_pud(walk->vma, pud, addr); @@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, break; continue; } - if (walk->pmd_entry || walk->pte_entry) + if (walk->p4d_entry) { + err = walk->p4d_entry(p4d, addr, next, walk); + if (err) + break; + } + if (walk->pud_entry || walk->pmd_entry || walk->pte_entry) err = walk_pud_range(p4d, addr, next, walk); if (err) break; @@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end, break; continue; } - if (walk->pmd_entry || walk->pte_entry) + if (walk->pgd_entry) { + err = walk->pgd_entry(pgd, addr, next, walk); + if (err) + break; + } + if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry || + walk->pte_entry) err = walk_p4d_range(pgd, addr, next, walk); if (err) break; From patchwork Thu Mar 28 15:20:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875231 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 590F1922 for ; Thu, 28 Mar 2019 15:38:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3DD0328AD5 for ; Thu, 28 Mar 2019 15:38:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 30D7128BB3; Thu, 28 Mar 2019 15:38:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED 
autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id D443328AD5 for ; Thu, 28 Mar 2019 15:38:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=jFOakWldWxP+moj5o6adakYeA92DKwxMLsWSodlHc+E=; b=YbcDOlM/dWClyn k886Sstz8W5J+RmoWWkuKKb39mp87IOJ8y2n+8LEJEx1IJ6uy1kRb+vA8kfbYzLKEtxCXNwCfx1pQ t91S22Nb86ZjTimhp3jfkNKfTbyW74lpXIwt+5QQn1ZuiNCsPqMBHoSXFrTW3EN8bM3J36qCRuEOX L4uR+5F7cWYF+LDSsc3kQbMHHoOYdtouZW6j0nbgx1t857xWM57nnCCfgdIBGyvF8gQbbrqiCUyYM 9M1+oFNcatLoNsZKyz0xzyNGFfpUR5va2M8tHyP5ZsKCGSKVtlctQcblRSD4DGIXAFhDR4AbKvAnk mNHA00f96Ln2hNPJQrgQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9X75-00072o-J6; Thu, 28 Mar 2019 15:38:43 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9X74-0006d2-37 for linux-arm-kernel@bombadil.infradead.org; Thu, 28 Mar 2019 15:38:42 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=NXbLxkVEjkUC98nDqbwMu9XwBnfr8Fy+W9xUz87hsi8=; b=Vc9EBWozp50hAqoNi6dpluNp7Z BkJSUZutknwzqh8/YxHTTlt95Dwe5tWtwxoLbmiBmvtZQ9ckdtzb9+VMfthZERgPAUI0N5cndhJLm u5XtH6UScEIGqfisM9/2ucWObMAFfSS220hpoNwvyfOzsXmWFuLBNiesaPO9t0dYGY31OucyjiLvO pSj0WGgttlfFUZiuNjTcHfbFeoouzcxQu9zXm3OVwuyfJx9YwvygFTgxz+oO68MplZv5Sw38N0Qgf VSQkHRHuUgdY5bYaq33Ez79OnkBUDMzf7UGAMlKJXmfIu/CyO4MlX59EKABKOHLzsftBSpgk563Ik NTzwSXow==; Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by casper.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9WrY-0001jP-St for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:22:43 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AF4B216A3; Thu, 28 Mar 2019 08:22:39 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 687DF3F557; Thu, 28 Mar 2019 08:22:36 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 12/20] mm: pagewalk: Allow walking without vma Date: Thu, 28 Mar 2019 15:20:56 +0000 Message-Id: <20190328152104.23106-13-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_152241_085635_D13E5707 X-CRM114-Status: GOOD ( 12.46 ) 
X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Since 48684a65b4e3: "mm: pagewalk: fix misbehavior of walk_page_range for vma(VM_PFNMAP)", page_table_walk() will report any kernel area as a hole, because it lacks a vma. This means each arch has re-implemented page table walking when needed, for example in the per-arch ptdump walker. Remove the requirement to have a vma except when trying to split huge pages. Signed-off-by: Steven Price --- mm/pagewalk.c | 25 +++++++++++++++++-------- 1 file changed, 17 insertions(+), 8 deletions(-) diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 98373a9f88b8..dac0c848b458 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -36,7 +36,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, do { again: next = pmd_addr_end(addr, end); - if (pmd_none(*pmd) || !walk->vma) { + if (pmd_none(*pmd)) { if (walk->pte_hole) err = walk->pte_hole(addr, next, walk); if (err) @@ -59,9 +59,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, if (!walk->pte_entry) continue; - split_huge_pmd(walk->vma, pmd, addr); - if (pmd_trans_unstable(pmd)) - goto again; + if (walk->vma) { + split_huge_pmd(walk->vma, pmd, addr); + if (pmd_trans_unstable(pmd)) + goto again; + } else if (pmd_large(*pmd)) { + continue; + } + err = walk_pte_range(pmd, addr, next, walk); if (err) break; @@ -81,7 +86,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, do { again: next = pud_addr_end(addr, end); - if (pud_none(*pud) || !walk->vma) { + if (pud_none(*pud)) { if (walk->pte_hole) err = walk->pte_hole(addr, next, walk); if (err) @@ -95,9 +100,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, break; } - split_huge_pud(walk->vma, pud, addr); - if (pud_none(*pud)) - goto again; + if (walk->vma) { + split_huge_pud(walk->vma, pud, addr); + if (pud_none(*pud)) + goto again; + } else if (pud_large(*pud)) { + continue; + } if (walk->pmd_entry || walk->pte_entry) err = walk_pmd_range(pud, addr, next, walk); From patchwork Thu Mar 28 15:20:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875177 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1BCC41390 for ; Thu, 28 Mar 2019 15:24:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 023DA28A46 for ; Thu, 28 Mar 2019 15:24:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EA82928A84; Thu, 28 Mar 2019 15:24:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 
required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 937AE28A46 for ; Thu, 28 Mar 2019 15:24:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=AYWp7weqwYXUbSw2XkULYH0EHbF9J4g8P9NbCjs1PZA=; b=DuoVosqpy32lHu IwQMKWWtFHOjd9y/9G1TEDiNxDp9hmOaOVdUZgRy5ImrWBSHSWhpOqwUv9RTqSYVjBZXOH4Fx9Oa4 dCo/dkXTUFL+6DLFBW7elIstJsrbzn9TTwpTC2/VeP8xmlMhq+NOekmGqgD8OhOM7itnsGsBzJw7f HRqG/8b+C69Rs11+gFMGADJAYZiIY9xY9SloCwIAEUrrUCuqLS4lK7X8YoS7sEaLb7uFHOv8cIu4t jTFmVlRBj5yT2nFgQ9pjV2fjbiThL2ulsYWw6kj9HVMnl9Ge2n2k96zKUC7+B/aPZDpBg0I6dh6T2 H0v3MEc/GbL9/7fvsLsA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wsr-0005XY-Vl; Thu, 28 Mar 2019 15:24:01 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wrc-0003xG-Sz for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:22:52 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 428BE1715; Thu, 28 Mar 2019 08:22:43 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EF9053F557; Thu, 28 Mar 2019 08:22:39 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 13/20] mm: pagewalk: Add test_p?d callbacks Date: Thu, 28 Mar 2019 15:20:57 +0000 Message-Id: <20190328152104.23106-14-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_082245_021327_E0FCFF51 X-CRM114-Status: GOOD ( 15.63 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP It is useful to be able to skip parts of the page table tree even when walking without VMAs. Add test_p?d callbacks similar to test_walk but which are called just before a table at that level is walked. If the callback returns non-zero then the entire table is skipped. 
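For illustration only, a walker might use the new hook as in the sketch below; the callback name and the window carried in ->private are invented for this example and are not part of the posted patch:

/*
 * Hypothetical user of the new test_pud hook: skip any PUD table that
 * lies entirely outside the address window the walker cares about.
 * Return 1 to skip the whole table, 0 to walk it, <0 to abort the walk.
 */
static int skip_outside_window_pud(unsigned long addr, unsigned long next,
                                   pud_t *pud_start, struct mm_walk *walk)
{
        unsigned long *window = walk->private;  /* { start, end } - made up */

        if (next <= window[0] || addr >= window[1])
                return 1;       /* nothing of interest: skip this table */

        return 0;               /* otherwise descend into it as normal */
}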
Signed-off-by: Steven Price --- include/linux/mm.h | 11 +++++++++++ mm/pagewalk.c | 24 ++++++++++++++++++++++++ 2 files changed, 35 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index f6de08c116e6..a4c1ed255455 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1382,6 +1382,11 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma, * value means "do page table walk over the current vma," * and a negative one means "abort current page table walk * right now." 1 means "skip the current vma." + * @test_pmd: similar to test_walk(), but called for every pmd. + * @test_pud: similar to test_walk(), but called for every pud. + * @test_p4d: similar to test_walk(), but called for every p4d. + * Returning 0 means walk this part of the page tables, + * returning 1 means to skip this range. * @mm: mm_struct representing the target process of page table walk * @vma: vma currently walked (NULL if walking outside vmas) * @private: private data for callbacks' usage @@ -1406,6 +1411,12 @@ struct mm_walk { struct mm_walk *walk); int (*test_walk)(unsigned long addr, unsigned long next, struct mm_walk *walk); + int (*test_pmd)(unsigned long addr, unsigned long next, + pmd_t *pmd_start, struct mm_walk *walk); + int (*test_pud)(unsigned long addr, unsigned long next, + pud_t *pud_start, struct mm_walk *walk); + int (*test_p4d)(unsigned long addr, unsigned long next, + p4d_t *p4d_start, struct mm_walk *walk); struct mm_struct *mm; struct vm_area_struct *vma; void *private; diff --git a/mm/pagewalk.c b/mm/pagewalk.c index dac0c848b458..231655db1295 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -32,6 +32,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, unsigned long next; int err = 0; + if (walk->test_pmd) { + err = walk->test_pmd(addr, end, pmd_offset(pud, 0), walk); + if (err < 0) + return err; + if (err > 0) + return 0; + } + pmd = pmd_offset(pud, addr); do { again: @@ -82,6 +90,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, unsigned long next; int err = 0; + if (walk->test_pud) { + err = walk->test_pud(addr, end, pud_offset(p4d, 0), walk); + if (err < 0) + return err; + if (err > 0) + return 0; + } + pud = pud_offset(p4d, addr); do { again: @@ -124,6 +140,14 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, unsigned long next; int err = 0; + if (walk->test_p4d) { + err = walk->test_p4d(addr, end, p4d_offset(pgd, 0), walk); + if (err < 0) + return err; + if (err > 0) + return 0; + } + p4d = p4d_offset(pgd, addr); do { next = p4d_addr_end(addr, end); From patchwork Thu Mar 28 15:20:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875227 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6F8DD1708 for ; Thu, 28 Mar 2019 15:37:26 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4DF5128BAF for ; Thu, 28 Mar 2019 15:37:26 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3F9EA28BE1; Thu, 28 Mar 2019 15:37:26 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, 
DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id CB31C28BAF for ; Thu, 28 Mar 2019 15:37:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+lbbeMPLBGXcaaWsNP2pIyysY01inGu5nuYKWolvJ90=; b=RqZN7YsiU5dJ/4 poDNT6S/KxiXAioNQ/dnUM2oTp+3WUp13ue3Su3gVNFxyGGGYwo1QVh8IaHMBuIznudgFe0xyNh2e 41mV3f6uanKgbcBqBnChjnCBCDZ6YA9DoajgppBeNsq4Q112lFZfBctLN0bn04IhmO3D19YHEwcUD NzpiUQ6RNDOIyFbvYi33HHjdtzCxSQHfkSD01Qp7MCP7j1esNA3Tk1SLQIsx7FCA2Jv8TWUdoqB0E qxIE7qFKqM0+nuIc4io2Feuz69HFKPTNzayz+abPMck/J+UFxUQFKl5egWevl80WC7TsdI4sNNHkb KyAVfRlaDBfEVMyH5TyQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9X5o-0006OP-OH; Thu, 28 Mar 2019 15:37:24 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wrg-00040p-5E for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:22:55 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C9FA6165C; Thu, 28 Mar 2019 08:22:46 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 836DA3F557; Thu, 28 Mar 2019 08:22:43 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 14/20] arm64: mm: Convert mm/dump.c to use walk_page_range() Date: Thu, 28 Mar 2019 15:20:58 +0000 Message-Id: <20190328152104.23106-15-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_082248_345587_6C51F76D X-CRM114-Status: GOOD ( 14.94 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Now walk_page_range() can walk kernel page tables, we can switch the arm64 ptdump code over to using it, simplifying the code. 
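To make the shape of the conversion concrete, here is a minimal sketch (not taken from this patch) of walking a kernel address range with the reworked walker; the counting callback, the helper name and the use of init_mm are assumptions made for the example:

#include <linux/mm.h>

static int count_present_pte(pte_t *pte, unsigned long addr,
                             unsigned long next, struct mm_walk *walk)
{
        unsigned long *count = walk->private;

        if (pte_present(READ_ONCE(*pte)))
                (*count)++;

        return 0;
}

static unsigned long count_kernel_pages(unsigned long start, unsigned long end)
{
        unsigned long count = 0;
        struct mm_walk walk = {
                .pte_entry = count_present_pte,
                .mm        = &init_mm,
                .private   = &count,
        };

        /* walk_page_range() expects the mm's mmap_sem to be held */
        down_read(&init_mm.mmap_sem);
        walk_page_range(start, end, &walk);
        up_read(&init_mm.mmap_sem);

        return count;
}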
Signed-off-by: Steven Price --- arch/arm64/mm/dump.c | 117 ++++++++++++++++++++++--------------------- 1 file changed, 59 insertions(+), 58 deletions(-) diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c index 14fe23cd5932..ea20c1213498 100644 --- a/arch/arm64/mm/dump.c +++ b/arch/arm64/mm/dump.c @@ -72,7 +72,7 @@ struct pg_state { struct seq_file *seq; const struct addr_marker *marker; unsigned long start_address; - unsigned level; + int level; u64 current_prot; bool check_wx; unsigned long wx_pages; @@ -234,11 +234,14 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr) st->wx_pages += (addr - st->start_address) / PAGE_SIZE; } -static void note_page(struct pg_state *st, unsigned long addr, unsigned level, +static void note_page(struct pg_state *st, unsigned long addr, int level, u64 val) { static const char units[] = "KMGTPE"; - u64 prot = val & pg_level[level].mask; + u64 prot = 0; + + if (level >= 0) + prot = val & pg_level[level].mask; if (!st->level) { st->level = level; @@ -286,73 +289,71 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level, } -static void walk_pte(struct pg_state *st, pmd_t *pmdp, unsigned long start, - unsigned long end) +static int pud_entry(pud_t *pud, unsigned long addr, + unsigned long next, struct mm_walk *walk) { - unsigned long addr = start; - pte_t *ptep = pte_offset_kernel(pmdp, start); + struct pg_state *st = walk->private; + pud_t val = READ_ONCE(*pud); + + if (pud_table(val)) + return 0; + + note_page(st, addr, 2, pud_val(val)); - do { - note_page(st, addr, 4, READ_ONCE(pte_val(*ptep))); - } while (ptep++, addr += PAGE_SIZE, addr != end); + return 0; } -static void walk_pmd(struct pg_state *st, pud_t *pudp, unsigned long start, - unsigned long end) +static int pmd_entry(pmd_t *pmd, unsigned long addr, + unsigned long next, struct mm_walk *walk) { - unsigned long next, addr = start; - pmd_t *pmdp = pmd_offset(pudp, start); - - do { - pmd_t pmd = READ_ONCE(*pmdp); - next = pmd_addr_end(addr, end); - - if (pmd_none(pmd) || pmd_sect(pmd)) { - note_page(st, addr, 3, pmd_val(pmd)); - } else { - BUG_ON(pmd_bad(pmd)); - walk_pte(st, pmdp, addr, next); - } - } while (pmdp++, addr = next, addr != end); + struct pg_state *st = walk->private; + pmd_t val = READ_ONCE(*pmd); + + if (pmd_table(val)) + return 0; + + note_page(st, addr, 3, pmd_val(val)); + + return 0; } -static void walk_pud(struct pg_state *st, pgd_t *pgdp, unsigned long start, - unsigned long end) +static int pte_entry(pte_t *pte, unsigned long addr, + unsigned long next, struct mm_walk *walk) { - unsigned long next, addr = start; - pud_t *pudp = pud_offset(pgdp, start); - - do { - pud_t pud = READ_ONCE(*pudp); - next = pud_addr_end(addr, end); - - if (pud_none(pud) || pud_sect(pud)) { - note_page(st, addr, 2, pud_val(pud)); - } else { - BUG_ON(pud_bad(pud)); - walk_pmd(st, pudp, addr, next); - } - } while (pudp++, addr = next, addr != end); + struct pg_state *st = walk->private; + pte_t val = READ_ONCE(*pte); + + note_page(st, addr, 4, pte_val(val)); + + return 0; +} + +static int pte_hole(unsigned long addr, unsigned long next, + struct mm_walk *walk) +{ + struct pg_state *st = walk->private; + + note_page(st, addr, -1, 0); + + return 0; } static void walk_pgd(struct pg_state *st, struct mm_struct *mm, - unsigned long start) + unsigned long start) { - unsigned long end = (start < TASK_SIZE_64) ? 
TASK_SIZE_64 : 0; - unsigned long next, addr = start; - pgd_t *pgdp = pgd_offset(mm, start); - - do { - pgd_t pgd = READ_ONCE(*pgdp); - next = pgd_addr_end(addr, end); - - if (pgd_none(pgd)) { - note_page(st, addr, 1, pgd_val(pgd)); - } else { - BUG_ON(pgd_bad(pgd)); - walk_pud(st, pgdp, addr, next); - } - } while (pgdp++, addr = next, addr != end); + struct mm_walk walk = { + .mm = mm, + .private = st, + .pud_entry = pud_entry, + .pmd_entry = pmd_entry, + .pte_entry = pte_entry, + .pte_hole = pte_hole + }; + down_read(&mm->mmap_sem); + walk_page_range(start, start | (((unsigned long)PTRS_PER_PGD << + PGDIR_SHIFT) - 1), + &walk); + up_read(&mm->mmap_sem); } void ptdump_walk_pgd(struct seq_file *m, struct ptdump_info *info) From patchwork Thu Mar 28 15:20:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875179 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 18E5D1390 for ; Thu, 28 Mar 2019 15:24:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 00084284AF for ; Thu, 28 Mar 2019 15:24:28 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E842F28906; Thu, 28 Mar 2019 15:24:28 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 838E8284AF for ; Thu, 28 Mar 2019 15:24:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=BnFdb/6yCoYhIdwXROZS6wvIpuQ18YVe6MQtawP5RF8=; b=BlsbTTlTg6NuFZ +l3Wzyjsrm4ietLsBPMuxtJMudn4ItARHrV6tYRrP+YXFzRwA9cr1rJ3QsrQAS2wpqX0eZm3DyvJX zimfFneFlVqm9gOURDavuW5mGIYMT6LcX+6zWkQQHXO/rFgBSfD0BwOlEQrlVpczId9x5dwaBNhtx hNVPVy5VlexWYdgXaHgKhy2P0Td5GPoV2qWUAGNPBn18xbmthK1wtFwalX7joqVc0vm9wY7GZtQze 8diQSV9v9nHA8U1+byPJ4fZs+/yPzdR2x7cLHXMoUOH/YjGL7ePkcHMAtnmHpNkCsW5ZKHrXB5ifQ 7TKEyFXmno/uPFIi6VwA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9WtF-00065r-CG; Thu, 28 Mar 2019 15:24:25 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wrj-00043d-3I for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:22:57 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5D57815AB; Thu, 28 Mar 2019 08:22:50 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com 
(Postfix) with ESMTPSA id 16D333F557; Thu, 28 Mar 2019 08:22:46 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 15/20] x86: mm: Don't display pages which aren't present in debugfs Date: Thu, 28 Mar 2019 15:20:59 +0000 Message-Id: <20190328152104.23106-16-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_082251_296463_DF929F80 X-CRM114-Status: GOOD ( 14.41 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP For the /sys/kernel/debug/page_tables/ files, rather than outputing a mostly empty line when a block of memory isn't present just skip the line. This keeps the output shorter and will help with a future change switching to using the generic page walk code as we no longer care about the 'level' that the page table holes are at. Signed-off-by: Steven Price --- arch/x86/mm/dump_pagetables.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index ca270fb00805..e2b53db92c34 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -304,8 +304,8 @@ static void note_page(struct seq_file *m, struct pg_state *st, /* * Now print the actual finished series */ - if (!st->marker->max_lines || - st->lines < st->marker->max_lines) { + if ((cur & _PAGE_PRESENT) && (!st->marker->max_lines || + st->lines < st->marker->max_lines)) { pt_dump_seq_printf(m, st->to_dmesg, "0x%0*lx-0x%0*lx ", width, st->start_address, @@ -321,7 +321,8 @@ static void note_page(struct seq_file *m, struct pg_state *st, printk_prot(m, st->current_prot, st->level, st->to_dmesg); } - st->lines++; + if (cur & _PAGE_PRESENT) + st->lines++; /* * We print markers for special areas of address space, From patchwork Thu Mar 28 15:21:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875183 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D30C314DE for ; Thu, 28 Mar 2019 15:25:16 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B24F9284AF for ; Thu, 28 Mar 2019 15:25:16 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A279C28AD5; Thu, 28 Mar 2019 15:25:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED 
autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 151C3284AF for ; Thu, 28 Mar 2019 15:25:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=kzEh2kmua2EqH7ymp/LD5IkbIRvdKt09JocvbRIjD9o=; b=JObEEKskjYPhbk /fYlpATATSOkzFAIA+xZ/FqHhsCOj6dXpShQuCGqFz3T4wK0d75dMDckTQJJcYQgIrS8GUzL1lyGH 1/sMavkcFwx7RfPqpPGex1MaJ4blx/iPlZkvBlgynvAJz0OBOfcf0J9cKW1dH0RyIllNdRLFgrjpi Z+oiXdhDgYP0NU+zG3NYuwd6aWCnXxWl9D8FMddOUnuyCKABf6MqpBa22o0BmYNuMUWwWMyfZhhPD M1uResdFC4I1PpGAClcm82lQezxPOEbYdrNUslvNbS/7JF+z0EzazF1T548e+DefPWEZICo9VMZpD DyAQfi57+RTTx9uXaAgQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wu0-0007Sv-Ag; Thu, 28 Mar 2019 15:25:12 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wrn-00045K-19 for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:23:11 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 223841684; Thu, 28 Mar 2019 08:22:54 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9DA2C3F557; Thu, 28 Mar 2019 08:22:50 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 16/20] x86: mm: Point to struct seq_file from struct pg_state Date: Thu, 28 Mar 2019 15:21:00 +0000 Message-Id: <20190328152104.23106-17-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_082255_083155_C072F28A X-CRM114-Status: GOOD ( 16.39 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP mm/dump_pagetables.c passes both struct seq_file and struct pg_state down the chain of walk_*_level() functions to be passed to note_page(). Instead place the struct seq_file in struct pg_state and access it from struct pg_state (which is private to this file) in note_page(). 
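The likely motivation (stated here as an assumption, since the patch itself only describes the mechanics) is that the generic walker hands each callback a single private pointer, so everything note_page() needs has to travel inside one structure; roughly:

/*
 * Illustrative sketch, not code from this series: once the dumper is
 * driven by walk_page_range(), a callback only receives walk->private,
 * so the seq_file must be reachable from the pg_state it points to.
 */
static int ptdump_pte_entry_sketch(pte_t *pte, unsigned long addr,
                                   unsigned long next, struct mm_walk *walk)
{
        struct pg_state *st = walk->private;    /* one struct carries it all */

        note_page(st, __pgprot(pte_flags(*pte)), pte_flags(*pte), 5);
        return 0;
}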
Signed-off-by: Steven Price --- arch/x86/mm/dump_pagetables.c | 69 ++++++++++++++++++----------------- 1 file changed, 35 insertions(+), 34 deletions(-) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index e2b53db92c34..3d12ac031144 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -40,6 +40,7 @@ struct pg_state { bool to_dmesg; bool check_wx; unsigned long wx_pages; + struct seq_file *seq; }; struct addr_marker { @@ -268,11 +269,12 @@ static void note_wx(struct pg_state *st) * of PTE entries; the next one is different so we need to * print what we collected so far. */ -static void note_page(struct seq_file *m, struct pg_state *st, - pgprot_t new_prot, pgprotval_t new_eff, int level) +static void note_page(struct pg_state *st, pgprot_t new_prot, + pgprotval_t new_eff, int level) { pgprotval_t prot, cur, eff; static const char units[] = "BKMGTPE"; + struct seq_file *m = st->seq; /* * If we have a "break" in the series, we need to flush the state that @@ -358,8 +360,8 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2) ((prot1 | prot2) & _PAGE_NX); } -static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, - pgprotval_t eff_in, unsigned long P) +static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in, + unsigned long P) { int i; pte_t *pte; @@ -370,7 +372,7 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, pte = pte_offset_map(&addr, st->current_address); prot = pte_flags(*pte); eff = effective_prot(eff_in, prot); - note_page(m, st, __pgprot(prot), eff, 5); + note_page(st, __pgprot(prot), eff, 5); pte_unmap(pte); } } @@ -383,22 +385,20 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr, * us dozens of seconds (minutes for 5-level config) while checking for * W+X mapping or reading kernel_page_tables debugfs file. 
*/ -static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, - void *pt) +static inline bool kasan_page_table(struct pg_state *st, void *pt) { if (__pa(pt) == __pa(kasan_early_shadow_pmd) || (pgtable_l5_enabled() && __pa(pt) == __pa(kasan_early_shadow_p4d)) || __pa(pt) == __pa(kasan_early_shadow_pud)) { pgprotval_t prot = pte_flags(kasan_early_shadow_pte[0]); - note_page(m, st, __pgprot(prot), 0, 5); + note_page(st, __pgprot(prot), 0, 5); return true; } return false; } #else -static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, - void *pt) +static inline bool kasan_page_table(struct pg_state *st, void *pt) { return false; } @@ -406,7 +406,7 @@ static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, #if PTRS_PER_PMD > 1 -static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, +static void walk_pmd_level(struct pg_state *st, pud_t addr, pgprotval_t eff_in, unsigned long P) { int i; @@ -420,19 +420,19 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, prot = pmd_flags(*start); eff = effective_prot(eff_in, prot); if (pmd_large(*start) || !pmd_present(*start)) { - note_page(m, st, __pgprot(prot), eff, 4); - } else if (!kasan_page_table(m, st, pmd_start)) { - walk_pte_level(m, st, *start, eff, + note_page(st, __pgprot(prot), eff, 4); + } else if (!kasan_page_table(st, pmd_start)) { + walk_pte_level(st, *start, eff, P + i * PMD_LEVEL_MULT); } } else - note_page(m, st, __pgprot(0), 0, 4); + note_page(st, __pgprot(0), 0, 4); start++; } } #else -#define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p) +#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p) #undef pud_large #define pud_large(a) pmd_large(__pmd(pud_val(a))) #define pud_none(a) pmd_none(__pmd(pud_val(a))) @@ -440,8 +440,8 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr, #if PTRS_PER_PUD > 1 -static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr, - pgprotval_t eff_in, unsigned long P) +static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in, + unsigned long P) { int i; pud_t *start, *pud_start; @@ -455,34 +455,34 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr, prot = pud_flags(*start); eff = effective_prot(eff_in, prot); if (pud_large(*start) || !pud_present(*start)) { - note_page(m, st, __pgprot(prot), eff, 3); - } else if (!kasan_page_table(m, st, pud_start)) { - walk_pmd_level(m, st, *start, eff, + note_page(st, __pgprot(prot), eff, 3); + } else if (!kasan_page_table(st, pud_start)) { + walk_pmd_level(st, *start, eff, P + i * PUD_LEVEL_MULT); } } else - note_page(m, st, __pgprot(0), 0, 3); + note_page(st, __pgprot(0), 0, 3); start++; } } #else -#define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p) +#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p) #undef p4d_large #define p4d_large(a) pud_large(__pud(p4d_val(a))) #define p4d_none(a) pud_none(__pud(p4d_val(a))) #endif -static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr, - pgprotval_t eff_in, unsigned long P) +static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in, + unsigned long P) { int i; p4d_t *start, *p4d_start; pgprotval_t prot, eff; if (PTRS_PER_P4D == 1) - return walk_pud_level(m, st, __p4d(pgd_val(addr)), eff_in, P); + return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P); p4d_start = start = (p4d_t 
*)pgd_page_vaddr(addr); @@ -492,13 +492,13 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr, prot = p4d_flags(*start); eff = effective_prot(eff_in, prot); if (p4d_large(*start) || !p4d_present(*start)) { - note_page(m, st, __pgprot(prot), eff, 2); - } else if (!kasan_page_table(m, st, p4d_start)) { - walk_pud_level(m, st, *start, eff, + note_page(st, __pgprot(prot), eff, 2); + } else if (!kasan_page_table(st, p4d_start)) { + walk_pud_level(st, *start, eff, P + i * P4D_LEVEL_MULT); } } else - note_page(m, st, __pgprot(0), 0, 2); + note_page(st, __pgprot(0), 0, 2); start++; } @@ -536,6 +536,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, } st.check_wx = checkwx; + st.seq = m; if (checkwx) st.wx_pages = 0; @@ -549,13 +550,13 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, eff = prot; #endif if (pgd_large(*start) || !pgd_present(*start)) { - note_page(m, &st, __pgprot(prot), eff, 1); + note_page(&st, __pgprot(prot), eff, 1); } else { - walk_p4d_level(m, &st, *start, eff, + walk_p4d_level(&st, *start, eff, i * PGD_LEVEL_MULT); } } else - note_page(m, &st, __pgprot(0), 0, 1); + note_page(&st, __pgprot(0), 0, 1); cond_resched(); start++; @@ -563,7 +564,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, /* Flush out the last page */ st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT); - note_page(m, &st, __pgprot(0), 0, 0); + note_page(&st, __pgprot(0), 0, 0); if (!checkwx) return; if (st.wx_pages) From patchwork Thu Mar 28 15:21:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875185 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B944014DE for ; Thu, 28 Mar 2019 15:25:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A156C284AF for ; Thu, 28 Mar 2019 15:25:29 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 9448E28B74; Thu, 28 Mar 2019 15:25:29 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 37F4F284AF for ; Thu, 28 Mar 2019 15:25:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=p1j3ZvhBGtFu125vXV18HklrfCWLyixbjw9RYoINkNQ=; b=kFVylk4dmD24XT NEeUyWOIAfTXgbsb+mx2FQNkIwm1d6h4dLI+drlKAMz9KS5DmlsWLG6uAVIZPuElj8NK/nwuX6HVT aBa6J5OGGI/mBQ/dl1FPvBwURz+gyAt5RLOz1fOHDY2I2W27A2FyP/rHW1+FqLc2gMGJCR2IzCd5I InuQZBmGxPB3TSQsUinBImIPt78bA6oYVKwiLpdrt1ZTJg+rUV1OdeU6ZNcD2f7tLi1BJxHie+eIj 
Ia5nn7OqSCVv8UemFl9Dx8aCIh9vEQCFdhxiH9wUYP98jVI+yZL6q8txhU83litMbBCs1g0j8aFev pM1jqNMG+CQHMzYFEv0A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9WuA-0008RO-IW; Thu, 28 Mar 2019 15:25:22 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wrp-00043d-TC for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:23:10 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C649C168F; Thu, 28 Mar 2019 08:22:57 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7DF2F3F557; Thu, 28 Mar 2019 08:22:54 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 17/20] x86: mm+efi: Convert ptdump_walk_pgd_level() to take a mm_struct Date: Thu, 28 Mar 2019 15:21:01 +0000 Message-Id: <20190328152104.23106-18-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_082258_160372_B4D862FA X-CRM114-Status: GOOD ( 14.42 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP To enable x86 to use the generic walk_page_range() function, the callers of ptdump_walk_pgd_level() need to pass an mm_struct rather than the raw pgd_t pointer. Luckily since commit 7e904a91bf60 ("efi: Use efi_mm in x86 as well as ARM") we now have an mm_struct for EFI on x86. 
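The reason an mm_struct (rather than a bare pgd_t pointer) is needed is that the generic walker looks the top-level table up from walk->mm; a simplified sketch of that lookup (not code from this patch) is:

static int walk_pgd_range_sketch(unsigned long addr, unsigned long end,
                                 struct mm_walk *walk)
{
        pgd_t *pgd = pgd_offset(walk->mm, addr);  /* needs an mm, not a pgd */
        unsigned long next;
        int err = 0;

        do {
                next = pgd_addr_end(addr, end);
                /* ... dispatch pgd_entry()/walk_p4d_range() for this entry ... */
        } while (pgd++, addr = next, addr != end);

        return err;
}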
Signed-off-by: Steven Price --- arch/x86/include/asm/pgtable.h | 2 +- arch/x86/mm/dump_pagetables.c | 4 ++-- arch/x86/platform/efi/efi_32.c | 2 +- arch/x86/platform/efi/efi_64.c | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 0dd04cf6ebeb..579959750f34 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -27,7 +27,7 @@ extern pgd_t early_top_pgt[PTRS_PER_PGD]; int __init __early_make_pgtable(unsigned long address, pmdval_t pmd); -void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd); +void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm); void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user); void ptdump_walk_pgd_level_checkwx(void); void ptdump_walk_user_pgd_level_checkwx(void); diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index 3d12ac031144..ddf8ea6b059d 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -574,9 +574,9 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd, pr_info("x86/mm: Checked W+X mappings: passed, no W+X pages found.\n"); } -void ptdump_walk_pgd_level(struct seq_file *m, pgd_t *pgd) +void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm) { - ptdump_walk_pgd_level_core(m, pgd, false, true); + ptdump_walk_pgd_level_core(m, mm->pgd, false, true); } void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user) diff --git a/arch/x86/platform/efi/efi_32.c b/arch/x86/platform/efi/efi_32.c index 9959657127f4..9175ceaa6e72 100644 --- a/arch/x86/platform/efi/efi_32.c +++ b/arch/x86/platform/efi/efi_32.c @@ -49,7 +49,7 @@ void efi_sync_low_kernel_mappings(void) {} void __init efi_dump_pagetable(void) { #ifdef CONFIG_EFI_PGT_DUMP - ptdump_walk_pgd_level(NULL, swapper_pg_dir); + ptdump_walk_pgd_level(NULL, init_mm); #endif } diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c index cf0347f61b21..a2e0f9800190 100644 --- a/arch/x86/platform/efi/efi_64.c +++ b/arch/x86/platform/efi/efi_64.c @@ -611,9 +611,9 @@ void __init efi_dump_pagetable(void) { #ifdef CONFIG_EFI_PGT_DUMP if (efi_enabled(EFI_OLD_MEMMAP)) - ptdump_walk_pgd_level(NULL, swapper_pg_dir); + ptdump_walk_pgd_level(NULL, init_mm); else - ptdump_walk_pgd_level(NULL, efi_mm.pgd); + ptdump_walk_pgd_level(NULL, efi_mm); #endif } From patchwork Thu Mar 28 15:21:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Price X-Patchwork-Id: 10875187 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ED1791390 for ; Thu, 28 Mar 2019 15:25:36 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D3EA5284AF for ; Thu, 28 Mar 2019 15:25:36 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C747228AD5; Thu, 28 Mar 2019 15:25:36 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=unavailable version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) 
(No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3A74328AAF for ; Thu, 28 Mar 2019 15:25:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=QnjxBbxjX+x2rURLjrH6m4vKTwHNGt4krTLltvD25mw=; b=fH2iBbmDGPSULr /MJgDenuOGKi83ffAJEyFp8Zt/qEogNU/01sSkOHUANyScHOzh5yPmkZGvs3h1Zp1+HMBveRp1EI2 IB24Wwu0nhxKuhhfaBcoswYBzsFkgMF+gUP+eoXseymRYGy0AfpSLMn88ktdkh248rgD+pdCxqzHg AkwXaPxU3Cvpz7YMdpWx1PstfhHVDAi6KyfCXINUphAsMLvIXddXtoavpXEudhfeOPvYhrIJ/poyN 2sN2jItp4PFTcIOwMfWLyjG5zVLxRa5/TZQ17pKS6YLCqFkArnp4oHLfiGncAc310UsyjRPVPo5Bk 0ijBH/0AsXESOhGruD7A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9WuM-0000EZ-2L; Thu, 28 Mar 2019 15:25:34 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1h9Wrt-0004B3-QC for linux-arm-kernel@lists.infradead.org; Thu, 28 Mar 2019 15:23:17 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5988D15AB; Thu, 28 Mar 2019 08:23:01 -0700 (PDT) Received: from e112269-lin.arm.com (e112269-lin.cambridge.arm.com [10.1.196.69]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1282A3F557; Thu, 28 Mar 2019 08:22:57 -0700 (PDT) From: Steven Price To: linux-mm@kvack.org Subject: [PATCH v7 18/20] x86: mm: Convert ptdump_walk_pgd_level_debugfs() to take an mm_struct Date: Thu, 28 Mar 2019 15:21:02 +0000 Message-Id: <20190328152104.23106-19-steven.price@arm.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190328152104.23106-1-steven.price@arm.com> References: <20190328152104.23106-1-steven.price@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190328_082302_458634_63229F4A X-CRM114-Status: GOOD ( 14.50 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , x86@kernel.org, Arnd Bergmann , Ard Biesheuvel , Peter Zijlstra , Catalin Marinas , Dave Hansen , Will Deacon , linux-kernel@vger.kernel.org, Steven Price , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Ingo Molnar , Borislav Petkov , Andy Lutomirski , "H. Peter Anvin" , James Morse , Thomas Gleixner , linux-arm-kernel@lists.infradead.org, "Liang, Kan" Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP To enable x86 to use the generic walk_page_range() function, the callers of ptdump_walk_pgd_level_debugfs() need to pass in the mm_struct. This means that ptdump_walk_pgd_level_core() is now always passed a valid pgd, so drop the support for pgd==NULL. 
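As a usage note, after this change a debugfs consumer only ever hands the dumper an mm_struct; a minimal show routine (hypothetical, mirroring the converted callers in the diff below) would be:

/* The mm carries the pgd; for 'user' dumps the helper itself switches to
 * the user half of the PTI page-table pair via kernel_to_user_pgdp().
 */
static int my_ptdump_show(struct seq_file *m, void *v)
{
        ptdump_walk_pgd_level_debugfs(m, &init_mm, false);
        return 0;
}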
Signed-off-by: Steven Price
---
 arch/x86/include/asm/pgtable.h | 3 ++-
 arch/x86/mm/debug_pagetables.c | 8 ++++----
 arch/x86/mm/dump_pagetables.c  | 14 ++++++--------
 3 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 579959750f34..5abf693dc9b2 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -28,7 +28,8 @@ extern pgd_t early_top_pgt[PTRS_PER_PGD];
 int __init __early_make_pgtable(unsigned long address, pmdval_t pmd);
 void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm);
-void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user);
+void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
+				   bool user);
 void ptdump_walk_pgd_level_checkwx(void);
 void ptdump_walk_user_pgd_level_checkwx(void);
diff --git a/arch/x86/mm/debug_pagetables.c b/arch/x86/mm/debug_pagetables.c
index cd84f067e41d..824131052574 100644
--- a/arch/x86/mm/debug_pagetables.c
+++ b/arch/x86/mm/debug_pagetables.c
@@ -6,7 +6,7 @@ static int ptdump_show(struct seq_file *m, void *v)
 {
-	ptdump_walk_pgd_level_debugfs(m, NULL, false);
+	ptdump_walk_pgd_level_debugfs(m, &init_mm, false);
 	return 0;
 }
@@ -16,7 +16,7 @@ static int ptdump_curknl_show(struct seq_file *m, void *v)
 {
 	if (current->mm->pgd) {
 		down_read(&current->mm->mmap_sem);
-		ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, false);
+		ptdump_walk_pgd_level_debugfs(m, current->mm, false);
 		up_read(&current->mm->mmap_sem);
 	}
 	return 0;
@@ -31,7 +31,7 @@ static int ptdump_curusr_show(struct seq_file *m, void *v)
 {
 	if (current->mm->pgd) {
 		down_read(&current->mm->mmap_sem);
-		ptdump_walk_pgd_level_debugfs(m, current->mm->pgd, true);
+		ptdump_walk_pgd_level_debugfs(m, current->mm, true);
 		up_read(&current->mm->mmap_sem);
 	}
 	return 0;
@@ -46,7 +46,7 @@ static struct dentry *pe_efi;
 static int ptdump_efi_show(struct seq_file *m, void *v)
 {
 	if (efi_mm.pgd)
-		ptdump_walk_pgd_level_debugfs(m, efi_mm.pgd, false);
+		ptdump_walk_pgd_level_debugfs(m, &efi_mm, false);
 	return 0;
 }
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index ddf8ea6b059d..40b3f1da6e15 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -525,16 +525,12 @@ static inline bool is_hypervisor_range(int idx)
 static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
 				       bool checkwx, bool dmesg)
 {
-	pgd_t *start = INIT_PGD;
+	pgd_t *start = pgd;
 	pgprotval_t prot, eff;
 	int i;
 	struct pg_state st = {};
-	if (pgd) {
-		start = pgd;
-		st.to_dmesg = dmesg;
-	}
-
+	st.to_dmesg = dmesg;
 	st.check_wx = checkwx;
 	st.seq = m;
 	if (checkwx)
@@ -579,8 +575,10 @@ void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm)
 	ptdump_walk_pgd_level_core(m, mm->pgd, false, true);
 }
-void ptdump_walk_pgd_level_debugfs(struct seq_file *m, pgd_t *pgd, bool user)
+void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
+				   bool user)
 {
+	pgd_t *pgd = mm->pgd;
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
 	if (user && static_cpu_has(X86_FEATURE_PTI))
 		pgd = kernel_to_user_pgdp(pgd);
@@ -606,7 +604,7 @@ void ptdump_walk_user_pgd_level_checkwx(void)
 void ptdump_walk_pgd_level_checkwx(void)
 {
-	ptdump_walk_pgd_level_core(NULL, NULL, true, false);
+	ptdump_walk_pgd_level_core(NULL, INIT_PGD, true, false);
 }

 static int __init pt_dump_init(void)

From patchwork Thu Mar 28 15:21:03 2019
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 10875181
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 19/20] x86: mm: Convert ptdump_walk_pgd_level_core() to take an mm_struct
Date: Thu, 28 Mar 2019 15:21:03 +0000
Message-Id: <20190328152104.23106-20-steven.price@arm.com>
In-Reply-To: <20190328152104.23106-1-steven.price@arm.com>
References: <20190328152104.23106-1-steven.price@arm.com>
An mm_struct is needed to enable x86 to make use of the generic
walk_page_range() function. In the case of walking the user page tables (when
CONFIG_PAGE_TABLE_ISOLATION is enabled), it is necessary to create a fake_mm
structure because there isn't an mm_struct with a pointer to the pgd of the
user page tables. This fake_mm structure is initialised with the minimum
necessary for the generic page walk code.

Signed-off-by: Steven Price
---
 arch/x86/mm/dump_pagetables.c | 36 ++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 40b3f1da6e15..c0fbb9e5a790 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -111,8 +111,6 @@ static struct addr_marker address_markers[] = {
 	[END_OF_SPACE_NR] = { -1, NULL }
 };
-#define INIT_PGD ((pgd_t *) &init_top_pgt)
-
 #else /* CONFIG_X86_64 */

 enum address_markers_idx {
@@ -147,8 +145,6 @@ static struct addr_marker address_markers[] = {
 	[END_OF_SPACE_NR] = { -1, NULL }
 };
-#define INIT_PGD (swapper_pg_dir)
-
 #endif /* !CONFIG_X86_64 */

 /* Multipliers for offsets within the PTEs */
@@ -522,10 +518,10 @@ static inline bool is_hypervisor_range(int idx)
 #endif
 }
-static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
+static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm,
 				       bool checkwx, bool dmesg)
 {
-	pgd_t *start = pgd;
+	pgd_t *start = mm->pgd;
 	pgprotval_t prot, eff;
 	int i;
 	struct pg_state st = {};
@@ -572,39 +568,49 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
 void ptdump_walk_pgd_level(struct seq_file *m, struct mm_struct *mm)
 {
-	ptdump_walk_pgd_level_core(m, mm->pgd, false, true);
+	ptdump_walk_pgd_level_core(m, mm, false, true);
 }
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+static void ptdump_walk_pgd_level_user_core(struct seq_file *m,
+					    struct mm_struct *mm,
+					    bool checkwx, bool dmesg)
+{
+	struct mm_struct fake_mm = {
+		.pgd = kernel_to_user_pgdp(mm->pgd)
+	};
+	init_rwsem(&fake_mm.mmap_sem);
+	ptdump_walk_pgd_level_core(m, &fake_mm, checkwx, dmesg);
+}
+#endif
+
 void ptdump_walk_pgd_level_debugfs(struct seq_file *m, struct mm_struct *mm,
 				   bool user)
 {
-	pgd_t *pgd = mm->pgd;
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
 	if (user && static_cpu_has(X86_FEATURE_PTI))
-		pgd = kernel_to_user_pgdp(pgd);
+		ptdump_walk_pgd_level_user_core(m, mm, false, false);
+	else
 #endif
-	ptdump_walk_pgd_level_core(m, pgd, false, false);
+	ptdump_walk_pgd_level_core(m, mm, false, false);
 }
 EXPORT_SYMBOL_GPL(ptdump_walk_pgd_level_debugfs);

 void ptdump_walk_user_pgd_level_checkwx(void)
 {
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
-	pgd_t *pgd = INIT_PGD;
-
 	if (!(__supported_pte_mask & _PAGE_NX) || !static_cpu_has(X86_FEATURE_PTI))
 		return;
 	pr_info("x86/mm: Checking user space page tables\n");
-	pgd = kernel_to_user_pgdp(pgd);
-	ptdump_walk_pgd_level_core(NULL, pgd, true, false);
+	ptdump_walk_pgd_level_user_core(NULL, &init_mm, true, false);
 #endif
 }

 void ptdump_walk_pgd_level_checkwx(void)
 {
-	ptdump_walk_pgd_level_core(NULL, INIT_PGD, true, false);
+	ptdump_walk_pgd_level_core(NULL, &init_mm, true, false);
 }
 static int __init pt_dump_init(void)

From patchwork Thu Mar 28 15:21:04 2019
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 10875189
From: Steven Price
To: linux-mm@kvack.org
Subject: [PATCH v7 20/20] x86: mm: Convert dump_pagetables to use walk_page_range
Date: Thu, 28 Mar 2019 15:21:04 +0000
Message-Id: <20190328152104.23106-21-steven.price@arm.com>
In-Reply-To: <20190328152104.23106-1-steven.price@arm.com>
References: <20190328152104.23106-1-steven.price@arm.com>
Make use of the new functionality in walk_page_range to remove the arch page
walking code and use the generic code to walk the page tables. The effective
permissions are passed down the chain using new fields in struct pg_state. The
KASAN optimisation is implemented by including test_p?d callbacks which can
decide to skip an entire tree of entries.

Signed-off-by: Steven Price --- arch/x86/mm/dump_pagetables.c | 280 ++++++++++++++++++---------------- 1 file changed, 146 insertions(+), 134 deletions(-) diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c index c0fbb9e5a790..f6b814aaddf7 100644 --- a/arch/x86/mm/dump_pagetables.c +++ b/arch/x86/mm/dump_pagetables.c @@ -33,6 +33,10 @@ struct pg_state { int level; pgprot_t current_prot; pgprotval_t effective_prot; + pgprotval_t effective_prot_pgd; + pgprotval_t effective_prot_p4d; + pgprotval_t effective_prot_pud; + pgprotval_t effective_prot_pmd; unsigned long start_address; unsigned long current_address; const struct addr_marker *marker; @@ -356,22 +360,21 @@ static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2) ((prot1 | prot2) & _PAGE_NX); } -static void walk_pte_level(struct pg_state *st, pmd_t addr, pgprotval_t eff_in, - unsigned long P) +static int ptdump_pte_entry(pte_t *pte, unsigned long addr, + unsigned long next, struct mm_walk *walk) { - int i; - pte_t *pte; - pgprotval_t prot, eff; - - for (i = 0; i < PTRS_PER_PTE; i++) { - st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT); - pte = pte_offset_map(&addr, st->current_address); - prot = pte_flags(*pte); - eff = effective_prot(eff_in, prot); - note_page(st, __pgprot(prot), eff, 5); - pte_unmap(pte); - } + struct pg_state *st = walk->private; + pgprotval_t eff, prot; + + st->current_address = normalize_addr(addr); + + prot = pte_flags(*pte); + eff = effective_prot(st->effective_prot_pmd, prot); + note_page(st, __pgprot(prot), eff, 5); + + return 0; } + #ifdef CONFIG_KASAN /* @@ -400,131 +403,152 @@ static inline bool kasan_page_table(struct pg_state *st, void *pt) } #endif -#if PTRS_PER_PMD > 1 - -static void walk_pmd_level(struct pg_state *st, pud_t addr, - pgprotval_t eff_in, unsigned long P) +static int ptdump_test_pmd(unsigned long addr, unsigned long next, + pmd_t *pmd, struct mm_walk *walk) { - int i; - pmd_t *start, *pmd_start; - pgprotval_t prot, eff; - - pmd_start = start = (pmd_t *)pud_page_vaddr(addr); - for (i = 0; i < PTRS_PER_PMD; i++) { - st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT); - if (!pmd_none(*start)) { - prot = pmd_flags(*start); - eff = effective_prot(eff_in, prot); - if (pmd_large(*start) || !pmd_present(*start)) { - note_page(st, __pgprot(prot), eff, 4); - } else if (!kasan_page_table(st, pmd_start)) { - walk_pte_level(st, *start, eff, - P + i * PMD_LEVEL_MULT); - } - } else - note_page(st, __pgprot(0), 0, 4); - start++; - } + struct pg_state *st = walk->private; + + st->current_address = 
normalize_addr(addr); + + if (kasan_page_table(st, pmd)) + return 1; + return 0; } -#else -#define walk_pmd_level(s,a,e,p) walk_pte_level(s,__pmd(pud_val(a)),e,p) -#undef pud_large -#define pud_large(a) pmd_large(__pmd(pud_val(a))) -#define pud_none(a) pmd_none(__pmd(pud_val(a))) -#endif +static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct pg_state *st = walk->private; + pgprotval_t eff, prot; + + prot = pmd_flags(*pmd); + eff = effective_prot(st->effective_prot_pud, prot); + + st->current_address = normalize_addr(addr); + + if (pmd_large(*pmd)) + note_page(st, __pgprot(prot), eff, 4); -#if PTRS_PER_PUD > 1 + st->effective_prot_pmd = eff; -static void walk_pud_level(struct pg_state *st, p4d_t addr, pgprotval_t eff_in, - unsigned long P) + return 0; +} + +static int ptdump_test_pud(unsigned long addr, unsigned long next, + pud_t *pud, struct mm_walk *walk) { - int i; - pud_t *start, *pud_start; - pgprotval_t prot, eff; - - pud_start = start = (pud_t *)p4d_page_vaddr(addr); - - for (i = 0; i < PTRS_PER_PUD; i++) { - st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT); - if (!pud_none(*start)) { - prot = pud_flags(*start); - eff = effective_prot(eff_in, prot); - if (pud_large(*start) || !pud_present(*start)) { - note_page(st, __pgprot(prot), eff, 3); - } else if (!kasan_page_table(st, pud_start)) { - walk_pmd_level(st, *start, eff, - P + i * PUD_LEVEL_MULT); - } - } else - note_page(st, __pgprot(0), 0, 3); + struct pg_state *st = walk->private; - start++; - } + st->current_address = normalize_addr(addr); + + if (kasan_page_table(st, pud)) + return 1; + return 0; } -#else -#define walk_pud_level(s,a,e,p) walk_pmd_level(s,__pud(p4d_val(a)),e,p) -#undef p4d_large -#define p4d_large(a) pud_large(__pud(p4d_val(a))) -#define p4d_none(a) pud_none(__pud(p4d_val(a))) -#endif +static int ptdump_pud_entry(pud_t *pud, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct pg_state *st = walk->private; + pgprotval_t eff, prot; + + prot = pud_flags(*pud); + eff = effective_prot(st->effective_prot_p4d, prot); + + st->current_address = normalize_addr(addr); + + if (pud_large(*pud)) + note_page(st, __pgprot(prot), eff, 3); + + st->effective_prot_pud = eff; -static void walk_p4d_level(struct pg_state *st, pgd_t addr, pgprotval_t eff_in, - unsigned long P) + return 0; +} + +static int ptdump_test_p4d(unsigned long addr, unsigned long next, + p4d_t *p4d, struct mm_walk *walk) { - int i; - p4d_t *start, *p4d_start; - pgprotval_t prot, eff; - - if (PTRS_PER_P4D == 1) - return walk_pud_level(st, __p4d(pgd_val(addr)), eff_in, P); - - p4d_start = start = (p4d_t *)pgd_page_vaddr(addr); - - for (i = 0; i < PTRS_PER_P4D; i++) { - st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT); - if (!p4d_none(*start)) { - prot = p4d_flags(*start); - eff = effective_prot(eff_in, prot); - if (p4d_large(*start) || !p4d_present(*start)) { - note_page(st, __pgprot(prot), eff, 2); - } else if (!kasan_page_table(st, p4d_start)) { - walk_pud_level(st, *start, eff, - P + i * P4D_LEVEL_MULT); - } - } else - note_page(st, __pgprot(0), 0, 2); + struct pg_state *st = walk->private; - start++; - } + st->current_address = normalize_addr(addr); + + if (kasan_page_table(st, p4d)) + return 1; + return 0; } -#undef pgd_large -#define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a)))) -#define pgd_none(a) (pgtable_l5_enabled() ? 
pgd_none(a) : p4d_none(__p4d(pgd_val(a)))) +static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + struct pg_state *st = walk->private; + pgprotval_t eff, prot; + + prot = p4d_flags(*p4d); + eff = effective_prot(st->effective_prot_pgd, prot); + + st->current_address = normalize_addr(addr); + + if (p4d_large(*p4d)) + note_page(st, __pgprot(prot), eff, 2); + + st->effective_prot_p4d = eff; + + return 0; +} -static inline bool is_hypervisor_range(int idx) +static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr, + unsigned long next, struct mm_walk *walk) { -#ifdef CONFIG_X86_64 - /* - * A hole in the beginning of kernel address space reserved - * for a hypervisor. - */ - return (idx >= pgd_index(GUARD_HOLE_BASE_ADDR)) && - (idx < pgd_index(GUARD_HOLE_END_ADDR)); + struct pg_state *st = walk->private; + pgprotval_t eff, prot; + + prot = pgd_flags(*pgd); + +#ifdef CONFIG_X86_PAE + eff = _PAGE_USER | _PAGE_RW; #else - return false; + eff = prot; #endif + + st->current_address = normalize_addr(addr); + + if (pgd_large(*pgd)) + note_page(st, __pgprot(prot), eff, 1); + + st->effective_prot_pgd = eff; + + return 0; +} + +static int ptdump_hole(unsigned long addr, unsigned long next, + struct mm_walk *walk) +{ + struct pg_state *st = walk->private; + + st->current_address = normalize_addr(addr); + + note_page(st, __pgprot(0), 0, -1); + + return 0; } static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm, bool checkwx, bool dmesg) { - pgd_t *start = mm->pgd; - pgprotval_t prot, eff; - int i; struct pg_state st = {}; + struct mm_walk walk = { + .mm = mm, + .pgd_entry = ptdump_pgd_entry, + .p4d_entry = ptdump_p4d_entry, + .pud_entry = ptdump_pud_entry, + .pmd_entry = ptdump_pmd_entry, + .pte_entry = ptdump_pte_entry, + .test_p4d = ptdump_test_p4d, + .test_pud = ptdump_test_pud, + .test_pmd = ptdump_test_pmd, + .pte_hole = ptdump_hole, + .private = &st + }; st.to_dmesg = dmesg; st.check_wx = checkwx; @@ -532,27 +556,15 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, struct mm_struct *mm, if (checkwx) st.wx_pages = 0; - for (i = 0; i < PTRS_PER_PGD; i++) { - st.current_address = normalize_addr(i * PGD_LEVEL_MULT); - if (!pgd_none(*start) && !is_hypervisor_range(i)) { - prot = pgd_flags(*start); -#ifdef CONFIG_X86_PAE - eff = _PAGE_USER | _PAGE_RW; + down_read(&mm->mmap_sem); +#ifdef CONFIG_X86_64 + walk_page_range(0, PTRS_PER_PGD*PGD_LEVEL_MULT/2, &walk); + walk_page_range(normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT/2), ~0, + &walk); #else - eff = prot; + walk_page_range(0, ~0, &walk); #endif - if (pgd_large(*start) || !pgd_present(*start)) { - note_page(&st, __pgprot(prot), eff, 1); - } else { - walk_p4d_level(&st, *start, eff, - i * PGD_LEVEL_MULT); - } - } else - note_page(&st, __pgprot(0), 0, 1); - - cond_resched(); - start++; - } + up_read(&mm->mmap_sem); /* Flush out the last page */ st.current_address = normalize_addr(PTRS_PER_PGD*PGD_LEVEL_MULT);