From patchwork Thu Apr 4 09:46:49 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 10885327
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, akpm@linux-foundation.org, will.deacon@arm.com,
    catalin.marinas@arm.com
Subject: [RFC 1/2] mm/vmemmap: Enable vmem_altmap based base page mapping for vmemmap
Date: Thu, 4 Apr 2019 15:16:49 +0530
Message-Id: <1554371210-24736-1-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1554265806-11501-1-git-send-email-anshuman.khandual@arm.com>
References: <1554265806-11501-1-git-send-email-anshuman.khandual@arm.com>
Cc: mark.rutland@arm.com, mhocko@suse.com, david@redhat.com,
    robin.murphy@arm.com, cai@lca.pw, logang@deltatee.com,
    james.morse@arm.com, cpandya@codeaurora.org, arunks@codeaurora.org,
    dan.j.williams@intel.com, mgorman@techsingularity.net, osalvador@suse.de

vmemmap_populate_basepages() is used for vmemmap mapping across platforms.
On arm64 it is used for the ARM64_16K_PAGES and ARM64_64K_PAGES configs.
When applicable, enable its allocation from a device memory range through
struct vmem_altmap. Individual architectures should enable this where
appropriate; all existing callers pass NULL, so the current semantics are
preserved.

Signed-off-by: Anshuman Khandual
---
 arch/arm64/mm/mmu.c      |  2 +-
 arch/ia64/mm/discontig.c |  2 +-
 arch/x86/mm/init_64.c    |  4 ++--
 include/linux/mm.h       |  5 +++--
 mm/sparse-vmemmap.c      | 14 ++++++++++----
 5 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 4b25b7544763..2859aa89cc4a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -921,7 +921,7 @@ remove_pagetable(unsigned long start, unsigned long end,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node);
+	return vmemmap_populate_basepages(start, end, node, NULL);
 }
 #else	/* !ARM64_SWAPPER_USES_SECTION_MAPS */
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index 05490dd073e6..faefd7ec991f 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -660,7 +660,7 @@ void arch_refresh_nodedata(int update_node, pg_data_t *update_pgdat)
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node);
+	return vmemmap_populate_basepages(start, end, node, NULL);
 }
 
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index bccff68e3267..e7e05d1b8bcf 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1450,7 +1450,7 @@ static int __meminit vmemmap_populate_hugepages(unsigned long start,
 			vmemmap_verify((pte_t *)pmd, node, addr, next);
 			continue;
 		}
-		if (vmemmap_populate_basepages(addr, next, node))
+		if (vmemmap_populate_basepages(addr, next, node, NULL))
 			return -ENOMEM;
 	}
 	return 0;
@@ -1468,7 +1468,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 				__func__);
 		err = -ENOMEM;
 	} else
-		err = vmemmap_populate_basepages(start, end, node);
+		err = vmemmap_populate_basepages(start, end, node, NULL);
 	if (!err)
 		sync_global_pgds(start, end - 1);
 	return err;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 76769749b5a5..a62e9ff24af3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2672,14 +2672,15 @@ pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
 p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
-pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node);
+pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
+			    struct vmem_altmap *altmap);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node);
 void *altmap_alloc_block_buf(unsigned long size, struct vmem_altmap *altmap);
 void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
 int vmemmap_populate_basepages(unsigned long start, unsigned long end,
-		int node);
+		int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap);
 void vmemmap_populate_print_last(void);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 7fec05796796..81a0960b5cd4 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -140,12 +140,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 			start, end - 1);
 }
 
-pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
+				       struct vmem_altmap *altmap)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(*pte)) {
 		pte_t entry;
-		void *p = vmemmap_alloc_block_buf(PAGE_SIZE, node);
+		void *p;
+
+		if (altmap)
+			p = altmap_alloc_block_buf(PAGE_SIZE, altmap);
+		else
+			p = vmemmap_alloc_block_buf(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
 		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
@@ -214,7 +220,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 }
 
 int __meminit vmemmap_populate_basepages(unsigned long start,
-					 unsigned long end, int node)
+					 unsigned long end, int node, struct vmem_altmap *altmap)
 {
 	unsigned long addr = start;
 	pgd_t *pgd;
@@ -236,7 +242,7 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
 		pmd = vmemmap_pmd_populate(pud, addr, node);
 		if (!pmd)
 			return -ENOMEM;
-		pte = vmemmap_pte_populate(pmd, addr, node);
+		pte = vmemmap_pte_populate(pmd, addr, node, altmap);
 		if (!pte)
 			return -ENOMEM;
 		vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
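
For illustration only: the new parameter changes nothing except where the pages
backing the vmemmap (the struct page array) come from. The helper below is a
hypothetical sketch, not part of this patch; a caller that wants the vmemmap
carved out of a hotplugged device range describes that range with a struct
vmem_altmap (field names as in include/linux/memremap.h) and passes it down,
while passing NULL keeps the existing page allocator behaviour.

/*
 * Hypothetical illustration: back the vmemmap for a device memory range with
 * pages taken from that range itself, via the new altmap parameter.
 */
static int __meminit example_populate_device_vmemmap(unsigned long vmemmap_start,
						      unsigned long vmemmap_end,
						      int nid,
						      unsigned long dev_base_pfn,
						      unsigned long dev_nr_pages)
{
	/*
	 * A real caller keeps the altmap alive for the lifetime of the
	 * mapping (e.g. inside its dev_pagemap); on-stack is for brevity.
	 */
	struct vmem_altmap altmap = {
		.base_pfn = dev_base_pfn,	/* first pfn of the device range */
		.free     = dev_nr_pages,	/* pages set aside for vmemmap use */
	};

	/* altmap != NULL: pte level pages come from the device range itself */
	return vmemmap_populate_basepages(vmemmap_start, vmemmap_end, nid, &altmap);
}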
From patchwork Thu Apr 4 09:46:50 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 10885329
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, akpm@linux-foundation.org, will.deacon@arm.com,
    catalin.marinas@arm.com
Subject: [RFC 2/2] arm64/mm: Enable ZONE_DEVICE for all page configs
Date: Thu, 4 Apr 2019 15:16:50 +0530
Message-Id: <1554371210-24736-2-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1554371210-24736-1-git-send-email-anshuman.khandual@arm.com>
References: <1554265806-11501-1-git-send-email-anshuman.khandual@arm.com>
 <1554371210-24736-1-git-send-email-anshuman.khandual@arm.com>
Cc: mark.rutland@arm.com, mhocko@suse.com, david@redhat.com,
    robin.murphy@arm.com, cai@lca.pw, logang@deltatee.com,
    james.morse@arm.com, cpandya@codeaurora.org, arunks@codeaurora.org,
    dan.j.williams@intel.com, mgorman@techsingularity.net, osalvador@suse.de

Now that vmemmap_populate_basepages() supports struct vmem_altmap based
allocations, ZONE_DEVICE can be functional across all page size configs.
vmemmap_populate_basepages() is now passed the actual struct vmem_altmap
for allocation, and remove_pagetable() is taught to accommodate the
resulting PTE level vmemmap mappings. Drop the ARM64_4K_PAGES restriction
from the ARCH_HAS_ZONE_DEVICE selection.
Signed-off-by: Anshuman Khandual
---
 arch/arm64/Kconfig  |  2 +-
 arch/arm64/mm/mmu.c | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b5d8cf57e220..4a37a33a4fe5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -31,7 +31,7 @@ config ARM64
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
-	select ARCH_HAS_ZONE_DEVICE if ARM64_4K_PAGES
+	select ARCH_HAS_ZONE_DEVICE
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPT
 	select ARCH_INLINE_READ_LOCK_BH if !PREEMPT
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2859aa89cc4a..509ed7e547a3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -818,8 +818,8 @@ static void __meminit free_pud_table(pud_t *pud_start, pgd_t *pgd, bool direct)
 #endif
 
 static void __meminit
-remove_pte_table(pte_t *pte_start, unsigned long addr,
-			unsigned long end, bool direct)
+remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
+			bool direct, struct vmem_altmap *altmap)
 {
 	pte_t *pte;
 
@@ -829,7 +829,7 @@ remove_pte_table(pte_t *pte_start, unsigned long addr,
 			continue;
 
 		if (!direct)
-			free_pagetable(pte_page(*pte), 0);
+			free_huge_pagetable(pte_page(*pte), 0, altmap);
 		spin_lock(&init_mm.page_table_lock);
 		pte_clear(&init_mm, addr, pte);
 		spin_unlock(&init_mm.page_table_lock);
@@ -860,7 +860,7 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
 			continue;
 		}
 		pte_base = pte_offset_kernel(pmd, 0UL);
-		remove_pte_table(pte_base, addr, next, direct);
+		remove_pte_table(pte_base, addr, next, direct, altmap);
 		free_pte_table(pte_base, pmd, direct);
 	}
 }
@@ -921,7 +921,7 @@ remove_pagetable(unsigned long start, unsigned long end,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node, NULL);
+	return vmemmap_populate_basepages(start, end, node, altmap);
 }
 #else	/* !ARM64_SWAPPER_USES_SECTION_MAPS */
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
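
free_huge_pagetable() used above is not defined in this diff; it presumably
comes from the memory hot-remove series this RFC builds on. Conceptually, an
altmap-aware free path has to distinguish vmemmap pages carved out of the
device range from ordinary page-allocator pages, roughly as x86 does in
free_hugepage_table()/free_pagetable(). A minimal sketch under that assumption
(helper name hypothetical, modelled on the x86 code):

/*
 * Hypothetical sketch, modelled on arch/x86/mm/init_64.c: pages that came
 * from the vmem_altmap are handed back to the altmap accounting, everything
 * else goes back to bootmem or the page allocator.
 */
static void __meminit example_free_vmemmap_pages(struct page *page,
						  unsigned int order,
						  struct vmem_altmap *altmap)
{
	unsigned long nr_pages = 1UL << order;

	if (altmap) {
		/* Pages were carved out of the device range: return them to the altmap */
		vmem_altmap_free(altmap, nr_pages);
	} else if (PageReserved(page)) {
		/* Bootmem allocated pages are released one at a time */
		while (nr_pages--)
			free_reserved_page(page++);
	} else {
		free_pages((unsigned long)page_address(page), order);
	}
}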