From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	platform-driver-x86@vger.kernel.org, linux-mm@kvack.org,
	Christoph Hellwig <hch@lst.de>, Dan Williams <dan.j.williams@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>,
	Logan Gunthorpe <logang@deltatee.com>, David Hildenbrand <david@redhat.com>,
	Michal Hocko <mhocko@suse.com>
Date: Mon, 9 Dec 2019 12:13:45 -0700
Message-Id: <20191209191346.5197-6-logang@deltatee.com>
In-Reply-To: <20191209191346.5197-1-logang@deltatee.com>
References: <20191209191346.5197-1-logang@deltatee.com>
Subject: [PATCH 5/6] mm, memory_hotplug: Provide argument for the pgprot_t in arch_add_memory()

devm_memremap_pages() is currently used by the PCI P2PDMA code to create
struct page mappings for IO memory. At present, these mappings are created
with PAGE_KERNEL, which implies setting the PAT bits to be WB. However, on
x86, an MTRR register will typically override this and force the cache
type to be UC-. In the case where the firmware doesn't set this register,
the mapping is effectively WB and accessing it will typically result in a
machine check exception. Other arches are not currently likely to function
correctly either, seeing as they have no MTRR registers to fall back on.

To solve this, add a pgprot_t argument to arch_add_memory() so that the
caller can explicitly specify the page protection used for the mapping.

Of the arches that support MEMORY_HOTPLUG: x86_64, s390, arm64 and powerpc
need only a simple change to pass the pgprot_t down to their respective
functions which set up the page tables. For x86_32, set the page protection
explicitly using _set_memory_prot(), seeing as the pages are already
mapped. For ia64 and sh, reject anything but PAGE_KERNEL settings -- this
should be fine for now, seeing as neither supports ZONE_DEVICE anyway.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 arch/arm64/mm/mmu.c            | 4 ++--
 arch/ia64/mm/init.c            | 5 ++++-
 arch/powerpc/mm/mem.c          | 4 ++--
 arch/s390/mm/init.c            | 4 ++--
 arch/sh/mm/init.c              | 5 ++++-
 arch/x86/mm/init_32.c          | 7 ++++++-
 arch/x86/mm/init_64.c          | 4 ++--
 include/linux/memory_hotplug.h | 2 +-
 mm/memory_hotplug.c            | 2 +-
 mm/memremap.c                  | 2 +-
 10 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 60c929f3683b..48b65272df15 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1050,7 +1050,7 @@ int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size,
+int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 			struct mhp_restrictions *restrictions)
 {
 	int flags = 0;
@@ -1059,7 +1059,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
-			     size, PAGE_KERNEL, __pgd_pgtable_alloc, flags);
+			     size, prot, __pgd_pgtable_alloc, flags);
 
 	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
 			   restrictions);
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index bf9df2625bc8..15a1efcecd83 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -669,13 +669,16 @@ mem_init (void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size,
+int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 		struct mhp_restrictions *restrictions)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
+	if (prot != PAGE_KERNEL)
+		return -EINVAL;
+
 	ret = __add_pages(nid, start_pfn, nr_pages, restrictions);
 	if (ret)
 		printk("%s: Problem encountered in __add_pages() as ret=%d\n",
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 22525d8935ce..a901c2b65801 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -105,7 +105,7 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end)
 	return -ENODEV;
 }
 
-int __ref arch_add_memory(int nid, u64 start, u64 size,
+int __ref arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 			  struct mhp_restrictions *restrictions)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
@@ -115,7 +115,7 @@ int __ref arch_add_memory(int nid, u64 start, u64 size,
 	resize_hpt_for_hotplug(memblock_phys_mem_size());
 
 	start = (unsigned long)__va(start);
-	rc = create_section_mapping(start, start + size, nid, PAGE_KERNEL);
+	rc = create_section_mapping(start, start + size, nid, prot);
 	if (rc) {
 		pr_warn("Unable to create mapping for hot added memory 0x%llx..0x%llx: %d\n",
 			start, start + size, rc);
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 263ebb074cdd..d3a67d8a1317 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -266,7 +266,7 @@ device_initcall(s390_cma_mem_init);
 
 #endif /* CONFIG_CMA */
 
-int arch_add_memory(int nid, u64 start, u64 size,
+int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 		struct mhp_restrictions *restrictions)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
@@ -276,7 +276,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	if (WARN_ON_ONCE(restrictions->altmap))
 		return -EINVAL;
 
-	rc = vmem_add_mapping(start, size, PAGE_KERNEL);
+	rc = vmem_add_mapping(start, size, prot);
 	if (rc)
 		return rc;
 
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index dfdbaa50946e..cf9f788115ff 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -405,13 +405,16 @@ void __init mem_init(void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size,
+int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 		    struct mhp_restrictions *restrictions)
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
+	if (prot != PAGE_KERNEL)
+		return -EINVAL;
+
 	/* We only have ZONE_NORMAL, so this is easy.. */
 	ret = __add_pages(nid, start_pfn, nr_pages, restrictions);
 	if (unlikely(ret))
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index d3cdd9137f42..c0fe624eb304 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -852,11 +852,16 @@ void __init mem_init(void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-int arch_add_memory(int nid, u64 start, u64 size,
+int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 		    struct mhp_restrictions *restrictions)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
+	int ret;
+
+	ret = _set_memory_prot(start, nr_pages, prot);
+	if (ret)
+		return ret;
 
 	return __add_pages(nid, start_pfn, nr_pages, restrictions);
 }
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 65a5093ec97b..c7d170d67b57 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -862,13 +862,13 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
 	return ret;
 }
 
-int arch_add_memory(int nid, u64 start, u64 size,
+int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 		    struct mhp_restrictions *restrictions)
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 
-	init_memory_mapping(start, start + size, PAGE_KERNEL);
+	init_memory_mapping(start, start + size, prot);
 
 	return add_pages(nid, start_pfn, nr_pages, restrictions);
 }
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index f46ea71b4ffd..82e8b3fcebab 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -111,7 +111,7 @@ extern void __online_page_free(struct page *page);
 
 extern int try_online_node(int nid);
 
-extern int arch_add_memory(int nid, u64 start, u64 size,
+extern int arch_add_memory(int nid, u64 start, u64 size, pgprot_t prot,
 			   struct mhp_restrictions *restrictions);
 extern u64 max_mem_size;
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index df570e5c71cc..0a581a344a00 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1035,7 +1035,7 @@ int __ref add_memory_resource(int nid, struct resource *res)
 	new_node = ret;
 
 	/* call arch's memory hotadd */
-	ret = arch_add_memory(nid, start, size, &restrictions);
+	ret = arch_add_memory(nid, start, size, PAGE_KERNEL, &restrictions);
 	if (ret < 0)
 		goto error;
 
diff --git a/mm/memremap.c b/mm/memremap.c
index 03ccbdfeb697..4edcca074e15 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -281,7 +281,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 		}
 
 		error = arch_add_memory(nid, res->start, resource_size(res),
-				&restrictions);
+				pgprot, &restrictions);
 	}
 
 	if (!error) {
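
(For illustration only -- not part of the patch itself. The snippet below is a
minimal, hypothetical sketch of the new calling convention, assuming
CONFIG_MEMORY_HOTPLUG and the mhp_restrictions API used in this series.
example_add_wc_memory() is a made-up name; in-tree, arch_add_memory() is only
reached through core mm paths such as add_memory_resource() and
memremap_pages().)

#include <linux/memory_hotplug.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: hot-add a range of IO memory with an explicit,
 * non-default protection. Before this patch the protection was hard-coded
 * to PAGE_KERNEL inside each arch_add_memory() implementation; now the
 * caller chooses it.
 */
static int example_add_wc_memory(int nid, u64 start, u64 size)
{
	struct mhp_restrictions restrictions = {};

	/* Ask for a write-combining mapping instead of the default WB. */
	return arch_add_memory(nid, start, size,
			       pgprot_writecombine(PAGE_KERNEL),
			       &restrictions);
}

On arches that accept only PAGE_KERNEL after this patch (ia64 and sh), such a
call would simply fail with -EINVAL rather than create a miscached mapping.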