From patchwork Wed Jul 29 03:34:19 2020
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11690299
From: Jia He
To: Dan Williams, Vishal Verma, Mike Rapoport, David Hildenbrand
Subject: [RFC PATCH 1/6] mm/memory_hotplug: remove redundant memory block size alignment check
Date: Wed, 29 Jul 2020 11:34:19 +0800
Message-Id: <20200729033424.2629-2-justin.he@arm.com>
In-Reply-To: <20200729033424.2629-1-justin.he@arm.com>
References: <20200729033424.2629-1-justin.he@arm.com>
The alignment check has already been done by check_hotplug_memory_range(), so the
redundant one in create_memory_block_devices() can be removed. The similar redundant
check in remove_memory_block_devices() is removed as well.

Signed-off-by: Jia He
---
 drivers/base/memory.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 2b09b68b9f78..4a1691664c6c 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -642,10 +642,6 @@ int create_memory_block_devices(unsigned long start, unsigned long size)
 	unsigned long block_id;
 	int ret = 0;
 
-	if (WARN_ON_ONCE(!IS_ALIGNED(start, memory_block_size_bytes()) ||
-			 !IS_ALIGNED(size, memory_block_size_bytes())))
-		return -EINVAL;
-
 	for (block_id = start_block_id; block_id != end_block_id; block_id++) {
 		ret = init_memory_block(&mem, block_id, MEM_OFFLINE);
 		if (ret)
@@ -678,10 +674,6 @@ void remove_memory_block_devices(unsigned long start, unsigned long size)
 	struct memory_block *mem;
 	unsigned long block_id;
 
-	if (WARN_ON_ONCE(!IS_ALIGNED(start, memory_block_size_bytes()) ||
-			 !IS_ALIGNED(size, memory_block_size_bytes())))
-		return;
-
 	for (block_id = start_block_id; block_id != end_block_id; block_id++) {
 		mem = find_memory_block_by_id(block_id);
 		if (WARN_ON_ONCE(!mem))
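The check that makes the removed WARN_ON_ONCE() redundant lives in
check_hotplug_memory_range(). As a rough illustration of what that alignment test
does, here is a standalone sketch (not kernel code; the 1 GiB block size is an
assumption matching arm64 with SECTION_SIZE_BITS == 30):

/* Standalone sketch of the block-size alignment test applied by
 * check_hotplug_memory_range(); values are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static bool range_is_block_aligned(uint64_t start, uint64_t size,
				   uint64_t block_size)
{
	return size && IS_ALIGNED(start, block_size) &&
	       IS_ALIGNED(size, block_size);
}

int main(void)
{
	uint64_t block = 1ULL << 30;	/* assumed 1 GiB memory block */

	/* a 1G range starting at 0x280000000 is block aligned ... */
	printf("%d\n", range_is_block_aligned(0x280000000ULL,
					      0x40000000ULL, block));
	/* ... while a pmem-style start such as 0x242400000 is not */
	printf("%d\n", range_is_block_aligned(0x242400000ULL,
					      0x40000000ULL, block));
	return 0;
}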
From patchwork Wed Jul 29 03:34:20 2020
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11690297
From: Jia He
To: Dan Williams, Vishal Verma, Mike Rapoport, David Hildenbrand
Subject: [RFC PATCH 2/6] resource: export find_next_iomem_res() helper
Date: Wed, 29 Jul 2020 11:34:20 +0800
Message-Id: <20200729033424.2629-3-justin.he@arm.com>
In-Reply-To: <20200729033424.2629-1-justin.he@arm.com>
References: <20200729033424.2629-1-justin.he@arm.com>

The helper finds the lowest iomem resource that covers part of [@start..@end].
It is useful when relaxing the alignment check for dax pmem kmem.
Signed-off-by: Jia He
---
 include/linux/ioport.h | 3 +++
 kernel/resource.c      | 3 ++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 6c2b06fe8beb..203fd16c9f45 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -247,6 +247,9 @@ extern struct resource * __request_region(struct resource *,
 extern void __release_region(struct resource *, resource_size_t,
 				resource_size_t);
 
+extern int find_next_iomem_res(resource_size_t start, resource_size_t end,
+			       unsigned long flags, unsigned long desc,
+			       bool first_lvl, struct resource *res);
 #ifdef CONFIG_MEMORY_HOTREMOVE
 extern int release_mem_region_adjustable(struct resource *, resource_size_t,
 				resource_size_t);
diff --git a/kernel/resource.c b/kernel/resource.c
index 841737bbda9e..57e6a6802a3d 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -338,7 +338,7 @@ EXPORT_SYMBOL(release_resource);
  * @first_lvl: walk only the first level children, if set
  * @res: return ptr, if resource found
  */
-static int find_next_iomem_res(resource_size_t start, resource_size_t end,
+int find_next_iomem_res(resource_size_t start, resource_size_t end,
 			unsigned long flags, unsigned long desc,
 			bool first_lvl, struct resource *res)
 {
@@ -391,6 +391,7 @@ static int find_next_iomem_res(resource_size_t start, resource_size_t end,
 	read_unlock(&resource_lock);
 	return p ? 0 : -ENODEV;
 }
+EXPORT_SYMBOL(find_next_iomem_res);
 
 static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
 				 unsigned long flags, unsigned long desc,
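For reference, the behaviour the callers later in this series rely on: the helper
returns 0 and fills @res with the lowest resource overlapping [@start..@end], or
-ENODEV when nothing overlaps (as visible in the hunk above). A rough userspace
model of that lookup, not kernel code, with resource values borrowed from the
/proc/iomem examples later in this series:

/* Standalone sketch of the "lowest overlapping resource" lookup that
 * find_next_iomem_res() provides; table contents are illustrative.
 */
#include <stdio.h>
#include <stdint.h>

struct res { uint64_t start, end; const char *name; };

/* returns index of the lowest overlapping resource, or -1 (cf. -ENODEV) */
static int find_next_res(const struct res *tbl, int n,
			 uint64_t start, uint64_t end)
{
	for (int i = 0; i < n; i++)
		if (tbl[i].start <= end && tbl[i].end >= start)
			return i;
	return -1;
}

int main(void)
{
	const struct res iomem[] = {
		{ 0x23c000000ULL, 0x23fffffffULL, "System RAM" },
		{ 0x240000000ULL, 0x33fdfffffULL, "Persistent Memory" },
	};
	int i = find_next_res(iomem, 2, 0x242400000ULL, 0x2bfffffffULL);

	if (i >= 0)
		printf("overlaps %s\n", iomem[i].name);	/* Persistent Memory */
	return 0;
}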
From patchwork Wed Jul 29 03:34:21 2020
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11690303
From: Jia He
To: Dan Williams, Vishal Verma, Mike Rapoport, David Hildenbrand
Subject: [RFC PATCH 3/6] mm/memory_hotplug: allow pmem kmem not to align with memory_block_size
Date: Wed, 29 Jul 2020 11:34:21 +0800
Message-Id: <20200729033424.2629-4-justin.he@arm.com>
In-Reply-To: <20200729033424.2629-1-justin.he@arm.com>
References: <20200729033424.2629-1-justin.he@arm.com>

When dax pmem is probed as a RAM device on arm64, kmem_start in dev_dax_kmem_probe()
previously had to be aligned with the 1G memory block size, which follows from
SECTION_SIZE_BITS (30). There is some metadata at the beginning/end of the iomem
space, e.g. namespace info and the nvdimm label:

240000000-33fdfffff : Persistent Memory
  240000000-2403fffff : namespace0.0
  280000000-2bfffffff : dax0.0
    280000000-2bfffffff : System RAM

Hence the whole kmem space is not aligned with memory_block_size for either the start
or the end address, and a big gap is left when kmem is added as memory blocks, which
wastes a lot of memory space.

This change relaxes the alignment check for dax pmem kmem in the paths that online
and offline memory blocks.
Signed-off-by: Jia He
---
 drivers/base/memory.c | 16 ++++++++++++++++
 mm/memory_hotplug.c   | 39 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 4a1691664c6c..3d2a94f3b1d9 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -334,6 +334,22 @@ static ssize_t valid_zones_show(struct device *dev,
 	 * online nodes otherwise the page_zone is not reliable
 	 */
 	if (mem->state == MEM_ONLINE) {
+#ifdef CONFIG_ZONE_DEVICE
+		struct resource res;
+		int ret;
+
+		/* adjust start_pfn for dax pmem kmem */
+		ret = find_next_iomem_res(start_pfn << PAGE_SHIFT,
+					  ((start_pfn + nr_pages) << PAGE_SHIFT) - 1,
+					  IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY,
+					  IORES_DESC_PERSISTENT_MEMORY,
+					  false, &res);
+		if (!ret && PFN_UP(res.start) > start_pfn) {
+			nr_pages -= PFN_UP(res.start) - start_pfn;
+			start_pfn = PFN_UP(res.start);
+		}
+#endif
+
 		/*
 		 * The block contains more than one zone can not be offlined.
 		 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a53103dc292b..25745f67b680 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -999,6 +999,20 @@ int try_online_node(int nid)
 
 static int check_hotplug_memory_range(u64 start, u64 size)
 {
+#ifdef CONFIG_ZONE_DEVICE
+	struct resource res;
+	int ret;
+
+	/* Allow pmem kmem not to align with block size */
+	ret = find_next_iomem_res(start, start + size - 1,
+				  IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY,
+				  IORES_DESC_PERSISTENT_MEMORY,
+				  false, &res);
+	if (!ret) {
+		return 0;
+	}
+#endif
+
 	/* memory range must be block size aligned */
 	if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
 	    !IS_ALIGNED(size, memory_block_size_bytes())) {
@@ -1481,19 +1495,42 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	mem_hotplug_begin();
 
 	/*
-	 * Don't allow to offline memory blocks that contain holes.
+	 * Don't allow to offline memory blocks that contain holes except
+	 * for pmem.
 	 * Consequently, memory blocks with holes can never get onlined
 	 * via the hotplug path - online_pages() - as hotplugged memory has
 	 * no holes. This way, we e.g., don't have to worry about marking
 	 * memory holes PG_reserved, don't need pfn_valid() checks, and can
 	 * avoid using walk_system_ram_range() later.
+	 * When dax pmem is used as RAM (kmem), holes at the beginning is
+	 * allowed.
 	 */
 	walk_system_ram_range(start_pfn, end_pfn - start_pfn, &nr_pages,
 			      count_system_ram_pages_cb);
 	if (nr_pages != end_pfn - start_pfn) {
+#ifdef CONFIG_ZONE_DEVICE
+		struct resource res;
+
+		/* Allow pmem kmem not to align with block size */
+		ret = find_next_iomem_res(start_pfn << PAGE_SHIFT,
+					  (end_pfn << PAGE_SHIFT) - 1,
+					  IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY,
+					  IORES_DESC_PERSISTENT_MEMORY,
+					  false, &res);
+		if (ret) {
+			ret = -EINVAL;
+			reason = "memory holes";
+			goto failed_removal;
+		}
+
+		/* adjust start_pfn for dax pmem kmem */
+		start_pfn = PFN_UP(res.start);
+		end_pfn = PFN_DOWN(res.end + 1);
+#else
 		ret = -EINVAL;
 		reason = "memory holes";
 		goto failed_removal;
+#endif
 	}
 
 	/* This makes hotplug much easier...and readable.
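The net effect of the check_hotplug_memory_range() hunk can be summarised as: a range
is accepted either when it is memory-block aligned, or when it overlaps a
persistent-memory resource. A standalone sketch of that decision (not kernel code;
the 1 GiB block size and the pmem bounds are assumptions taken from the example
above):

/* Sketch of the relaxed hotplug range check: accept block-aligned ranges
 * as before, and additionally accept ranges that overlap persistent
 * memory. Values are illustrative.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE	(1ULL << 30)	/* assumed memory_block_size_bytes() */

static bool overlaps_pmem(uint64_t start, uint64_t end)
{
	/* stand-in for find_next_iomem_res(..., IORES_DESC_PERSISTENT_MEMORY, ...) */
	const uint64_t pmem_start = 0x240000000ULL, pmem_end = 0x33fdfffffULL;

	return start <= pmem_end && end >= pmem_start;
}

static bool hotplug_range_ok(uint64_t start, uint64_t size)
{
	if (overlaps_pmem(start, start + size - 1))
		return true;		/* pmem kmem: alignment relaxed */
	return size && !(start & (BLOCK_SIZE - 1)) && !(size & (BLOCK_SIZE - 1));
}

int main(void)
{
	/* unaligned, but inside the pmem region -> accepted */
	printf("%d\n", hotplug_range_ok(0x242400000ULL, 0x7dc00000ULL));
	/* unaligned ordinary RAM -> still rejected */
	printf("%d\n", hotplug_range_ok(0x40200000ULL, 0x10000000ULL));
	return 0;
}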
From patchwork Wed Jul 29 03:34:22 2020
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11690305
From: Jia He
To: Dan Williams, Vishal Verma, Mike Rapoport, David Hildenbrand
Subject: [RFC PATCH 4/6] mm/page_alloc: adjust the start, end in dax pmem kmem case
Date: Wed, 29 Jul 2020 11:34:22 +0800
Message-Id: <20200729033424.2629-5-justin.he@arm.com>
In-Reply-To: <20200729033424.2629-1-justin.he@arm.com>
References: <20200729033424.2629-1-justin.he@arm.com>
There are 3 cases when onlining pages:
- normal RAM, which should be aligned with the memory block size
- persistent memory with ZONE_DEVICE
- persistent memory used as normal RAM (kmem) with ZONE_NORMAL; for this case,
  this patch adjusts start_pfn/end_pfn after finding the corresponding resource
  range.

Without this patch, the check in __init_single_page() fails when onlining such
memory, because those pages haven't been mapped in the MMU (they are not present
from the MMU's point of view).

Signed-off-by: Jia He
---
 mm/page_alloc.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e028b87ce294..13216ab3623f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5971,6 +5971,20 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		if (start_pfn == altmap->base_pfn)
 			start_pfn += altmap->reserve;
 		end_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+	} else {
+		struct resource res;
+		int ret;
+
+		/* adjust the start,end in dax pmem kmem case */
+		ret = find_next_iomem_res(start_pfn << PAGE_SHIFT,
+					  (end_pfn << PAGE_SHIFT) - 1,
+					  IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY,
+					  IORES_DESC_PERSISTENT_MEMORY,
+					  false, &res);
+		if (!ret) {
+			start_pfn = PFN_UP(res.start);
+			end_pfn = PFN_DOWN(res.end + 1);
+		}
 	}
 #endif
 
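The adjustment in the hunk above boils down to clamping [start_pfn, end_pfn) to the
page-aligned interior of the resource that was found. A small standalone sketch of
that arithmetic (not kernel code; PAGE_SHIFT == 12 is an assumption, arm64 also
supports 16K/64K pages, and the resource bounds are illustrative):

/* Sketch of the start/end pfn adjustment: PFN_UP rounds a byte address up
 * to the next page frame, PFN_DOWN rounds down, so the resulting
 * [start_pfn, end_pfn) covers only whole pages inside the resource.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define PFN_UP(x)	(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

int main(void)
{
	/* resource as returned by the lookup, values for illustration */
	uint64_t res_start = 0x242400000ULL, res_end = 0x2bfffffffULL;

	uint64_t start_pfn = PFN_UP(res_start);
	uint64_t end_pfn = PFN_DOWN(res_end + 1);

	printf("start_pfn=%#llx end_pfn=%#llx pages=%llu\n",
	       (unsigned long long)start_pfn, (unsigned long long)end_pfn,
	       (unsigned long long)(end_pfn - start_pfn));
	return 0;
}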
From patchwork Wed Jul 29 03:34:23 2020
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11690307
From: Jia He
To: Dan Williams, Vishal Verma, Mike Rapoport, David Hildenbrand
Subject: [RFC PATCH 5/6] device-dax: relax the memblock size alignment for kmem_start
Date: Wed, 29 Jul 2020 11:34:23 +0800
Message-Id: <20200729033424.2629-6-justin.he@arm.com>
In-Reply-To: <20200729033424.2629-1-justin.he@arm.com>
References: <20200729033424.2629-1-justin.he@arm.com>

Previously, kmem_start in dev_dax_kmem_probe() had to be aligned according to
SECTION_SIZE_BITS (30), i.e. the 1G memory block size on arm64.
Even with Dan Williams' sub-section patch series, it did not help when adding dax
pmem kmem to the memory blocks:

$ ndctl create-namespace -e namespace0.0 --mode=devdax --map=dev -s 2g -f -a 2M
$ echo dax0.0 > /sys/bus/dax/drivers/device_dax/unbind
$ echo dax0.0 > /sys/bus/dax/drivers/kmem/new_id
$ cat /proc/iomem
...
23c000000-23fffffff : System RAM
  23dd40000-23fecffff : reserved
  23fed0000-23fffffff : reserved
240000000-33fdfffff : Persistent Memory
  240000000-2403fffff : namespace0.0
  280000000-2bfffffff : dax0.0          <- boundaries are aligned with 1G
    280000000-2bfffffff : System RAM (kmem)

$ lsmem
RANGE                                  SIZE  STATE REMOVABLE BLOCK
0x0000000040000000-0x000000023fffffff    8G online       yes   1-8
0x0000000280000000-0x00000002bfffffff    1G online       yes    10

Memory block size:       1G
Total online memory:     9G
Total offline memory:    0B
...

Hence there is a big gap between 0x2403fffff and 0x280000000 due to the 1G alignment
on arm64. Moreover, only 1G of memory is added while 2G was requested. On x86, the
gap is relatively small due to SECTION_SIZE_BITS (27).

Besides decreasing SECTION_SIZE_BITS on arm64, we can relax the alignment when adding
the kmem. After this patch:

240000000-33fdfffff : Persistent Memory
  240000000-2421fffff : namespace0.0
  242400000-2bfffffff : dax0.0
    242400000-2bfffffff : System RAM (kmem)

$ lsmem
RANGE                                  SIZE  STATE REMOVABLE BLOCK
0x0000000040000000-0x00000002bfffffff   10G online       yes  1-10

Memory block size:       1G
Total online memory:    10G
Total offline memory:    0B

Note: blocks 9-10 are the newly hotplug-added ones.

This patch removes the tight alignment constraint of memory_block_size_bytes(), but
still keeps the constraint from online_pages_range().

Signed-off-by: Jia He
---
 drivers/dax/kmem.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index d77786dc0d92..849d0706dfe0 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -30,9 +30,20 @@ int dev_dax_kmem_probe(struct device *dev)
 	const char *new_res_name;
 	int numa_node;
 	int rc;
+	int order;
 
-	/* Hotplug starting at the beginning of the next block: */
-	kmem_start = ALIGN(res->start, memory_block_size_bytes());
+	/* kmem_start needn't be aligned with memory_block_size_bytes().
+	 * But given the constraint in online_pages_range(), adjust the
+	 * alignment of kmem_start and kmem_size
+	 */
+	kmem_size = resource_size(res);
+	order = min_t(int, MAX_ORDER - 1, get_order(kmem_size));
+	kmem_start = ALIGN(res->start, 1ul << (order + PAGE_SHIFT));
+	/* Adjust the size down to compensate for moving up kmem_start: */
+	kmem_size -= kmem_start - res->start;
+	/* Align the size down to cover only complete blocks: */
+	kmem_size &= ~((1ul << (order + PAGE_SHIFT)) - 1);
+	kmem_end = kmem_start + kmem_size;
 
 	/*
 	 * Ensure good NUMA information for the persistent memory.
@@ -48,13 +59,6 @@ int dev_dax_kmem_probe(struct device *dev)
 					  numa_node, res);
 	}
 
-	kmem_size = resource_size(res);
-	/* Adjust the size down to compensate for moving up kmem_start: */
-	kmem_size -= kmem_start - res->start;
-	/* Align the size down to cover only complete blocks: */
-	kmem_size &= ~(memory_block_size_bytes() - 1);
-	kmem_end = kmem_start + kmem_size;
-
 	new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
 	if (!new_res_name)
 		return -ENOMEM;
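The new start/size computation can be checked against the numbers in the commit
message: for a roughly 2G region, get_order() capped at MAX_ORDER - 1 gives a 4 MiB
alignment with 4 KiB pages, which is how dax0.0 ends up starting at 0x242400000 in
the example above. A standalone sketch of that arithmetic (not kernel code;
MAX_ORDER == 11, PAGE_SHIFT == 12 and the underlying resource bounds are
assumptions chosen to match the example):

/* Sketch of the relaxed kmem_start/kmem_size computation from the hunk
 * above: align to min(MAX_ORDER - 1, get_order(size)) pages instead of
 * the 1G memory block size. Constants and inputs are assumptions.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12			/* assumed 4 KiB pages */
#define MAX_ORDER	11			/* assumed default */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

static int get_order(uint64_t size)		/* smallest order covering size */
{
	int order = 0;

	while ((1ULL << (order + PAGE_SHIFT)) < size)
		order++;
	return order;
}

int main(void)
{
	/* assumed dax resource: starts right after the namespace metadata */
	uint64_t res_start = 0x242200000ULL;
	uint64_t kmem_size = 0x7de00000ULL;	/* roughly 2G */
	uint64_t kmem_start, kmem_end, align;
	int order;

	order = get_order(kmem_size);
	if (order > MAX_ORDER - 1)
		order = MAX_ORDER - 1;
	align = 1ULL << (order + PAGE_SHIFT);	/* 4 MiB here */

	kmem_start = ALIGN(res_start, align);
	kmem_size -= kmem_start - res_start;	/* compensate for moving up */
	kmem_size &= ~(align - 1);		/* whole aligned chunks only */
	kmem_end = kmem_start + kmem_size;

	/* prints 0x242400000-0x2bfffffff, matching the /proc/iomem example */
	printf("kmem: %#llx-%#llx\n", (unsigned long long)kmem_start,
	       (unsigned long long)(kmem_end - 1));
	return 0;
}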
From patchwork Wed Jul 29 03:34:24 2020
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 11690309
From: Jia He
To: Dan Williams, Vishal Verma, Mike Rapoport, David Hildenbrand
Subject: [RFC PATCH 6/6] arm64: fall back to vmemmap_populate_basepages if not aligned with PMD_SIZE
Date: Wed, 29 Jul 2020 11:34:24 +0800
Message-Id: <20200729033424.2629-7-justin.he@arm.com>
In-Reply-To: <20200729033424.2629-1-justin.he@arm.com>
References: <20200729033424.2629-1-justin.he@arm.com>

In the dax pmem kmem case (dax pmem used as a RAM device), the start address might
not be aligned with PMD_SIZE, e.g.:

240000000-33fdfffff : Persistent Memory
  240000000-2421fffff : namespace0.0
  242400000-2bfffffff : dax0.0
    242400000-2bfffffff : System RAM (kmem)

pfn_to_page(0x242400000) is fffffe0007e90000. Without this patch,
vmemmap_populate(fffffe0007e90000, ...) would incorrectly create a pmd mapping
covering [fffffe0007e00000, fffffe0008000000], which contains fffffe0007e90000.

This adds the check and falls back to vmemmap_populate_basepages() in that case.

Signed-off-by: Jia He
---
 arch/arm64/mm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d69feb2cfb84..3b21bd47e801 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1102,6 +1102,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	do {
 		next = pmd_addr_end(addr, end);
 
+		if (next - addr < PMD_SIZE) {
+			vmemmap_populate_basepages(start, next, node, altmap);
+			continue;
+		}
 		pgdp = vmemmap_pgd_populate(addr, node);
 		if (!pgdp)
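The condition added above triggers whenever the current chunk of the vmemmap range is
shorter than a full PMD, which is exactly what happens when the struct page array for
an unaligned kmem region starts in the middle of a 2 MiB-mapped area. A standalone
sketch of that address arithmetic (not kernel code; PMD_SIZE == 2 MiB assumes 4 KiB
pages, and the end address is illustrative):

/* Sketch of the added check: pmd_addr_end() caps the step at the next
 * 2 MiB boundary, so an address that is not PMD-aligned yields a chunk
 * shorter than PMD_SIZE and base pages are used instead of a block map.
 */
#include <stdio.h>
#include <stdint.h>

#define PMD_SIZE	(1ULL << 21)
#define PMD_MASK	(~(PMD_SIZE - 1))

static uint64_t pmd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + PMD_SIZE) & PMD_MASK;

	return boundary < end ? boundary : end;
}

int main(void)
{
	/* vmemmap addresses taken from the commit message above */
	uint64_t addr = 0xfffffe0007e90000ULL;
	uint64_t end  = 0xfffffe0008000000ULL;
	uint64_t next = pmd_addr_end(addr, end);

	if (next - addr < PMD_SIZE)
		printf("fall back to base pages for %#llx-%#llx\n",
		       (unsigned long long)addr, (unsigned long long)next);
	else
		printf("whole PMD can be block-mapped\n");
	return 0;
}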