Message ID | 20200729033424.2629-7-justin.he@arm.com (mailing list archive) |
---|---|
State | New, archived |
From | Jia He <justin.he@arm.com> |
To | Dan Williams <dan.j.williams@intel.com>, Vishal Verma <vishal.l.verma@intel.com>, Mike Rapoport <rppt@linux.ibm.com>, David Hildenbrand <david@redhat.com> |
Cc | Mark Rutland <mark.rutland@arm.com>, "Rafael J. Wysocki" <rafael@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>, Dave Hansen <dave.hansen@linux.intel.com>, linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>, Dave Jiang <dave.jiang@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>, Will Deacon <will@kernel.org>, Kaly Xin <Kaly.Xin@arm.com>, Kees Cook <keescook@chromium.org>, Anshuman Khandual <anshuman.khandual@arm.com>, Hsin-Yi Wang <hsinyi@chromium.org>, Jia He <justin.he@arm.com>, linux-arm-kernel@lists.infradead.org, Pankaj Gupta <pankaj.gupta.linux@gmail.com>, Steve Capper <steve.capper@arm.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, Wei Yang <richardw.yang@linux.intel.com>, Andrew Morton <akpm@linux-foundation.org>, Logan Gunthorpe <logang@deltatee.com> |
Subject | [RFC PATCH 6/6] arm64: fall back to vmemmap_populate_basepages if not aligned with PMD_SIZE |
Date | Wed, 29 Jul 2020 11:34:24 +0800 |
Series | decrease unnecessary gap due to pmem kmem alignment |
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d69feb2cfb84..3b21bd47e801 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1102,6 +1102,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	do {
 		next = pmd_addr_end(addr, end);
 
+		if (next - addr < PMD_SIZE) {
+			vmemmap_populate_basepages(start, next, node, altmap);
+			continue;
+		}
 		pgdp = vmemmap_pgd_populate(addr, node);
 		if (!pgdp)
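As a rough illustration (not part of the patch), the stand-alone user-space sketch below mimics the loop's per-PMD chunking and shows when the new `next - addr < PMD_SIZE` check would take the base-pages fallback. It assumes 4 KiB pages, so PMD_SIZE is 2 MiB; the start address is the vmemmap address quoted in the changelog below, while the end address is purely illustrative.

```c
/* Sketch only: simulate the per-PMD chunking done by the populate loop. */
#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE 0x200000UL              /* 2 MiB section, assuming 4 KiB pages */
#define PMD_MASK (~(PMD_SIZE - 1))

/* Mimic the kernel's pmd_addr_end(): next PMD boundary, clamped to end. */
static uint64_t pmd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t next = (addr + PMD_SIZE) & PMD_MASK;
	return next < end ? next : end;
}

int main(void)
{
	uint64_t start = 0xfffffe0007e90000UL;	/* unaligned vmemmap start (from changelog) */
	uint64_t end   = 0xfffffe0008400000UL;	/* illustrative end address */
	uint64_t addr  = start, next;

	do {
		next = pmd_addr_end(addr, end);
		printf("[%#llx, %#llx): %s\n",
		       (unsigned long long)addr, (unsigned long long)next,
		       next - addr < PMD_SIZE ?
				"vmemmap_populate_basepages() fallback" :
				"PMD section mapping");
	} while (addr = next, addr != end);

	return 0;
}
```

With these inputs, only the first, partial chunk falls back to base pages; every following 2 MiB-aligned chunk still gets a section mapping.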
In the dax pmem kmem case (dax pmem used as a RAM device), the start
address might not be aligned with PMD_SIZE, e.g.:

  240000000-33fdfffff : Persistent Memory
    240000000-2421fffff : namespace0.0
    242400000-2bfffffff : dax0.0
      242400000-2bfffffff : System RAM (kmem)

pfn_to_page(0x242400000) is fffffe0007e90000.

Without this patch, vmemmap_populate(fffffe0007e90000, ...) will
incorrectly create a pmd mapping [fffffe0007e00000, fffffe0008000000]
which contains fffffe0007e90000.

This adds the alignment check and falls back to
vmemmap_populate_basepages() for such ranges.

Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/mm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)
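For reference, the alignment arithmetic in the changelog can be reproduced with a tiny stand-alone program (again assuming 4 KiB pages, i.e. PMD_SIZE = 2 MiB): the quoted page address sits 0x90000 into a section, and the 2 MiB section covering it spans exactly [fffffe0007e00000, fffffe0008000000).

```c
/* Check the addresses quoted in the changelog above. */
#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE 0x200000UL              /* 2 MiB, assuming 4 KiB pages */
#define PMD_MASK (~(PMD_SIZE - 1))

int main(void)
{
	uint64_t page = 0xfffffe0007e90000UL;	/* pfn_to_page(...) value from the changelog */

	printf("offset into PMD: %#llx (aligned: %s)\n",
	       (unsigned long long)(page & ~PMD_MASK),
	       (page & ~PMD_MASK) == 0 ? "yes" : "no");
	printf("covering section: [%#llx, %#llx)\n",
	       (unsigned long long)(page & PMD_MASK),
	       (unsigned long long)((page & PMD_MASK) + PMD_SIZE));
	return 0;
}
```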