From patchwork Fri Jun 28 04:44:42 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11021403
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Tony Luck, Fenghua Yu,
    Dave Hansen, Andy Lutomirski, Andrew Morton,
    linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org,
    x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 1/2] mm/sparsemem: Add vmem_altmap support in
 vmemmap_populate_basepages()
Date: Fri, 28 Jun 2019 10:14:42 +0530
Message-Id: <1561697083-7329-2-git-send-email-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1561697083-7329-1-git-send-email-anshuman.khandual@arm.com>
References: <1561697083-7329-1-git-send-email-anshuman.khandual@arm.com>

Generic vmemmap_populate_basepages() is used across platforms for vmemmap,
either as the standard mapping method or as the fallback when huge page
mapping fails. On arm64 it is used for configs without
ARM64_SWAPPER_USES_SECTION_MAPS, i.e. ARM64_16K_PAGES and ARM64_64K_PAGES,
which cannot use huge pages because of alignment requirements. This prevents
those configs from allocating vmemmap mappings from device memory, as
vmemmap_populate_basepages() does not support vmem_altmap.

This adds the required vmem_altmap support. Each architecture should
evaluate and decide on enabling device based base page allocation when
appropriate; hence this keeps it disabled for all architectures to preserve
the existing semantics.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Andrew Morton
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Acked-by: Will Deacon
---
 arch/arm64/mm/mmu.c      |  2 +-
 arch/ia64/mm/discontig.c |  2 +-
 arch/x86/mm/init_64.c    |  4 ++--
 include/linux/mm.h       |  5 +++--
 mm/sparse-vmemmap.c      | 16 +++++++++++-----
 5 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 194c84e..39e18d1 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -982,7 +982,7 @@ static void remove_pagetable(unsigned long start, unsigned long end,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node);
+	return vmemmap_populate_basepages(start, end, node, NULL);
 }
 #else	/* !ARM64_SWAPPER_USES_SECTION_MAPS */
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index 05490dd..faefd7e 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -660,7 +660,7 @@ void arch_refresh_nodedata(int update_node, pg_data_t *update_pgdat)
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node);
+	return vmemmap_populate_basepages(start, end, node, NULL);
 }
 
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 8335ac6..c67ad5d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1509,7 +1509,7 @@ static int __meminit vmemmap_populate_hugepages(unsigned long start,
 			vmemmap_verify((pte_t *)pmd, node, addr, next);
 			continue;
 		}
-		if (vmemmap_populate_basepages(addr, next, node))
+		if (vmemmap_populate_basepages(addr, next, node, NULL))
 			return -ENOMEM;
 	}
 	return 0;
@@ -1527,7 +1527,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 			__func__);
 		err = -ENOMEM;
 	} else
-		err = vmemmap_populate_basepages(start, end, node);
+		err = vmemmap_populate_basepages(start, end, node, NULL);
 	if (!err)
 		sync_global_pgds(start, end - 1);
 	return err;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c6ae9eb..dda9bd4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2758,14 +2758,15 @@ pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
 p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
-pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node);
+pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
+		struct vmem_altmap *altmap);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node);
 void *altmap_alloc_block_buf(unsigned long size, struct vmem_altmap *altmap);
 void vmemmap_verify(pte_t *, int, unsigned long, unsigned long);
 int vmemmap_populate_basepages(unsigned long start, unsigned long end,
-		int node);
+		int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap);
 void vmemmap_populate_print_last(void);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 7fec057..d333b75 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -140,12 +140,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 		start, end - 1);
 }
 
-pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
+		struct vmem_altmap *altmap)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(*pte)) {
 		pte_t entry;
-		void *p = vmemmap_alloc_block_buf(PAGE_SIZE, node);
+		void *p;
+
+		if (altmap)
+			p = altmap_alloc_block_buf(PAGE_SIZE, altmap);
+		else
+			p = vmemmap_alloc_block_buf(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
 		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
@@ -213,8 +219,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 	return pgd;
 }
 
-int __meminit vmemmap_populate_basepages(unsigned long start,
-					 unsigned long end, int node)
+int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
+					 int node, struct vmem_altmap *altmap)
 {
 	unsigned long addr = start;
 	pgd_t *pgd;
@@ -236,7 +242,7 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
 		pmd = vmemmap_pmd_populate(pud, addr, node);
 		if (!pmd)
 			return -ENOMEM;
-		pte = vmemmap_pte_populate(pmd, addr, node);
+		pte = vmemmap_pte_populate(pmd, addr, node, altmap);
 		if (!pte)
 			return -ENOMEM;
 		vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);

From patchwork Fri Jun 28 04:44:43 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11021407
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Mark Rutland,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [RFC 2/2] arm64/mm: Enable device memory allocation and free for
 vmemmap mapping
Date: Fri, 28 Jun 2019 10:14:43 +0530
Message-Id: <1561697083-7329-3-git-send-email-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1561697083-7329-1-git-send-email-anshuman.khandual@arm.com>
References: <1561697083-7329-1-git-send-email-anshuman.khandual@arm.com>

This enables the vmemmap_populate() and vmemmap_free() functions to
incorporate struct vmem_altmap based device memory allocation and free
requests. With this, device memory with a specific altmap configuration can
be hot plugged and hot removed as ZONE_DEVICE memory on arm64 platforms.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/mm/mmu.c | 57 ++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 39e18d1..8867bbd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -735,15 +735,26 @@ int kern_addr_valid(unsigned long addr)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-static void free_hotplug_page_range(struct page *page, size_t size)
+static void free_hotplug_page_range(struct page *page, size_t size,
+				    struct vmem_altmap *altmap)
 {
-	WARN_ON(!page || PageReserved(page));
-	free_pages((unsigned long)page_address(page), get_order(size));
+	if (altmap) {
+		/*
+		 * vmemmap_populate() creates vmemmap mapping either at pte
+		 * or pmd level. Unmapping request at any other level would
+		 * be a problem.
+		 */
+		WARN_ON((size != PAGE_SIZE) && (size != PMD_SIZE));
+		vmem_altmap_free(altmap, size >> PAGE_SHIFT);
+	} else {
+		WARN_ON(!page || PageReserved(page));
+		free_pages((unsigned long)page_address(page), get_order(size));
+	}
 }
 
 static void free_hotplug_pgtable_page(struct page *page)
 {
-	free_hotplug_page_range(page, PAGE_SIZE);
+	free_hotplug_page_range(page, PAGE_SIZE, NULL);
 }
 
 static void free_pte_table(pmd_t *pmdp, unsigned long addr)
@@ -807,7 +818,8 @@ static void free_pud_table(pgd_t *pgdp, unsigned long addr)
 }
 
 static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
-				    unsigned long end, bool sparse_vmap)
+				    unsigned long end, bool sparse_vmap,
+				    struct vmem_altmap *altmap)
 {
 	struct page *page;
 	pte_t *ptep, pte;
@@ -823,12 +835,13 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
 		pte_clear(&init_mm, addr, ptep);
 		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 		if (sparse_vmap)
-			free_hotplug_page_range(page, PAGE_SIZE);
+			free_hotplug_page_range(page, PAGE_SIZE, altmap);
 	} while (addr += PAGE_SIZE, addr < end);
 }
 
 static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
-				    unsigned long end, bool sparse_vmap)
+				    unsigned long end, bool sparse_vmap,
+				    struct vmem_altmap *altmap)
 {
 	unsigned long next;
 	struct page *page;
@@ -847,16 +860,17 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
 			pmd_clear(pmdp);
 			flush_tlb_kernel_range(addr, next);
 			if (sparse_vmap)
-				free_hotplug_page_range(page, PMD_SIZE);
+				free_hotplug_page_range(page, PMD_SIZE, altmap);
 			continue;
 		}
 		WARN_ON(!pmd_table(pmd));
-		unmap_hotplug_pte_range(pmdp, addr, next, sparse_vmap);
+		unmap_hotplug_pte_range(pmdp, addr, next, sparse_vmap, altmap);
 	} while (addr = next, addr < end);
 }
 
 static void unmap_hotplug_pud_range(pgd_t *pgdp, unsigned long addr,
-				    unsigned long end, bool sparse_vmap)
+				    unsigned long end, bool sparse_vmap,
+				    struct vmem_altmap *altmap)
 {
 	unsigned long next;
 	struct page *page;
@@ -875,16 +889,16 @@ static void unmap_hotplug_pud_range(pgd_t *pgdp, unsigned long addr,
 			pud_clear(pudp);
 			flush_tlb_kernel_range(addr, next);
 			if (sparse_vmap)
-				free_hotplug_page_range(page, PUD_SIZE);
+				free_hotplug_page_range(page, PUD_SIZE, altmap);
 			continue;
 		}
 		WARN_ON(!pud_table(pud));
-		unmap_hotplug_pmd_range(pudp, addr, next, sparse_vmap);
+		unmap_hotplug_pmd_range(pudp, addr, next, sparse_vmap, altmap);
 	} while (addr = next, addr < end);
 }
 
 static void unmap_hotplug_range(unsigned long addr, unsigned long end,
-				bool sparse_vmap)
+				bool sparse_vmap, struct vmem_altmap *altmap)
 {
 	unsigned long next;
 	pgd_t *pgdp, pgd;
@@ -897,7 +911,7 @@ static void unmap_hotplug_range(unsigned long addr, unsigned long end,
 			continue;
 
 		WARN_ON(!pgd_present(pgd));
-		unmap_hotplug_pud_range(pgdp, addr, next, sparse_vmap);
+		unmap_hotplug_pud_range(pgdp, addr, next, sparse_vmap, altmap);
 	} while (addr = next, addr < end);
 }
 
@@ -970,9 +984,9 @@ static void free_empty_tables(unsigned long addr, unsigned long end)
 }
 
 static void remove_pagetable(unsigned long start, unsigned long end,
-			     bool sparse_vmap)
+			     bool sparse_vmap, struct vmem_altmap *altmap)
 {
-	unmap_hotplug_range(start, end, sparse_vmap);
+	unmap_hotplug_range(start, end, sparse_vmap, altmap);
 	free_empty_tables(start, end);
 }
 #endif
@@ -982,7 +996,7 @@ static void remove_pagetable(unsigned long start, unsigned long end,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node, NULL);
+	return vmemmap_populate_basepages(start, end, node, altmap);
 }
 #else	/* !ARM64_SWAPPER_USES_SECTION_MAPS */
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
@@ -1009,7 +1023,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		if (pmd_none(READ_ONCE(*pmdp))) {
 			void *p = NULL;
 
-			p = vmemmap_alloc_block_buf(PMD_SIZE, node);
+			if (altmap)
+				p = altmap_alloc_block_buf(PMD_SIZE, altmap);
+			else
+				p = vmemmap_alloc_block_buf(PMD_SIZE, node);
 			if (!p)
 				return -ENOMEM;
 
@@ -1043,7 +1060,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 	 * given vmemmap range being hot-removed. Just unmap and free the
 	 * range instead.
 	 */
-	unmap_hotplug_range(start, end, true);
+	unmap_hotplug_range(start, end, true, altmap);
 #endif
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */
@@ -1336,7 +1353,7 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 	unsigned long end = start + size;
 
 	WARN_ON(pgdir != init_mm.pgd);
-	remove_pagetable(start, end, false);
+	remove_pagetable(start, end, false, NULL);
 }
 
 int arch_add_memory(int nid, u64 start, u64 size,
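The core idea running through both patches is a two-way allocation choice: when a struct vmem_altmap is supplied, vmemmap backing pages are carved out of a pre-reserved device-memory region; otherwise the regular node-local allocator is used. As a standalone illustration, here is a minimal userspace model of that fallback. The struct layout, the bump-allocator pool, and the helper names below are simplified stand-ins invented for this sketch; they are not the kernel's real vmem_altmap or allocator implementations.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MODEL_PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

/* Simplified model: the real vmem_altmap tracks base_pfn, reserve, etc. */
struct vmem_altmap {
	unsigned char *base;	/* start of the reserved device memory  */
	unsigned long free;	/* pages still available in the pool    */
	unsigned long alloc;	/* pages handed out so far              */
};

/* Carve the next free page(s) out of the device-memory reservation. */
static void *altmap_alloc_block_buf(unsigned long size, struct vmem_altmap *altmap)
{
	unsigned long pages = size / MODEL_PAGE_SIZE;

	if (altmap->free < pages)
		return NULL;	/* reservation exhausted */
	altmap->free -= pages;
	altmap->alloc += pages;
	return altmap->base + (altmap->alloc - pages) * MODEL_PAGE_SIZE;
}

/* Stand-in for the regular (node-local) vmemmap block allocator. */
static void *vmemmap_alloc_block_buf(unsigned long size, int node)
{
	(void)node;		/* NUMA placement ignored in this model */
	return calloc(1, size);
}

/* The choice the patches add: prefer the altmap pool when one exists. */
static void *populate_one_page(int node, struct vmem_altmap *altmap)
{
	if (altmap)
		return altmap_alloc_block_buf(MODEL_PAGE_SIZE, altmap);
	return vmemmap_alloc_block_buf(MODEL_PAGE_SIZE, node);
}
```

Passing NULL as the altmap (as every existing caller does after patch 1) keeps the old behaviour, which is why the series preserves existing semantics until an architecture opts in.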