From patchwork Wed Feb 1 12:46:23 2017
From: AKASHI Takahiro
To: catalin.marinas@arm.com, will.deacon@arm.com
Subject: [PATCH v31 04/12] arm64: mm: allow for unmapping part of kernel mapping
Date: Wed, 1 Feb 2017 21:46:23 +0900
Message-Id: <20170201124630.6016-3-takahiro.akashi@linaro.org>
In-Reply-To: <20170201124218.5823-1-takahiro.akashi@linaro.org>
References: <20170201124218.5823-1-takahiro.akashi@linaro.org>
Cc: mark.rutland@arm.com, geoff@infradead.org, kexec@lists.infradead.org,
    AKASHI Takahiro, james.morse@arm.com, bauerman@linux.vnet.ibm.com,
    dyoung@redhat.com, linux-arm-kernel@lists.infradead.org

A new function, remove_pgd_mapping(), is added. It allows us to unmap a
specific portion of the kernel mapping later on, as long as that mapping was
created with create_pgd_mapping() and we do not try to free a sub-range that
falls inside a section mapping.

This function will be used in a later kdump patch to protect the memory set
aside for the crash dump kernel from being corrupted.

Signed-off-by: AKASHI Takahiro
---
 arch/arm64/include/asm/mmu.h |   2 +
 arch/arm64/mm/mmu.c          | 130 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 127 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 47619411f0ff..04eb240736d8 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -36,6 +36,8 @@ extern void init_mem_pgprot(void);
 extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       unsigned long virt, phys_addr_t size,
 			       pgprot_t prot, bool page_mappings_only);
+extern void remove_pgd_mapping(struct mm_struct *mm, unsigned long virt,
+			       phys_addr_t size);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 
 #endif
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 17243e43184e..9d3cea1db3b4 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -334,12 +334,10 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, false);
 }
 
-void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
-			       unsigned long virt, phys_addr_t size,
-			       pgprot_t prot, bool page_mappings_only)
+void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
+			unsigned long virt, phys_addr_t size,
+			pgprot_t prot, bool page_mappings_only)
 {
-	BUG_ON(mm == &init_mm);
-
 	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
 			     pgd_pgtable_alloc, page_mappings_only);
 }
@@ -357,6 +355,128 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 			     NULL, debug_pagealloc_enabled());
 }
 
+static bool pgtable_is_cleared(void *pgtable)
+{
+	unsigned long *desc, *end;
+
+	for (desc = pgtable, end = (unsigned long *)(pgtable + PAGE_SIZE);
+	     desc < end; desc++)
+		if (*desc)
+			return false;
+
+	return true;
+}
+
+static void clear_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end)
+{
+	pte_t *pte;
+
+	if (WARN_ON(pmd_bad(*pmd)))
+		return;
+
+	pte = pte_set_fixmap_offset(pmd, addr);
+
+	do {
+		pte_clear(NULL, NULL, pte);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+
+	pte_clear_fixmap();
+}
+
+static void clear_pmd_range(pud_t *pud, unsigned long addr, unsigned long end)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	if (WARN_ON(pud_bad(*pud)))
+		return;
+
+	pmd = pmd_set_fixmap_offset(pud, addr);
+
+	do {
+		next = pmd_addr_end(addr, end);
+
+		if (pmd_table(*pmd)) {
+			clear_pte_range(pmd, addr, next);
+			if (((next - addr) == PMD_SIZE) ||
+			    pgtable_is_cleared(__va(pmd_page_paddr(*pmd)))) {
+				__free_page(pmd_page(*pmd));
+				pmd_clear(pmd);
+			}
+		} else {
+			if (WARN_ON((next - addr) != PMD_SIZE))
+				return;
+			pmd_clear(pmd);
+		}
+	} while (pmd++, addr = next, addr != end);
+
+	pmd_clear_fixmap();
+}
+
+static void clear_pud_range(pgd_t *pgd, unsigned long addr, unsigned long end)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	if (WARN_ON(pgd_bad(*pgd)))
+		return;
+
+	pud = pud_set_fixmap_offset(pgd, addr);
+
+	do {
+		next = pud_addr_end(addr, end);
+
+		if (pud_table(*pud)) {
+			clear_pmd_range(pud, addr, next);
+#if CONFIG_PGTABLE_LEVELS > 2
+			if (((next - addr) == PUD_SIZE) ||
+			    pgtable_is_cleared(__va(pud_page_paddr(*pud)))) {
+				__free_page(pud_page(*pud));
+				pud_clear(pud);
+			}
+#endif
+		} else {
+#if CONFIG_PGTABLE_LEVELS > 2
+			if (WARN_ON((next - addr) != PUD_SIZE))
+				return;
+			pud_clear(pud);
+#endif
+		}
+	} while (pud++, addr = next, addr != end);
+
+	pud_clear_fixmap();
+}
+
+static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long virt,
+				 phys_addr_t size)
+{
+	unsigned long addr, length, end, next;
+	pgd_t *pgd = pgd_offset_raw(pgdir, virt);
+
+	addr = virt & PAGE_MASK;
+	length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
+
+	end = addr + length;
+	do {
+		next = pgd_addr_end(addr, end);
+		clear_pud_range(pgd, addr, next);
+
+#if CONFIG_PGTABLE_LEVELS > 3
+		if (((next - addr) == PGD_SIZE) ||
+		    pgtable_is_cleared(__va(pgd_page_paddr(*pgd)))) {
+			__free_page(pgd_page(*pgd));
+			pgd_clear(pgd);
+		}
+#endif
+	} while (pgd++, addr = next, addr != end);
+}
+
+void remove_pgd_mapping(struct mm_struct *mm, unsigned long virt,
+			phys_addr_t size)
+{
+	__remove_pgd_mapping(mm->pgd, virt, size);
+}
+
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
 	unsigned long kernel_start = __pa(_text);
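
For context on how this interface is expected to be consumed, below is a
minimal, illustrative sketch of the kind of caller the commit message refers
to. It assumes the generic arch_kexec_protect_crashkres() /
arch_kexec_unprotect_crashkres() hooks and the existing crashk_res resource;
the pgprot and page_mappings_only arguments chosen here are assumptions, not
necessarily what the actual kdump patch in this series does.

/*
 * Illustrative sketch only (not part of this patch): pairing
 * remove_pgd_mapping() with create_pgd_mapping() to keep the reserved
 * crash kernel range unmapped while the first kernel is running.
 * crashk_res, resource_size() and the arch_kexec_*_crashkres() hooks
 * already exist in the kernel; the argument choices are assumptions.
 */
#include <linux/ioport.h>
#include <linux/kexec.h>
#include <linux/mm.h>

#include <asm/memory.h>
#include <asm/mmu.h>
#include <asm/pgtable.h>

void arch_kexec_protect_crashkres(void)
{
	/*
	 * Unmap the reserved range so that a stray write faults instead
	 * of silently corrupting the crash dump kernel's memory.
	 */
	remove_pgd_mapping(&init_mm, __phys_to_virt(crashk_res.start),
			   resource_size(&crashk_res));
}

void arch_kexec_unprotect_crashkres(void)
{
	/*
	 * Map the range back before loading or booting the crash kernel.
	 * Requesting page mappings only avoids section mappings, so a
	 * later remove_pgd_mapping() of the same range never has to free
	 * a sub-set of a section mapping (the restriction noted in the
	 * commit message).
	 */
	create_pgd_mapping(&init_mm, crashk_res.start,
			   __phys_to_virt(crashk_res.start),
			   resource_size(&crashk_res), PAGE_KERNEL, true);
}

The key design point the sketch illustrates is that whoever maps the region
with create_pgd_mapping() at page granularity can later unmap any
page-aligned part of it with remove_pgd_mapping() without running into the
section-mapping restriction.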