From patchwork Thu Apr 2 08:40:52 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11470645
From: Russell Currey <ruscur@russell.cc>
To: linuxppc-dev@lists.ozlabs.org
Cc: Christophe Leroy, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net,
 npiggin@gmail.com, kernel-hardening@lists.openwall.com,
 Russell Currey <ruscur@russell.cc>
Subject: [PATCH v8 7/7] powerpc/32: use set_memory_attr()
Date: Thu, 2 Apr 2020 19:40:52 +1100
Message-Id: <20200402084053.188537-7-ruscur@russell.cc>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200402084053.188537-1-ruscur@russell.cc>
References: <20200402084053.188537-1-ruscur@russell.cc>

From: Christophe Leroy

Use set_memory_attr() instead of the PPC32-specific change_page_attr().

change_page_attr() checked that the address was not mapped by blocks and
handled highmem, but neither check is needed: the affected pages can't be
in highmem, and block mapping is already verified by the callers.

Signed-off-by: Christophe Leroy
[ruscur: rebase on powerpc/merge with Christophe's new patches]
Signed-off-by: Russell Currey
---
v8: Rebase on powerpc/merge

 arch/powerpc/mm/pgtable_32.c | 60 ++++++------------------------------
 1 file changed, 10 insertions(+), 50 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index f62de06e3d07..0d9d164fad26 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <linux/set_memory.h>
 
 #include
 #include
@@ -121,64 +122,20 @@ void __init mapin_ram(void)
 	}
 }
 
-static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
-{
-	pte_t *kpte;
-	unsigned long address;
-
-	BUG_ON(PageHighMem(page));
-	address = (unsigned long)page_address(page);
-
-	if (v_block_mapped(address))
-		return 0;
-	kpte = virt_to_kpte(address);
-	if (!kpte)
-		return -EINVAL;
-	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-
-	return 0;
-}
-
-/*
- * Change the page attributes of an page in the linear mapping.
- *
- * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
- */
-static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
-{
-	int i, err = 0;
-	unsigned long flags;
-	struct page *start = page;
-
-	local_irq_save(flags);
-	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr_noflush(page, prot);
-		if (err)
-			break;
-	}
-	wmb();
-	local_irq_restore(flags);
-	flush_tlb_kernel_range((unsigned long)page_address(start),
-			       (unsigned long)page_address(page));
-	return err;
-}
-
 void mark_initmem_nx(void)
 {
-	struct page *page = virt_to_page(_sinittext);
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
 	if (v_block_mapped((unsigned long)_stext + 1))
 		mmu_mark_initmem_nx();
 	else
-		change_page_attr(page, numpages, PAGE_KERNEL);
+		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
 void mark_rodata_ro(void)
 {
-	struct page *page;
 	unsigned long numpages;
 
 	if (v_block_mapped((unsigned long)_sinittext)) {
@@ -187,20 +144,18 @@ void mark_rodata_ro(void)
 		return;
 	}
 
-	page = virt_to_page(_stext);
 	numpages = PFN_UP((unsigned long)_etext) -
 		   PFN_DOWN((unsigned long)_stext);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
-	page = virt_to_page(__start_rodata);
 	numpages = PFN_UP((unsigned long)__init_begin) -
 		   PFN_DOWN((unsigned long)__start_rodata);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
 
 	// mark_initmem_nx() should have already run by now
 	ptdump_check_wx();
@@ -210,9 +165,14 @@ void mark_rodata_ro(void)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	unsigned long addr = (unsigned long)page_address(page);
+
 	if (PageHighMem(page))
 		return;
 
-	change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
+	if (enable)
+		set_memory_attr(addr, numpages, PAGE_KERNEL);
+	else
+		set_memory_attr(addr, numpages, __pgprot(0));
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
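
For review context: set_memory_attr() itself is introduced earlier in this series
(patch 6/7, "powerpc/mm: implement set_memory_attr()"), which is not quoted here.
The sketch below is a rough reconstruction of what such a helper plausibly looks
like, assuming it is built on the generic apply_to_existing_page_range() walker
over init_mm; the names set_page_attr and the exact locking are assumptions, so
check them against the actual patch rather than this sketch.

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>

/* Callback invoked for each PTE that already exists in the range. */
static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
{
	pgprot_t prot = __pgprot((unsigned long)data);

	spin_lock(&init_mm.page_table_lock);

	/* Rewrite the PTE with the new protection bits, then flush it. */
	set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	spin_unlock(&init_mm.page_table_lock);

	return 0;
}

int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
{
	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
	unsigned long size = numpages * PAGE_SIZE;

	if (numpages <= 0)
		return 0;

	/* Walks only existing PTEs; no page tables are allocated. */
	return apply_to_existing_page_range(&init_mm, start, size,
					    set_page_attr,
					    (void *)pgprot_val(prot));
}

With a helper of that shape, the calls added in this patch, e.g.
set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL), simply
rewrite the PTEs for the given range and flush the TLB, which is why the
highmem and block-mapping checks from change_page_attr() can be dropped at
this level and left to the callers.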