From patchwork Tue Mar 31 04:48:25 2020
X-Patchwork-Submitter: Russell Currey
X-Patchwork-Id: 11466723
From: Russell Currey
To: linuxppc-dev@lists.ozlabs.org
Cc: Christophe Leroy, mpe@ellerman.id.au, ajd@linux.ibm.com, dja@axtens.net,
	npiggin@gmail.com, kernel-hardening@lists.openwall.com
Subject: [PATCH v7 7/7] powerpc/32: use set_memory_attr()
Date: Tue, 31 Mar 2020 15:48:25 +1100
Message-Id: <20200331044825.591653-8-ruscur@russell.cc>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200331044825.591653-1-ruscur@russell.cc>
References: <20200331044825.591653-1-ruscur@russell.cc>
MIME-Version: 1.0

From: Christophe Leroy

Use set_memory_attr() instead of the PPC32-specific change_page_attr().

change_page_attr() was checking that the address was not mapped by blocks
and was handling highmem, but that's unneeded here: the affected pages
can't be in highmem, and block mapping verification is already done by
the callers.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/mm/pgtable_32.c | 95 ++++--------------------------------
 1 file changed, 10 insertions(+), 85 deletions(-)

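For reviewers reading this patch on its own: set_memory_attr() is added
by an earlier patch in this series. The conversions below only depend on
the behaviour sketched here; this is an approximation for review context
(helper name included), not the exact code from that patch:

	/*
	 * Sketch of the assumed contract: rewrite the init_mm PTEs for
	 * numpages pages starting at the page containing addr, applying
	 * prot, and flush the TLB for the range. Returns 0 on success.
	 */
	static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
	{
		pgprot_t prot = __pgprot((unsigned long)data);

		spin_lock(&init_mm.page_table_lock);
		set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		spin_unlock(&init_mm.page_table_lock);

		return 0;
	}

	int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
	{
		unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
		unsigned long size = (unsigned long)numpages * PAGE_SIZE;

		if (numpages <= 0)
			return 0;

		/* Visit each already-mapped PTE in [start, start + size). */
		return apply_to_existing_page_range(&init_mm, start, size,
						    set_page_attr,
						    (void *)pgprot_val(prot));
	}

Unlike the removed change_page_attr(), this works on virtual addresses
rather than struct pages, which is what lets the callers below drop
their virt_to_page() round trips.
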
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 5fb90edd865e..3d92eaf3ee2f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -23,6 +23,7 @@
 #include <linux/highmem.h>
 #include <linux/memblock.h>
 #include <linux/slab.h>
+#include <linux/set_memory.h>
 
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
@@ -121,99 +122,20 @@ void __init mapin_ram(void)
 	}
 }
 
-/* Scan the real Linux page tables and return a PTE pointer for
- * a virtual address in a context.
- * Returns true (1) if PTE was found, zero otherwise. The pointer to
- * the PTE pointer is unmodified if PTE is not found.
- */
-static int
-get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-	int retval = 0;
-
-	pgd = pgd_offset(mm, addr & PAGE_MASK);
-	if (pgd) {
-		pud = pud_offset(pgd, addr & PAGE_MASK);
-		if (pud && pud_present(*pud)) {
-			pmd = pmd_offset(pud, addr & PAGE_MASK);
-			if (pmd_present(*pmd)) {
-				pte = pte_offset_map(pmd, addr & PAGE_MASK);
-				if (pte) {
-					retval = 1;
-					*ptep = pte;
-					if (pmdp)
-						*pmdp = pmd;
-					/* XXX caller needs to do pte_unmap, yuck */
-				}
-			}
-		}
-	}
-	return(retval);
-}
-
-static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
-{
-	pte_t *kpte;
-	pmd_t *kpmd;
-	unsigned long address;
-
-	BUG_ON(PageHighMem(page));
-	address = (unsigned long)page_address(page);
-
-	if (v_block_mapped(address))
-		return 0;
-	if (!get_pteptr(&init_mm, address, &kpte, &kpmd))
-		return -EINVAL;
-	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-	pte_unmap(kpte);
-
-	return 0;
-}
-
-/*
- * Change the page attributes of an page in the linear mapping.
- *
- * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
- */
-static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
-{
-	int i, err = 0;
-	unsigned long flags;
-	struct page *start = page;
-
-	local_irq_save(flags);
-	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr_noflush(page, prot);
-		if (err)
-			break;
-	}
-	wmb();
-	local_irq_restore(flags);
-	flush_tlb_kernel_range((unsigned long)page_address(start),
-			       (unsigned long)page_address(page));
-	return err;
-}
-
 void mark_initmem_nx(void)
 {
-	struct page *page = virt_to_page(_sinittext);
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
 	if (v_block_mapped((unsigned long)_stext + 1))
 		mmu_mark_initmem_nx();
 	else
-		change_page_attr(page, numpages, PAGE_KERNEL);
+		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
 void mark_rodata_ro(void)
 {
-	struct page *page;
 	unsigned long numpages;
 
 	if (v_block_mapped((unsigned long)_sinittext)) {
@@ -222,20 +144,18 @@ void mark_rodata_ro(void)
 		return;
 	}
 
-	page = virt_to_page(_stext);
 	numpages = PFN_UP((unsigned long)_etext) -
 		   PFN_DOWN((unsigned long)_stext);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
 	 */
-	page = virt_to_page(__start_rodata);
 	numpages = PFN_UP((unsigned long)__init_begin) -
 		   PFN_DOWN((unsigned long)__start_rodata);
 
-	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
 
 	// mark_initmem_nx() should have already run by now
 	ptdump_check_wx();
@@ -245,9 +165,14 @@ void mark_rodata_ro(void)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	unsigned long addr = (unsigned long)page_address(page);
+
 	if (PageHighMem(page))
 		return;
 
-	change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
+	if (enable)
+		set_memory_attr(addr, numpages, PAGE_KERNEL);
+	else
+		set_memory_attr(addr, numpages, __pgprot(0));
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
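
As an aside on the numpages calculations these conversions keep:
PFN_UP(end) - PFN_DOWN(start) counts every page that the [start, end)
range touches, including a partial page at either boundary. With
illustrative addresses (hypothetical values, not from any real kernel
layout), assuming 4K pages:

	/*
	 * Hypothetical example: _stext = 0xc0000000, _etext = 0xc0801234.
	 * PFN_DOWN(0xc0000000) = 0xc0000000 >> 12           = 0xc0000
	 * PFN_UP(0xc0801234)   = (0xc0801234 + 0xfff) >> 12 = 0xc0802
	 * numpages             = 0xc0802 - 0xc0000          = 0x802
	 * The partial page holding the end of .text is included, since
	 * protections can only be applied to whole pages.
	 */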