From patchwork Wed Apr 29 02:05:27 2020
X-Patchwork-Submitter: "Christopher M. Riedl" <cmr@informatik.wtf>
X-Patchwork-Id: 11515813
From: "Christopher M. Riedl" <cmr@informatik.wtf>
To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com
Subject: [RFC PATCH v2 1/5] powerpc/mm: Introduce temporary mm
Date: Tue, 28 Apr 2020 21:05:27 -0500
Message-Id: <20200429020531.20684-2-cmr@informatik.wtf>
In-Reply-To: <20200429020531.20684-1-cmr@informatik.wtf>
References: <20200429020531.20684-1-cmr@informatik.wtf>

x86 supports the notion of a temporary mm which restricts access to
temporary PTEs to a single CPU. A temporary mm is useful for situations
where a CPU needs to perform sensitive operations (such as patching a
STRICT_KERNEL_RWX kernel) requiring temporary mappings without exposing
said mappings to other CPUs. A side benefit is that other CPU TLBs do
not need to be flushed when the temporary mm is torn down.

Mappings in the temporary mm can be set in the userspace portion of the
address-space. Interrupts must be disabled while the temporary mm is in
use. HW breakpoints, which may have been set by userspace as watchpoints
on addresses now within the temporary mm, are saved and disabled when
loading the temporary mm. The HW breakpoints are restored when unloading
the temporary mm. All HW breakpoints are indiscriminately disabled while
the temporary mm is in use.

Based on x86 implementation:

commit cefa929c034e
("x86/mm: Introduce temporary mm structs")

Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
---
 arch/powerpc/include/asm/debug.h       |  1 +
 arch/powerpc/include/asm/mmu_context.h | 54 ++++++++++++++++++++++++++
 arch/powerpc/kernel/process.c          |  5 +++
 3 files changed, 60 insertions(+)

diff --git a/arch/powerpc/include/asm/debug.h b/arch/powerpc/include/asm/debug.h
index 7756026b95ca..b945bc16c932 100644
--- a/arch/powerpc/include/asm/debug.h
+++ b/arch/powerpc/include/asm/debug.h
@@ -45,6 +45,7 @@ static inline int debugger_break_match(struct pt_regs *regs) { return 0; }
 static inline int debugger_fault_handler(struct pt_regs *regs) { return 0; }
 #endif
 
+void __get_breakpoint(struct arch_hw_breakpoint *brk);
 void __set_breakpoint(struct arch_hw_breakpoint *brk);
 bool ppc_breakpoint_available(void);
 #ifdef CONFIG_PPC_ADV_DEBUG_REGS
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 360367c579de..57a8695fe63f 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <asm/hw_breakpoint.h>
 
 /*
  * Most if the context management is out of line
@@ -270,5 +271,58 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
 	return 0;
 }
 
+struct temp_mm {
+	struct mm_struct *temp;
+	struct mm_struct *prev;
+	bool is_kernel_thread;
+	struct arch_hw_breakpoint brk;
+};
+
+static inline void init_temp_mm(struct temp_mm *temp_mm, struct mm_struct *mm)
+{
+	temp_mm->temp = mm;
+	temp_mm->prev = NULL;
+	temp_mm->is_kernel_thread = false;
+	memset(&temp_mm->brk, 0, sizeof(temp_mm->brk));
+}
+
+static inline void use_temporary_mm(struct temp_mm *temp_mm)
+{
+	lockdep_assert_irqs_disabled();
+
+	temp_mm->is_kernel_thread = current->mm == NULL;
+	if (temp_mm->is_kernel_thread)
+		temp_mm->prev = current->active_mm;
+	else
+		temp_mm->prev = current->mm;
+
+	/*
+	 * Hash requires a non-NULL current->mm to allocate a userspace address
+	 * when handling a page fault. Does not appear to hurt in Radix either.
+	 */
+	current->mm = temp_mm->temp;
+	switch_mm_irqs_off(NULL, temp_mm->temp, current);
+
+	if (ppc_breakpoint_available()) {
+		__get_breakpoint(&temp_mm->brk);
+		if (temp_mm->brk.type != 0)
+			hw_breakpoint_disable();
+	}
+}
+
+static inline void unuse_temporary_mm(struct temp_mm *temp_mm)
+{
+	lockdep_assert_irqs_disabled();
+
+	if (temp_mm->is_kernel_thread)
+		current->mm = NULL;
+	else
+		current->mm = temp_mm->prev;
+	switch_mm_irqs_off(NULL, temp_mm->prev, current);
+
+	if (ppc_breakpoint_available() && temp_mm->brk.type != 0)
+		__set_breakpoint(&temp_mm->brk);
+}
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_POWERPC_MMU_CONTEXT_H */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 9c21288f8645..ec4cf890d92c 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -800,6 +800,11 @@ static inline int set_breakpoint_8xx(struct arch_hw_breakpoint *brk)
 	return 0;
 }
 
+void __get_breakpoint(struct arch_hw_breakpoint *brk)
+{
+	memcpy(brk, this_cpu_ptr(&current_brk), sizeof(*brk));
+}
+
 void __set_breakpoint(struct arch_hw_breakpoint *brk)
 {
 	memcpy(this_cpu_ptr(&current_brk), brk, sizeof(*brk));
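
For reference, the intended call pattern for this interface looks
roughly like the following sketch (a hypothetical caller, not part of
this patch; the mm passed in and the mapping it accesses are assumed to
be set up elsewhere):

/* Hypothetical caller sketch -- not part of this patch. */
static void example_use_of_temp_mm(struct mm_struct *mm, void *addr)
{
	struct temp_mm temp_mm;
	unsigned long flags;

	init_temp_mm(&temp_mm, mm);

	/* The temporary mm may only be used with interrupts disabled. */
	local_irq_save(flags);
	use_temporary_mm(&temp_mm);

	/* ... access the mapping at addr; visible only to this CPU ... */

	unuse_temporary_mm(&temp_mm);
	local_irq_restore(flags);
}

Keeping the switch inside an IRQs-off region is what prevents the
mapping from becoming visible to another task via a context switch.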
Riedl" X-Patchwork-Id: 11515815 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D04AD912 for ; Wed, 29 Apr 2020 02:04:48 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 4432220731 for ; Wed, 29 Apr 2020 02:04:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4432220731 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=informatik.wtf Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18671-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 17819 invoked by uid 550); 29 Apr 2020 02:04:27 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 17690 invoked from network); 29 Apr 2020 02:04:26 -0000 From: "Christopher M. Riedl" To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com Subject: [RFC PATCH v2 2/5] powerpc/lib: Initialize a temporary mm for code patching Date: Tue, 28 Apr 2020 21:05:28 -0500 Message-Id: <20200429020531.20684-3-cmr@informatik.wtf> X-Mailer: git-send-email 2.26.1 In-Reply-To: <20200429020531.20684-1-cmr@informatik.wtf> References: <20200429020531.20684-1-cmr@informatik.wtf> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP When code patching a STRICT_KERNEL_RWX kernel the page containing the address to be patched is temporarily mapped with permissive memory protections. Currently, a per-cpu vmalloc patch area is used for this purpose. While the patch area is per-cpu, the temporary page mapping is inserted into the kernel page tables for the duration of the patching. The mapping is exposed to CPUs other than the patching CPU - this is undesirable from a hardening perspective. Use the `poking_init` init hook to prepare a temporary mm and patching address. Initialize the temporary mm by copying the init mm. Choose a randomized patching address inside the temporary mm userspace address portion. The next patch uses the temporary mm and patching address for code patching. Based on x86 implementation: commit 4fc19708b165 ("x86/alternatives: Initialize temporary mm for patching") Signed-off-by: Christopher M. Riedl --- arch/powerpc/lib/code-patching.c | 33 ++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 3345f039a876..259c19480a85 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -11,6 +11,8 @@ #include #include #include +#include +#include #include #include @@ -39,6 +41,37 @@ int raw_patch_instruction(unsigned int *addr, unsigned int instr) } #ifdef CONFIG_STRICT_KERNEL_RWX + +static struct mm_struct *patching_mm __ro_after_init; +static unsigned long patching_addr __ro_after_init; + +void __init poking_init(void) +{ + spinlock_t *ptl; /* for protecting pte table */ + pte_t *ptep; + + /* + * Some parts of the kernel (static keys for example) depend on + * successful code patching. Code patching under STRICT_KERNEL_RWX + * requires this setup - otherwise we cannot patch at all. We use + * BUG_ON() here and later since an early failure is preferred to + * buggy behavior and/or strange crashes later. 
+	 */
+	patching_mm = copy_init_mm();
+	BUG_ON(!patching_mm);
+
+	/*
+	 * In hash we cannot go above DEFAULT_MAP_WINDOW easily.
+	 * XXX: Do we want additional bits of entropy for radix?
+	 */
+	patching_addr = (get_random_long() & PAGE_MASK) %
+		(DEFAULT_MAP_WINDOW - PAGE_SIZE);
+
+	ptep = get_locked_pte(patching_mm, patching_addr, &ptl);
+	BUG_ON(!ptep);
+	pte_unmap_unlock(ptep, ptl);
+}
+
 static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
 
 static int text_area_cpu_up(unsigned int cpu)
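
A quick aside on the address selection above: masking with PAGE_MASK
and reducing modulo (DEFAULT_MAP_WINDOW - PAGE_SIZE) keeps the result
page-aligned, because the modulus is itself a multiple of the page
size. A userspace-style sketch with stand-in constants (the values
below are illustrative assumptions, not the kernel's):

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins; the real values come from kernel headers. */
#define EX_PAGE_SIZE	0x10000UL		/* 64K pages */
#define EX_PAGE_MASK	(~(EX_PAGE_SIZE - 1))
#define EX_MAP_WINDOW	0x0000100000000000UL	/* stand-in for DEFAULT_MAP_WINDOW */

int main(void)
{
	unsigned long r = ((unsigned long)rand() << 32) | (unsigned long)rand();
	unsigned long addr = (r & EX_PAGE_MASK) % (EX_MAP_WINDOW - EX_PAGE_SIZE);

	/* Both checks hold for any r: page alignment and range. */
	printf("addr=0x%lx aligned=%d in_range=%d\n", addr,
	       (addr & (EX_PAGE_SIZE - 1)) == 0,
	       addr < EX_MAP_WINDOW - EX_PAGE_SIZE);
	return 0;
}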
From patchwork Wed Apr 29 02:05:29 2020
X-Patchwork-Submitter: "Christopher M. Riedl" <cmr@informatik.wtf>
X-Patchwork-Id: 11515819
From: "Christopher M. Riedl" <cmr@informatik.wtf>
To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com
Subject: [RFC PATCH v2 3/5] powerpc/lib: Use a temporary mm for code patching
Date: Tue, 28 Apr 2020 21:05:29 -0500
Message-Id: <20200429020531.20684-4-cmr@informatik.wtf>
In-Reply-To: <20200429020531.20684-1-cmr@informatik.wtf>
References: <20200429020531.20684-1-cmr@informatik.wtf>

Currently, code patching a STRICT_KERNEL_RWX kernel exposes the
temporary mappings to other CPUs. These mappings should be kept local
to the CPU doing the patching. Use the pre-initialized temporary mm and
patching address for this purpose. Also add a check after patching to
ensure the patch succeeded.

Use the KUAP functions on non-BOOK3S_64 platforms since the temporary
mapping for patching uses a userspace address (to keep the mapping
local). On BOOK3S_64 platforms hash does not implement KUAP, and on
radix the use of PAGE_KERNEL sets EAA[0] for the PTE, which means the
AMR (KUAP) protection is ignored (see PowerISA v3.0b, Fig. 35).

Based on x86 implementation:

commit b3fd8e83ada0
("x86/alternatives: Use temporary mm for text poking")

Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
---
 arch/powerpc/lib/code-patching.c | 149 ++++++++++++-------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 259c19480a85..26f06cdb5d7e 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 			       unsigned int *patch_addr)
@@ -72,101 +73,58 @@ void __init poking_init(void)
 	pte_unmap_unlock(ptep, ptl);
 }
 
-static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
-
-static int text_area_cpu_up(unsigned int cpu)
-{
-	struct vm_struct *area;
-
-	area = get_vm_area(PAGE_SIZE, VM_ALLOC);
-	if (!area) {
-		WARN_ONCE(1, "Failed to create text area for cpu %d\n",
-			cpu);
-		return -1;
-	}
-	this_cpu_write(text_poke_area, area);
-
-	return 0;
-}
-
-static int text_area_cpu_down(unsigned int cpu)
-{
-	free_vm_area(this_cpu_read(text_poke_area));
-	return 0;
-}
-
-/*
- * Run as a late init call. This allows all the boot time patching to be done
- * simply by patching the code, and then we're called here prior to
- * mark_rodata_ro(), which happens after all init calls are run. Although
- * BUG_ON() is rude, in this case it should only happen if ENOMEM, and we judge
- * it as being preferable to a kernel that will crash later when someone tries
- * to use patch_instruction().
- */
-static int __init setup_text_poke_area(void)
-{
-	BUG_ON(!cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
-		"powerpc/text_poke:online", text_area_cpu_up,
-		text_area_cpu_down));
-
-	return 0;
-}
-late_initcall(setup_text_poke_area);
+struct patch_mapping {
+	spinlock_t *ptl; /* for protecting pte table */
+	pte_t *ptep;
+	struct temp_mm temp_mm;
+};
 
 /*
  * This can be called for kernel text or a module.
  */
-static int map_patch_area(void *addr, unsigned long text_poke_addr)
+static int map_patch(const void *addr, struct patch_mapping *patch_mapping)
 {
-	unsigned long pfn;
-	int err;
+	struct page *page;
+	pte_t pte;
+	pgprot_t pgprot;
 
 	if (is_vmalloc_addr(addr))
-		pfn = vmalloc_to_pfn(addr);
+		page = vmalloc_to_page(addr);
 	else
-		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
+		page = virt_to_page(addr);
 
-	err = map_kernel_page(text_poke_addr, (pfn << PAGE_SHIFT), PAGE_KERNEL);
+	if (radix_enabled())
+		pgprot = PAGE_KERNEL;
+	else
+		pgprot = PAGE_SHARED;
 
-	pr_devel("Mapped addr %lx with pfn %lx:%d\n", text_poke_addr, pfn, err);
-	if (err)
+	patch_mapping->ptep = get_locked_pte(patching_mm, patching_addr,
+					     &patch_mapping->ptl);
+	if (unlikely(!patch_mapping->ptep)) {
+		pr_warn("map patch: failed to allocate pte for patching\n");
 		return -1;
+	}
+
+	pte = mk_pte(page, pgprot);
+	if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+		pte = pte_mkdirty(pte);
+	set_pte_at(patching_mm, patching_addr, patch_mapping->ptep, pte);
+
+	init_temp_mm(&patch_mapping->temp_mm, patching_mm);
+	use_temporary_mm(&patch_mapping->temp_mm);
 
 	return 0;
 }
 
-static inline int unmap_patch_area(unsigned long addr)
+static void unmap_patch(struct patch_mapping *patch_mapping)
 {
-	pte_t *ptep;
-	pmd_t *pmdp;
-	pud_t *pudp;
-	pgd_t *pgdp;
-
-	pgdp = pgd_offset_k(addr);
-	if (unlikely(!pgdp))
-		return -EINVAL;
-
-	pudp = pud_offset(pgdp, addr);
-	if (unlikely(!pudp))
-		return -EINVAL;
-
-	pmdp = pmd_offset(pudp, addr);
-	if (unlikely(!pmdp))
-		return -EINVAL;
-
-	ptep = pte_offset_kernel(pmdp, addr);
-	if (unlikely(!ptep))
-		return -EINVAL;
+	/* In hash, pte_clear flushes the tlb */
+	pte_clear(patching_mm, patching_addr, patch_mapping->ptep);
+	unuse_temporary_mm(&patch_mapping->temp_mm);
 
-	pr_devel("clearing mm %p, pte %p, addr %lx\n", &init_mm, ptep, addr);
-
-	/*
-	 * In hash, pte_clear flushes the tlb, in radix, we have to
-	 */
-	pte_clear(&init_mm, addr, ptep);
-	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-
-	return 0;
+	/* In radix, we have to explicitly flush the tlb (no-op in hash) */
+	local_flush_tlb_mm(patching_mm);
+	pte_unmap_unlock(patch_mapping->ptep, patch_mapping->ptl);
 }
 
 static int do_patch_instruction(unsigned int *addr, unsigned int instr)
@@ -174,33 +132,36 @@ static int do_patch_instruction(unsigned int *addr, unsigned int instr)
 	int err;
 	unsigned int *patch_addr = NULL;
 	unsigned long flags;
-	unsigned long text_poke_addr;
-	unsigned long kaddr = (unsigned long)addr;
+	struct patch_mapping patch_mapping;
 
 	/*
-	 * During early early boot patch_instruction is called
-	 * when text_poke_area is not ready, but we still need
-	 * to allow patching. We just do the plain old patching
+	 * The patching_mm is initialized before calling mark_rodata_ro. Prior
+	 * to this, patch_instruction is called when we don't have (and don't
+	 * need) the patching_mm so just do plain old patching.
 	 */
-	if (!this_cpu_read(text_poke_area))
+	if (!patching_mm)
 		return raw_patch_instruction(addr, instr);
 
 	local_irq_save(flags);
 
-	text_poke_addr = (unsigned long)__this_cpu_read(text_poke_area)->addr;
-	if (map_patch_area(addr, text_poke_addr)) {
-		err = -1;
+	err = map_patch(addr, &patch_mapping);
+	if (err)
 		goto out;
-	}
 
-	patch_addr = (unsigned int *)(text_poke_addr) +
-			((kaddr & ~PAGE_MASK) / sizeof(unsigned int));
+	patch_addr = (unsigned int *)(patching_addr | offset_in_page(addr));
 
-	__patch_instruction(addr, instr, patch_addr);
+	if (!radix_enabled())
+		allow_write_to_user(patch_addr, sizeof(instr));
+	err = __patch_instruction(addr, instr, patch_addr);
+	if (!radix_enabled())
+		prevent_write_to_user(patch_addr, sizeof(instr));
 
-	err = unmap_patch_area(text_poke_addr);
-	if (err)
-		pr_warn("failed to unmap %lx\n", text_poke_addr);
+	unmap_patch(&patch_mapping);
+
+	/*
+	 * Something is wrong if what we just wrote doesn't match what we
+	 * think we just wrote.
+	 */
+	WARN_ON(*addr != instr);
 
 out:
 	local_irq_restore(flags);
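
For completeness, the public entry point remains patch_instruction();
a minimal hypothetical caller (not part of this series) might look like
the following, using the standard ppc nop encoding:

/* Hypothetical caller sketch -- not part of this series. */
#include <asm/code-patching.h>

static int example_nop_out(unsigned int *insn_addr)
{
	/* 0x60000000 is the ppc "nop" encoding (ori r0,r0,0). */
	return patch_instruction(insn_addr, 0x60000000);
}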
From patchwork Wed Apr 29 02:05:30 2020
X-Patchwork-Submitter: "Christopher M. Riedl" <cmr@informatik.wtf>
X-Patchwork-Id: 11515817
From: "Christopher M. Riedl" <cmr@informatik.wtf>
To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com
Subject: [RFC PATCH v2 4/5] powerpc/lib: Add LKDTM accessor for patching addr
Date: Tue, 28 Apr 2020 21:05:30 -0500
Message-Id: <20200429020531.20684-5-cmr@informatik.wtf>
In-Reply-To: <20200429020531.20684-1-cmr@informatik.wtf>
References: <20200429020531.20684-1-cmr@informatik.wtf>

When live patching a STRICT_RWX kernel, a mapping is installed at a
"patching address" with temporary write permissions. Provide an
LKDTM-only accessor function for this address in preparation for an
LKDTM test which attempts to "hijack" this mapping by writing to it
from another CPU.

Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
---
 arch/powerpc/lib/code-patching.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 26f06cdb5d7e..cfbdef90384e 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -46,6 +46,13 @@ int raw_patch_instruction(unsigned int *addr, unsigned int instr)
 static struct mm_struct *patching_mm __ro_after_init;
 static unsigned long patching_addr __ro_after_init;
 
+#ifdef CONFIG_LKDTM
+unsigned long read_cpu_patching_addr(unsigned int cpu)
+{
+	return patching_addr;
+}
+#endif
+
 void __init poking_init(void)
 {
 	spinlock_t *ptl; /* for protecting pte table */
From patchwork Wed Apr 29 02:05:31 2020
X-Patchwork-Submitter: "Christopher M. Riedl" <cmr@informatik.wtf>
X-Patchwork-Id: 11515821
From: "Christopher M. Riedl" <cmr@informatik.wtf>
To: linuxppc-dev@lists.ozlabs.org, kernel-hardening@lists.openwall.com
Subject: [RFC PATCH v2 5/5] powerpc: Add LKDTM test to hijack a patch mapping
Date: Tue, 28 Apr 2020 21:05:31 -0500
Message-Id: <20200429020531.20684-6-cmr@informatik.wtf>
In-Reply-To: <20200429020531.20684-1-cmr@informatik.wtf>
References: <20200429020531.20684-1-cmr@informatik.wtf>

When live patching with STRICT_KERNEL_RWX, the CPU doing the patching
must use a temporary mapping which allows for writing to kernel text.
During the entire window of time when this temporary mapping is in use,
another CPU could write to the same mapping and maliciously alter
kernel text. Implement an LKDTM test to attempt to exploit such an
opening when a CPU is patching under STRICT_KERNEL_RWX. The test is
only implemented on powerpc for now.

The LKDTM "hijack" test works as follows:

1. A CPU executes an infinite loop to patch an instruction.
   This is the "patching" CPU.
2. Another CPU attempts to write to the address of the temporary
   mapping used by the "patching" CPU. This other CPU is the
   "hijacker" CPU. The hijack either fails with a segfault or
   succeeds, in which case some kernel text is now overwritten.

How to run the test:

	mount -t debugfs none /sys/kernel/debug
	(echo HIJACK_PATCH > /sys/kernel/debug/provoke-crash/DIRECT)

Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
---
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 drivers/misc/lkdtm/perms.c | 99 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 101 insertions(+)

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index a5e344df9166..482e72f6a1e1 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -145,6 +145,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(WRITE_RO),
 	CRASHTYPE(WRITE_RO_AFTER_INIT),
 	CRASHTYPE(WRITE_KERN),
+	CRASHTYPE(HIJACK_PATCH),
 	CRASHTYPE(REFCOUNT_INC_OVERFLOW),
 	CRASHTYPE(REFCOUNT_ADD_OVERFLOW),
 	CRASHTYPE(REFCOUNT_INC_NOT_ZERO_OVERFLOW),
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 601a2156a0d4..bfcf3542370d 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -62,6 +62,7 @@ void lkdtm_EXEC_USERSPACE(void);
 void lkdtm_EXEC_NULL(void);
 void lkdtm_ACCESS_USERSPACE(void);
 void lkdtm_ACCESS_NULL(void);
+void lkdtm_HIJACK_PATCH(void);
 
 /* lkdtm_refcount.c */
 void lkdtm_REFCOUNT_INC_OVERFLOW(void);
diff --git a/drivers/misc/lkdtm/perms.c b/drivers/misc/lkdtm/perms.c
index 62f76d506f04..547ce16e03e5 100644
--- a/drivers/misc/lkdtm/perms.c
+++ b/drivers/misc/lkdtm/perms.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 
 /* Whether or not to fill the target memory area with do_nothing(). */
@@ -213,6 +214,104 @@ void lkdtm_ACCESS_NULL(void)
 	*ptr = tmp;
 }
 
+#if defined(CONFIG_PPC) && defined(CONFIG_STRICT_KERNEL_RWX)
+#include
+
+extern unsigned long read_cpu_patching_addr(unsigned int cpu);
+
+static unsigned int * const patch_site = (unsigned int * const)&do_nothing;
+
+static int lkdtm_patching_cpu(void *data)
+{
+	int err = 0;
+
+	pr_info("starting patching_cpu=%d\n", smp_processor_id());
+	do {
+		err = patch_instruction(patch_site, 0xdeadbeef);
+	} while (*READ_ONCE(patch_site) == 0xdeadbeef &&
+			!err && !kthread_should_stop());
+
+	if (err)
+		pr_warn("patch_instruction returned error: %d\n", err);
+
+	set_current_state(TASK_INTERRUPTIBLE);
+	while (!kthread_should_stop()) {
+		schedule();
+		set_current_state(TASK_INTERRUPTIBLE);
+	}
+
+	return err;
+}
+
+void lkdtm_HIJACK_PATCH(void)
+{
+	struct task_struct *patching_kthrd;
+	int patching_cpu, hijacker_cpu, original_insn, attempts;
+	unsigned long addr;
+	bool hijacked;
+
+	if (num_online_cpus() < 2) {
+		pr_warn("need at least two cpus\n");
+		return;
+	}
+
+	original_insn = *READ_ONCE(patch_site);
+
+	hijacker_cpu = smp_processor_id();
+	patching_cpu = cpumask_any_but(cpu_online_mask, hijacker_cpu);
+
+	patching_kthrd = kthread_create_on_node(&lkdtm_patching_cpu, NULL,
+						cpu_to_node(patching_cpu),
+						"lkdtm_patching_cpu");
+	kthread_bind(patching_kthrd, patching_cpu);
+	wake_up_process(patching_kthrd);
+
+	addr = offset_in_page(patch_site) | read_cpu_patching_addr(patching_cpu);
+
+	pr_info("starting hijacker_cpu=%d\n", hijacker_cpu);
+	for (attempts = 0; attempts < 100000; ++attempts) {
+		/* Use __put_user to catch faults without an Oops */
+		hijacked = !__put_user(0xbad00bad, (unsigned int *)addr);
+
+		if (hijacked) {
+			if (kthread_stop(patching_kthrd))
+				goto out;
+			break;
+		}
+	}
+	pr_info("hijack attempts: %d\n", attempts);
+
+	if (hijacked) {
+		if (*READ_ONCE(patch_site) == 0xbad00bad)
+			pr_err("overwrote kernel text\n");
+		/*
+		 * There are window conditions where the hijacker cpu manages to
+		 * write to the patch site but the site gets overwritten again by
+		 * the patching cpu. We still consider that a "successful" hijack
+		 * since the hijacker cpu did not fault on the write.
+		 */
+		pr_err("FAIL: wrote to another cpu's patching area\n");
+	} else {
+		kthread_stop(patching_kthrd);
+	}
+
+out:
+	/* Restore the original insn for any future lkdtm tests */
+	patch_instruction(patch_site, original_insn);
+}
+
+#else
+
+void lkdtm_HIJACK_PATCH(void)
+{
+	if (!IS_ENABLED(CONFIG_PPC))
+		pr_err("XFAIL: this test is powerpc-only\n");
+	if (!IS_ENABLED(CONFIG_STRICT_KERNEL_RWX))
+		pr_err("XFAIL: this test requires CONFIG_STRICT_KERNEL_RWX\n");
+}
+
+#endif /* CONFIG_PPC && CONFIG_STRICT_KERNEL_RWX */
+
 void __init lkdtm_perms_init(void)
 {
 	/* Make sure we can write to __ro_after_init values during __init */
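
To actually exercise the test, the kernel needs strict RWX, debugfs,
and LKDTM enabled. A minimal config fragment (an assumed typical setup,
not something mandated by this series; option names as in mainline
Kconfig):

	CONFIG_STRICT_KERNEL_RWX=y
	CONFIG_DEBUG_FS=y
	CONFIG_LKDTM=y

Then trigger the test with the debugfs commands shown in the commit
message above and check dmesg for the pass/FAIL output.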