From patchwork Tue Nov 13 13:07:25 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10681489
From: Nadav Amit
To: Ingo Molnar
CC: "H. Peter Anvin", Thomas Gleixner, Borislav Petkov, Dave Hansen,
    Peter Zijlstra, Nadav Amit, Kees Cook, Dave Hansen
Subject: [PATCH v5 05/10] x86/alternative: initializing temporary mm for patching
Date: Tue, 13 Nov 2018 05:07:25 -0800
Message-ID: <20181113130730.44844-6-namit@vmware.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181113130730.44844-1-namit@vmware.com>
References: <20181113130730.44844-1-namit@vmware.com>

To prevent improper use of the PTEs that are used for text patching, we want
to use a temporary mm struct. We initialize it by copying the init mm. The
address that will be used for patching is taken from the lower address range
that is usually used for task memory. Doing so removes the need to frequently
synchronize the temporary mm (e.g., when BPF programs are installed), since
different PGDs are used for task memory. Finally, we randomize the address of
the patching PTEs to harden against exploits that abuse them.
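For context, here is a rough sketch (not part of this patch) of how a patching
path is expected to use poking_mm and poking_addr. The temporary-mm switching
helpers come from the "temporary mm struct" patch earlier in this series, the
function name below is made up for illustration, and TLB flushing and error
handling are omitted:

#include <linux/mm.h>
#include <asm/mmu_context.h>

/* Illustrative only; the real patching path lives in __text_poke(). */
static void poke_sketch(void *addr, const void *opcode, size_t len)
{
	temp_mm_state_t prev;
	spinlock_t *ptl;
	pte_t *ptep;

	/* Map the target page at the randomized poking address. */
	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
	set_pte_at(poking_mm, poking_addr, ptep,
		   mk_pte(virt_to_page(addr), PAGE_KERNEL));

	/* Switch to the temporary mm so the writable alias is visible. */
	prev = use_temporary_mm(poking_mm);
	memcpy((void *)(poking_addr + offset_in_page(addr)), opcode, len);
	unuse_temporary_mm(prev);

	/* Tear the mapping down again (local TLB flush omitted here). */
	pte_clear(poking_mm, poking_addr, ptep);
	pte_unmap_unlock(ptep, ptl);
}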
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Suggested-by: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/pgtable.h       |  3 +++
 arch/x86/include/asm/text-patching.h |  2 ++
 arch/x86/kernel/alternative.c        |  3 +++
 arch/x86/mm/init_64.c                | 35 ++++++++++++++++++++++++++++
 init/main.c                          |  3 +++
 5 files changed, 46 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 40616e805292..e8f630d9a2ed 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1021,6 +1021,9 @@ static inline void __meminit init_trampoline_default(void)
 	/* Default trampoline pgd value */
 	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
+
+void __init poking_init(void);
+
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
 # else
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index 5a2600370763..e5716ef9a721 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -39,5 +39,7 @@ extern int text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
+extern __ro_after_init struct mm_struct *poking_mm;
+extern __ro_after_init unsigned long poking_addr;
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ebe9210dc92e..d3ae5c26e5a0 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,6 +678,9 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
 static int __text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5fab264948c2..356a28569a19 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -53,6 +53,7 @@
 #include <asm/init.h>
 #include <asm/uv/uv.h>
 #include <asm/setup.h>
+#include <asm/text-patching.h>
 
 #include "mm_internal.h"
 
@@ -1388,6 +1389,40 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+/*
+ * Initialize an mm_struct to be used during poking and a pointer to be used
+ * during patching.
+ */
+void __init poking_init(void)
+{
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	poking_mm = copy_init_mm();
+	BUG_ON(!poking_mm);
+
+	/*
+	 * Randomize the poking address, but make sure that the following page
+	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+	 * and adjust the address if the PMD ends after the first one.
+	 */
+	poking_addr = TASK_UNMAPPED_BASE +
+		(kaslr_get_random_long("Poking") & PAGE_MASK) %
+		(TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
+
+	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
+		poking_addr += PAGE_SIZE;
+
+	/*
+	 * We need to trigger the allocation of the page-tables that will be
+	 * needed for poking now. Later, poking may be performed in an atomic
+	 * section, which might cause allocation to fail.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+	BUG_ON(!ptep);
+	pte_unmap_unlock(ptep, ptl);
+}
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
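As a sanity check of the PMD-boundary adjustment in poking_init() above, the
arithmetic can be exercised in a stand-alone user-space program (illustrative
only, not part of the patch; constants mirror x86-64's 4 KiB pages and 2 MiB
PMDs, and the randomized offset is replaced by a fixed worst-case value):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PMD_SIZE	(512 * PAGE_SIZE)	/* 2 MiB */
#define PMD_MASK	(~(PMD_SIZE - 1))

int main(void)
{
	/* Worst case: the second poking page would start a new PMD. */
	unsigned long poking_addr = PMD_SIZE - PAGE_SIZE;

	/* Same adjustment as in poking_init(). */
	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
		poking_addr += PAGE_SIZE;

	/* Both pages now report the same PMD base. */
	printf("page 0 PMD: %#lx\n", poking_addr & PMD_MASK);
	printf("page 1 PMD: %#lx\n", (poking_addr + PAGE_SIZE) & PMD_MASK);
	return 0;
}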
diff --git a/init/main.c b/init/main.c
index ee147103ba1b..a461150adfb1 100644
--- a/init/main.c
+++ b/init/main.c
@@ -497,6 +497,8 @@ void __init __weak thread_stack_cache_init(void)
 
 void __init __weak mem_encrypt_init(void) { }
 
+void __init __weak poking_init(void) { }
+
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
 
@@ -731,6 +733,7 @@ asmlinkage __visible void __init start_kernel(void)
 	taskstats_init_early();
 	delayacct_init();
 
+	poking_init();
 	check_bugs();
 
 	acpi_subsystem_init();
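The init/main.c hunk relies on weak linkage: the empty __weak poking_init()
keeps architectures without an implementation linking, while the strong x86
definition in init_64.c overrides it. A minimal stand-alone illustration of
the same pattern (hypothetical file names; __attribute__((weak)) is what the
kernel's __weak annotation expands to):

/* weak_default.c: the generic fallback, as in init/main.c. */
#include <stdio.h>

__attribute__((weak)) void poking_init(void)
{
	printf("weak default: nothing to do\n");
}

int main(void)
{
	poking_init();	/* runs the override when one is linked in */
	return 0;
}

/* arch_override.c: the arch-specific strong definition, as in init_64.c. */
#include <stdio.h>

void poking_init(void)
{
	printf("arch override: setting up the poking mm\n");
}

Building only weak_default.c prints the weak default; linking both files
("cc weak_default.c arch_override.c") makes the strong definition win, just
as the x86 poking_init() wins over the weak stub in init/main.c.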