From patchwork Wed Dec 5 01:33:55 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713319
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook, Masami Hiramatsu
Subject: [PATCH v7 01/14] Fix "x86/alternatives: Lockdep-enforce text_mutex in text_poke*()"
Date: Tue, 4 Dec 2018 17:33:55 -0800
Message-ID: <20181205013408.47725-2-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

text_mutex is currently expected to be held before text_poke() is
called, but kgdb does not take the mutex, and instead *supposedly*
ensures the lock is not taken and will not be acquired by any other core
while text_poke() is running. The reason for the "supposedly" qualifier
is that it is not entirely clear that this would be the case if
gdb_do_roundup is zero.

This patch creates two wrapper functions, text_poke() and
text_poke_kgdb(), which do and do not run the lockdep assertion,
respectively.

While we are at it, change the return code of text_poke() to something
meaningful. One day, callers might actually respect it and the existing
BUG_ON() when patching fails could be removed. For kgdb, the return
value can actually be used.
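For illustration only, a minimal sketch (not part of the patch) of the
locking contract the split establishes; the caller and the names
example_patch_site/new_insn are hypothetical:

    #include <linux/mutex.h>
    #include <asm/text-patching.h>

    extern struct mutex text_mutex;

    /* Ordinary callers must hold text_mutex; text_poke() asserts this. */
    static void example_patch(void *example_patch_site, const void *new_insn,
                              size_t len)
    {
            mutex_lock(&text_mutex);
            text_poke(example_patch_site, new_insn, len);
            mutex_unlock(&text_mutex);
    }

    /*
     * kgdb, in contrast, runs while all other cores are stopped, so it
     * calls text_poke_kgdb(), which performs the same poking without
     * the lockdep assertion.
     */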
Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Dave Hansen
Cc: Masami Hiramatsu
Fixes: 9222f606506c ("x86/alternatives: Lockdep-enforce text_mutex in text_poke*()")
Suggested-by: Peter Zijlstra
Acked-by: Jiri Kosina
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/text-patching.h |  1 +
 arch/x86/kernel/alternative.c        | 52 ++++++++++++++++++++--------
 arch/x86/kernel/kgdb.c               | 11 +++---
 3 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index e85ff65c43c3..f8fc8e86cf01 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -35,6 +35,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
  * inconsistent instruction while you patch.
  */
 extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index ebeac487a20c..c6a3a10a2fd5 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,18 +678,7 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
-/**
- * text_poke - Update instructions on a live kernel
- * @addr: address to modify
- * @opcode: source of the copy
- * @len: length to copy
- *
- * Only atomic text poke/set should be allowed when not doing early patching.
- * It means the size must be writable atomically and the address must be aligned
- * in a way that permits an atomic write. It also makes sure we fit on a single
- * page.
- */
-void *text_poke(void *addr, const void *opcode, size_t len)
+static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
 	char *vaddr;
@@ -702,8 +691,6 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	 */
 	BUG_ON(!after_bootmem);
 
-	lockdep_assert_held(&text_mutex);
-
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
 		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
@@ -732,6 +719,43 @@ void *text_poke(void *addr, const void *opcode, size_t len)
 	return addr;
 }
 
+/**
+ * text_poke - Update instructions on a live kernel
+ * @addr: address to modify
+ * @opcode: source of the copy
+ * @len: length to copy
+ *
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write. It also makes sure we fit on a single
+ * page.
+ */
+void *text_poke(void *addr, const void *opcode, size_t len)
+{
+	lockdep_assert_held(&text_mutex);
+
+	return __text_poke(addr, opcode, len);
+}
+
+/**
+ * text_poke_kgdb - Update instructions on a live kernel by kgdb
+ * @addr: address to modify
+ * @opcode: source of the copy
+ * @len: length to copy
+ *
+ * Only atomic text poke/set should be allowed when not doing early patching.
+ * It means the size must be writable atomically and the address must be aligned
+ * in a way that permits an atomic write. It also makes sure we fit on a single
+ * page.
+ *
+ * Context: should only be used by kgdb, which ensures no other core is running,
+ *          despite the fact it does not hold the text_mutex.
+ */
+void *text_poke_kgdb(void *addr, const void *opcode, size_t len)
+{
+	return __text_poke(addr, opcode, len);
+}
+
 static void do_sync_core(void *info)
 {
 	sync_core();
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 8e36f249646e..2636ca8394bd 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -763,13 +763,13 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 	if (!err)
 		return err;
 	/*
-	 * It is safe to call text_poke() because normal kernel execution
+	 * It is safe to call text_poke_kgdb() because normal kernel execution
 	 * is stopped on all cores, so long as the text_mutex is not locked.
 	 */
 	if (mutex_is_locked(&text_mutex))
 		return -EBUSY;
-	text_poke((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
-		  BREAK_INSTR_SIZE);
+	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
+		       BREAK_INSTR_SIZE);
 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
 	if (err)
 		return err;
@@ -788,12 +788,13 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
 	/*
-	 * It is safe to call text_poke() because normal kernel execution
+	 * It is safe to call text_poke_kgdb() because normal kernel execution
 	 * is stopped on all cores, so long as the text_mutex is not locked.
 	 */
 	if (mutex_is_locked(&text_mutex))
 		goto knl_write;
-	text_poke((void *)bpt->bpt_addr, bpt->saved_instr, BREAK_INSTR_SIZE);
+	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
+		       BREAK_INSTR_SIZE);
 	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
 	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
 		goto knl_write;

From patchwork Wed Dec 5 01:33:56 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713369
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook, Masami Hiramatsu
Subject: [PATCH v7 02/14] x86/jump_label: Use text_poke_early() during early init
Date: Tue, 4 Dec 2018 17:33:56 -0800
Message-ID: <20181205013408.47725-3-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

There is no apparent reason not to use text_poke_early() during early
init, as long as we do not patch code that might be on the stack (i.e.,
code to whose middle we would return after patching). This is the case
for jump labels, so use text_poke_early() for them.

This is required by the following patches, which set up a temporary mm
for patching; that mm is initialized only after some static keys have
already been enabled/disabled.

Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Dave Hansen
Cc: Masami Hiramatsu
Cc: Peter Zijlstra
Signed-off-by: Nadav Amit
---
 arch/x86/kernel/jump_label.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index aac0c1f7e354..ed5fe274a7d8 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,7 +52,12 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	if (early_boot_irqs_disabled)
+	/*
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
+	 */
+	if (system_state == SYSTEM_BOOTING)
 		poker = text_poke_early;
 
 	if (type == JUMP_LABEL_JMP) {

From patchwork Wed Dec 5 01:33:57 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713371
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook
Subject: [PATCH v7 03/14] x86/mm: temporary mm struct
Date: Tue, 4 Dec 2018 17:33:57 -0800
Message-ID: <20181205013408.47725-4-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

From: Andy Lutomirski

Sometimes we want to set temporary page-table entries (PTEs) on one of
the cores, without allowing other cores to use - even speculatively -
these mappings. There are two benefits to doing so:

(1) Security: if sensitive PTEs are set, a temporary mm prevents their
    use by other cores. This hardens security, as it prevents exploiting
    a dangling pointer to overwrite sensitive data using the sensitive
    PTE.

(2) Avoiding TLB shootdowns: the PTEs do not need to be flushed from
    remote page-tables.

To do so, a temporary mm_struct can be used. Mappings which are private
to this mm can be set in the userspace part of the address-space. During
the whole time in which the temporary mm is loaded, interrupts must be
disabled.
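To make the contract concrete, here is a sketch of the intended calling
pattern (assuming a prepared mm, here called patching_mm; the helpers
themselves are introduced in the diff below):

    temporary_mm_state_t prev;
    unsigned long flags;

    local_irq_save(flags);                   /* IRQs must stay disabled */
    prev = use_temporary_mm(patching_mm);    /* private mappings now active */

    /* ... perform the sensitive write through the temporary mapping ... */

    unuse_temporary_mm(prev);                /* restore the previous mm */
    local_irq_restore(flags);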
The first use-case for temporary PTEs, which will follow, is for poking
the kernel text.

[ Commit message was written by Nadav ]

Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Signed-off-by: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/mmu_context.h | 32 ++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 0ca50611e8ce..0141b7fa6d01 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -338,4 +338,36 @@ static inline unsigned long __get_current_cr3_fast(void)
 	return cr3;
 }
 
+typedef struct {
+	struct mm_struct *prev;
+} temporary_mm_state_t;
+
+/*
+ * Using a temporary mm allows to set temporary mappings that are not accessible
+ * by other cores. Such mappings are needed to perform sensitive memory writes
+ * that override the kernel memory protections (e.g., W^X), without exposing the
+ * temporary page-table mappings that are required for these write operations to
+ * other cores.
+ *
+ * Context: The temporary mm needs to be used exclusively by a single core. To
+ *          harden security IRQs must be disabled while the temporary mm is
+ *          loaded, thereby preventing interrupt handler bugs from override the
+ *          kernel memory protection.
+ */
+static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temporary_mm_state_t state;
+
+	lockdep_assert_irqs_disabled();
+	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+	return state;
+}
+
+static inline void unuse_temporary_mm(temporary_mm_state_t prev)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev.prev, current);
+}
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */

From patchwork Wed Dec 5 01:33:58 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713361
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook
Subject: [PATCH v7 04/14] fork: provide a function for copying init_mm
Date: Tue, 4 Dec 2018 17:33:58 -0800
Message-ID: <20181205013408.47725-5-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

Provide a function for copying init_mm. This function will later be
used for setting up a temporary mm.

Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Signed-off-by: Nadav Amit
---
 include/linux/sched/task.h |  1 +
 kernel/fork.c              | 24 ++++++++++++++++++------
 2 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 108ede99e533..ac0a675678f5 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -74,6 +74,7 @@ extern void exit_itimers(struct signal_struct *);
 extern long _do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *, unsigned long);
 extern long do_fork(unsigned long, unsigned long, unsigned long, int __user *, int __user *);
 struct task_struct *fork_idle(int);
+struct mm_struct *copy_init_mm(void);
 extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 07cddff89c7b..01d3f5b39363 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1297,13 +1297,20 @@ void mm_release(struct task_struct *tsk, struct mm_struct *mm)
 		complete_vfork_done(tsk);
 }
 
-/*
- * Allocate a new mm structure and copy contents from the
- * mm structure of the passed in task structure.
+/**
+ * dup_mm() - duplicates an existing mm structure
+ * @tsk: the task_struct with which the new mm will be associated.
+ * @oldmm: the mm to duplicate.
+ *
+ * Allocates a new mm structure and copy contents from the provided
+ * @oldmm structure.
+ *
+ * Return: the duplicated mm or NULL on failure.
  */
-static struct mm_struct *dup_mm(struct task_struct *tsk)
+static struct mm_struct *dup_mm(struct task_struct *tsk,
+				struct mm_struct *oldmm)
 {
-	struct mm_struct *mm, *oldmm = current->mm;
+	struct mm_struct *mm;
 	int err;
 
 	mm = allocate_mm();
@@ -1370,7 +1377,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
 	}
 
 	retval = -ENOMEM;
-	mm = dup_mm(tsk);
+	mm = dup_mm(tsk, current->mm);
 	if (!mm)
 		goto fail_nomem;
 
@@ -2176,6 +2183,11 @@ struct task_struct *fork_idle(int cpu)
 	return task;
 }
 
+struct mm_struct *copy_init_mm(void)
+{
+	return dup_mm(NULL, &init_mm);
+}
+
 /*
  * Ok, this is the main fork-routine.
 *

From patchwork Wed Dec 5 01:33:59 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713357
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook
Subject: [PATCH v7 05/14] x86/alternative: initializing temporary mm for patching
Date: Tue, 4 Dec 2018 17:33:59 -0800
Message-ID: <20181205013408.47725-6-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

To prevent improper use of the PTEs that are used for text patching, we
want to use a temporary mm struct. We initialize it by copying the init
mm.

The address that will be used for patching is taken from the lower area
that is usually used for the task memory. Doing so prevents the need to
frequently synchronize the temporary mm (e.g., when BPF programs are
installed), since different PGDs are used for the task memory.

Finally, we randomize the address of the PTEs to harden against exploits
that use these PTEs.
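The PMD-boundary adjustment performed by poking_init() in the diff below
can be checked in isolation. The following stand-alone program is
illustrative only; the constants and the example address are assumptions
for x86-64 with 4 KiB pages and 2 MiB PMDs. It shows how an address whose
second page would start a new PMD is shifted up by one page so that both
poking pages share a PMD:

    #include <stdio.h>

    #define PAGE_SIZE 0x1000UL
    #define PMD_SIZE  0x200000UL            /* 2 MiB */
    #define PMD_MASK  (~(PMD_SIZE - 1))

    int main(void)
    {
            /* Example candidate whose second page starts a new PMD */
            unsigned long poking_addr = 0x7f00001ff000UL;

            /* Same test as poking_init(): is addr + PAGE_SIZE PMD-aligned? */
            if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
                    poking_addr += PAGE_SIZE;

            /* Both pages now fall inside the PMD at 0x7f0000200000 */
            printf("poking_addr = %#lx\n", poking_addr);
            return 0;
    }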
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Suggested-by: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/pgtable.h       |  3 +++
 arch/x86/include/asm/text-patching.h |  2 ++
 arch/x86/kernel/alternative.c        |  3 +++
 arch/x86/mm/init_64.c                | 35 ++++++++++++++++++++++++++++
 init/main.c                          |  3 +++
 5 files changed, 46 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 40616e805292..e8f630d9a2ed 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1021,6 +1021,9 @@ static inline void __meminit init_trampoline_default(void)
 	/* Default trampoline pgd value */
 	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
+
+void __init poking_init(void);
+
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
 # else
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index f8fc8e86cf01..a75eed841eed 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -39,5 +39,7 @@ extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
 extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
+extern __ro_after_init struct mm_struct *poking_mm;
+extern __ro_after_init unsigned long poking_addr;
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index c6a3a10a2fd5..57fdde308bb6 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -678,6 +678,9 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
 	unsigned long flags;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5fab264948c2..356a28569a19 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -53,6 +53,7 @@
 #include
 #include
 #include
+#include
 
 #include "mm_internal.h"
 
@@ -1388,6 +1389,40 @@ unsigned long memory_block_size_bytes(void)
 	return memory_block_size_probed;
 }
 
+/*
+ * Initialize an mm_struct to be used during poking and a pointer to be used
+ * during patching.
+ */
+void __init poking_init(void)
+{
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	poking_mm = copy_init_mm();
+	BUG_ON(!poking_mm);
+
+	/*
+	 * Randomize the poking address, but make sure that the following page
+	 * will be mapped at the same PMD. We need 2 pages, so find space for 3,
+	 * and adjust the address if the PMD ends after the first one.
+	 */
+	poking_addr = TASK_UNMAPPED_BASE +
+		      (kaslr_get_random_long("Poking") & PAGE_MASK) %
+		      (TASK_SIZE - TASK_UNMAPPED_BASE - 3 * PAGE_SIZE);
+
+	if (((poking_addr + PAGE_SIZE) & ~PMD_MASK) == 0)
+		poking_addr += PAGE_SIZE;
+
+	/*
+	 * We need to trigger the allocation of the page-tables that will be
+	 * needed for poking now. Later, poking may be performed in an atomic
+	 * section, which might cause allocation to fail.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+	BUG_ON(!ptep);
+	pte_unmap_unlock(ptep, ptl);
+}
+
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Initialise the sparsemem vmemmap using huge-pages at the PMD level.
diff --git a/init/main.c b/init/main.c
index ee147103ba1b..a461150adfb1 100644
--- a/init/main.c
+++ b/init/main.c
@@ -497,6 +497,8 @@ void __init __weak thread_stack_cache_init(void)
 
 void __init __weak mem_encrypt_init(void) { }
 
+void __init __weak poking_init(void) { }
+
 bool initcall_debug;
 core_param(initcall_debug, initcall_debug, bool, 0644);
 
@@ -731,6 +733,7 @@ asmlinkage __visible void __init start_kernel(void)
 	taskstats_init_early();
 	delayacct_init();
 
+	poking_init();
 	check_bugs();
 
 	acpi_subsystem_init();

From patchwork Wed Dec 5 01:34:00 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713347
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook, Masami Hiramatsu
Subject: [PATCH v7 06/14] x86/alternative: use temporary mm for text poking
Date: Tue, 4 Dec 2018 17:34:00 -0800
Message-ID: <20181205013408.47725-7-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

text_poke() can potentially compromise security, as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores accidentally or maliciously, if an attacker gains the
ability to write onto kernel memory.
Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time-window in which the code is writable is not limited if
the fixmap PTEs - maliciously or accidentally - are cached in the TLB.

To address these potential security hazards, use a temporary mm for
patching the code.

Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map two pages, even when a single one is
sufficient. So try to be more conservative, and do not map more than
needed.

Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Masami Hiramatsu
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 109 +++++++++++++++++++++++++++-------
 arch/x86/xen/mmu_pv.c         |   2 -
 3 files changed, 87 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 57fdde308bb6..8fc4685f3117 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -683,41 +684,105 @@ __ro_after_init unsigned long poking_addr;
 
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	temporary_mm_state_t prev;
+	struct page *pages[2] = {NULL};
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	BUG_ON(!pages[0]);
+	BUG_ON(!pages[0] || (cross_page_boundary && !pages[1]));
+
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * This must not fail; preallocated in poking_init().
+	 */
+	VM_BUG_ON(!ptep);
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
+	 * as it also flushes the corresponding "user" address spaces, which
+	 * does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try to
+	 * batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address-spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 *
+	 * There is a slight anomaly here: the PTE is a supervisor-only and
+	 * (potentially) global and we use __flush_tlb_one_user() but this
+	 * should be fine.
	 */
+	__flush_tlb_one_user(poking_addr);
+	if (cross_page_boundary) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+	/*
+	 * If the text doesn't match what we just wrote; something is
+	 * fundamentally screwy, there's nothing we can really do about that.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
+
 	local_irq_restore(flags);
 	return addr;
 }
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index a5d7ed125337..e9b474c396d2 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2318,8 +2318,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 #elif defined(CONFIG_X86_VSYSCALL_EMULATION)
 	case VSYSCALL_PAGE:
 #endif
-	case FIX_TEXT_POKE0:
-	case FIX_TEXT_POKE1:
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;

From patchwork Wed Dec 5 01:34:01 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713355
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra
Subject: [PATCH v7 07/14] x86/kgdb: avoid redundant comparison of patched code
Date: Tue, 4 Dec 2018 17:34:01 -0800
Message-ID: <20181205013408.47725-8-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

text_poke() already ensures that the written value is the correct one
and fails if that is not the case. There is no need for an additional
comparison. Remove it.
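For reference, the verification that makes this comparison redundant is
the check added at the end of __text_poke() by patch 6 of this series:

    /*
     * If the text doesn't match what we just wrote; something is
     * fundamentally screwy, there's nothing we can really do about that.
     */
    BUG_ON(memcmp(addr, opcode, len));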
Signed-off-by: Nadav Amit
---
 arch/x86/kernel/kgdb.c | 14 +------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 2636ca8394bd..86484510bf54 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -751,7 +751,6 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 {
 	int err;
-	char opc[BREAK_INSTR_SIZE];
 
 	bpt->type = BP_BREAKPOINT;
 	err = probe_kernel_read(bpt->saved_instr, (char *)bpt->bpt_addr,
@@ -770,11 +769,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 		return -EBUSY;
 	text_poke_kgdb((void *)bpt->bpt_addr, arch_kgdb_ops.gdb_bpt_instr,
 		       BREAK_INSTR_SIZE);
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err)
-		return err;
-	if (memcmp(opc, arch_kgdb_ops.gdb_bpt_instr, BREAK_INSTR_SIZE))
-		return -EINVAL;
 	bpt->type = BP_POKE_BREAKPOINT;
 
 	return err;
@@ -782,9 +776,6 @@ int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
 
 int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 {
-	int err;
-	char opc[BREAK_INSTR_SIZE];
-
 	if (bpt->type != BP_POKE_BREAKPOINT)
 		goto knl_write;
 	/*
@@ -795,10 +786,7 @@ int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
 		goto knl_write;
 	text_poke_kgdb((void *)bpt->bpt_addr, bpt->saved_instr,
 		       BREAK_INSTR_SIZE);
-	err = probe_kernel_read(opc, (char *)bpt->bpt_addr, BREAK_INSTR_SIZE);
-	if (err || memcmp(opc, bpt->saved_instr, BREAK_INSTR_SIZE))
-		goto knl_write;
-	return err;
+	return 0;
 
 knl_write:
 	return probe_kernel_write((char *)bpt->bpt_addr,

From patchwork Wed Dec 5 01:34:02 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713349
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Steven Rostedt
Subject: [PATCH v7 08/14] x86/ftrace: Use text_poke_*() infrastructure
Date: Tue, 4 Dec 2018 17:34:02 -0800
Message-ID: <20181205013408.47725-9-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

A following patch is going to make module-allocated memory
non-executable. This requires modifying ftrace so that it makes the
memory executable again after it is configured.

In addition, this patch makes ftrace use the general text-poking
infrastructure instead of ftrace's homegrown text patching. This
provides the advantage of slightly "safer" code patching, and avoids
races with module removal or other mechanisms that patch the kernel
code.

Cc: Steven Rostedt
Signed-off-by: Nadav Amit
---
 arch/x86/kernel/ftrace.c | 74 +++++++++++++---------------------------
 1 file changed, 23 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 01ebcb6f263e..f05a0f9e2837 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -29,23 +30,10 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 
-int ftrace_arch_code_modify_prepare(void)
-{
-	set_kernel_text_rw();
-	set_all_modules_text_rw();
-	return 0;
-}
-
-int ftrace_arch_code_modify_post_process(void)
-{
-	set_all_modules_text_ro();
-	set_kernel_text_ro();
-	return 0;
-}
-
 union ftrace_code_union {
 	char code[MCOUNT_INSN_SIZE];
 	struct {
@@ -79,22 +67,6 @@ within(unsigned long addr, unsigned long start, unsigned long end)
 	return addr >= start && addr < end;
 }
 
-static unsigned long text_ip_addr(unsigned long ip)
-{
-	/*
-	 * On x86_64, kernel text mappings are mapped read-only, so we use
-	 * the kernel identity mapping instead of the kernel text mapping
-	 * to modify the kernel text.
-	 *
-	 * For 32bit kernels, these mappings are same and we can use
-	 * kernel identity mapping to modify code.
-	 */
-	if (within(ip, (unsigned long)_text, (unsigned long)_etext))
-		ip = (unsigned long)__va(__pa_symbol(ip));
-
-	return ip;
-}
-
 static const unsigned char *ftrace_nop_replace(void)
 {
 	return ideal_nops[NOP_ATOMIC5];
@@ -124,13 +96,8 @@ ftrace_modify_code_direct(unsigned long ip, unsigned const char *old_code,
 	if (memcmp(replaced, old_code, MCOUNT_INSN_SIZE) != 0)
 		return -EINVAL;
 
-	ip = text_ip_addr(ip);
-
 	/* replace the text with the new text */
-	if (probe_kernel_write((void *)ip, new_code, MCOUNT_INSN_SIZE))
-		return -EPERM;
-
-	sync_core();
+	text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE);
 
 	return 0;
 }
@@ -302,10 +269,7 @@ int ftrace_int3_handler(struct pt_regs *regs)
 
 static int ftrace_write(unsigned long ip, const char *val, int size)
 {
-	ip = text_ip_addr(ip);
-
-	if (probe_kernel_write((void *)ip, val, size))
-		return -EPERM;
+	text_poke((void *)ip, val, size);
 
 	return 0;
 }
@@ -653,9 +617,11 @@ void arch_ftrace_update_code(int command)
 {
 	/* See comment above by declaration of modifying_ftrace_code */
 	atomic_inc(&modifying_ftrace_code);
+	mutex_lock(&text_mutex);
 
 	ftrace_modify_all_code(command);
 
+	mutex_unlock(&text_mutex);
 	atomic_dec(&modifying_ftrace_code);
 }
 
@@ -741,6 +707,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	unsigned long end_offset;
 	unsigned long op_offset;
 	unsigned long offset;
+	unsigned long npages;
 	unsigned long size;
 	unsigned long ip;
 	unsigned long *ptr;
@@ -748,7 +715,6 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	/* 48 8b 15 is movq (%rip), %rdx */
 	unsigned const char op_ref[] = { 0x48, 0x8b, 0x15 };
 	union ftrace_op_code_union op_ptr;
-	int ret;
 
 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
 		start_offset = (unsigned long)ftrace_regs_caller;
@@ -772,19 +738,16 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 		return 0;
 
 	*tramp_size = size + MCOUNT_INSN_SIZE + sizeof(void *);
+	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
 
 	/* Copy ftrace_caller onto the trampoline memory */
-	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
-	if (WARN_ON(ret < 0)) {
-		tramp_free(trampoline, *tramp_size);
-		return 0;
-	}
+	text_poke_early(trampoline, (void *)start_offset, size);
 
 	ip = (unsigned long)trampoline + size;
 
 	/* The trampoline ends with a jmp to ftrace_epilogue */
 	jmp = ftrace_jmp_replace(ip, (unsigned long)ftrace_epilogue);
-	memcpy(trampoline + size, jmp, MCOUNT_INSN_SIZE);
+	text_poke_early(trampoline + size, jmp, MCOUNT_INSN_SIZE);
 
 	/*
	 * The address of the ftrace_ops that is used for this trampoline
@@ -813,11 +776,19 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	op_ptr.offset = offset;
 
 	/* put in the new offset to the ftrace_ops */
-	memcpy(trampoline + op_offset, &op_ptr, OP_REF_SIZE);
+	text_poke_early(trampoline + op_offset, &op_ptr, OP_REF_SIZE);
 
 	/* ALLOC_TRAMP flags lets us know we created it */
 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
 
+	set_memory_ro((unsigned long)trampoline, npages);
+
+	/*
+	 * TODO: Once we have better code (and page-table) protection
+	 * mechanisms, ensure that the code has not been tampered before.
+	 */
+	set_memory_x((unsigned long)trampoline, npages);
+
 	return (unsigned long)trampoline;
 }
 
@@ -853,8 +824,6 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 		 */
 		if (!(ops->flags & FTRACE_OPS_FL_ALLOC_TRAMP))
 			return;
-		npages = PAGE_ALIGN(ops->trampoline_size) >> PAGE_SHIFT;
-		set_memory_rw(ops->trampoline, npages);
 	} else {
 		ops->trampoline = create_trampoline(ops, &size);
 		if (!ops->trampoline)
@@ -863,6 +832,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 		npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	}
 
+	mutex_lock(&text_mutex);
+
 	offset = calc_trampoline_call_offset(ops->flags & FTRACE_OPS_FL_SAVE_REGS);
 	ip = ops->trampoline + offset;
 
@@ -871,7 +842,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 	/* Do a safe modify in case the trampoline is executing */
 	new = ftrace_call_replace(ip, (unsigned long)func);
 	ret = update_ftrace_func(ip, new);
-	set_memory_ro(ops->trampoline, npages);
+
+	mutex_unlock(&text_mutex);
 
 	/* The update should never fail */
 	WARN_ON(ret);

From patchwork Wed Dec 5 01:34:03 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713341
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Masami Hiramatsu
Subject: [PATCH v7 09/14] x86/kprobes: Instruction pages initialization enhancements
Date: Tue, 4 Dec 2018 17:34:03 -0800
Message-ID: <20181205013408.47725-10-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

This is a preparatory patch for a following patch that makes
module-allocated pages non-executable. It sets the page as executable
only after allocation.

In the future, we may get better protection of executables, for example
by using hypercalls to request that the hypervisor protect VM executable
pages from modification using nested page-tables. This would allow us to
ensure the executable has not changed between allocation and its
write-protection.

While at it, do some small cleanup of what appears to be unnecessary
masking.

Cc: Masami Hiramatsu
Signed-off-by: Nadav Amit
Acked-by: Masami Hiramatsu
---
 arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index c33b06f5faa4..ca0118d3b3e8 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;
 
 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (page == NULL)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and then only then make it executable
+	 * to prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);
 
 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and then only then make it
+	 * writable to prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }

From patchwork Wed Dec 5 01:34:04 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713335
From: Nadav Amit
To: Ingo Molnar
Cc: H. Peter Anvin, Thomas Gleixner, Borislav Petkov, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Kees Cook, Masami Hiramatsu
Subject: [PATCH v7 10/14] x86: avoid W^X being broken during modules loading
Date: Tue, 4 Dec 2018 17:34:04 -0800
Message-ID: <20181205013408.47725-11-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>

When modules and BPF filters are loaded, there is a time window in which
some memory is both writable and executable. An attacker that has
already found another vulnerability (e.g., a dangling pointer) might be
able to exploit this behavior to overwrite kernel code.

This patch prevents having writable executable PTEs during this stage.

In addition, avoiding R+X mappings during module initialization can also
slightly simplify the patching of module code at that time (e.g., by
alternatives and static keys), as will be done in the next patch. This
was actually the main motivation for this patch.

To avoid having W+X mappings, set them initially as RW (NX) and, only
after they are set as RO, set them as X as well.
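Concretely, the two-step transition looks as follows (a sketch only;
addr and npages are placeholders for an allocation's address and page
count):

    set_memory_ro(addr, npages);    /* step 1: drop write permission */
    set_memory_x(addr, npages);     /* step 2: only then allow execution */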
Setting the memory as executable is done as a separate step to avoid a
situation in which one core still caches the old PTE (hence writable) while
another core already sees the updated PTE (executable), which would break
the W^X protection.

Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Masami Hiramatsu
Suggested-by: Thomas Gleixner
Suggested-by: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 arch/x86/kernel/alternative.c | 28 +++++++++++++++++++++-------
 arch/x86/kernel/module.c      |  2 +-
 include/linux/filter.h        |  6 ++++++
 kernel/module.c               | 10 ++++++++++
 4 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 8fc4685f3117..18415e3b6000 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -667,15 +667,29 @@ void __init alternative_instructions(void)
  * handlers seeing an inconsistent instruction while you patch.
  */
 void *__init_or_module text_poke_early(void *addr, const void *opcode,
-				       size_t len)
+					size_t len)
 {
 	unsigned long flags;
-	local_irq_save(flags);
-	memcpy(addr, opcode, len);
-	local_irq_restore(flags);
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
+
+	if (static_cpu_has(X86_FEATURE_NX) &&
+	    is_module_text_address((unsigned long)addr)) {
+		/*
+		 * Modules text is marked initially as non-executable, so the
+		 * code cannot be running and speculative code-fetches are
+		 * prevented. We can just change the code.
+		 */
+		memcpy(addr, opcode, len);
+	} else {
+		local_irq_save(flags);
+		memcpy(addr, opcode, len);
+		local_irq_restore(flags);
+		sync_core();
+
+		/*
+		 * Could also do a CLFLUSH here to speed up CPU recovery; but
+		 * that causes hangs on some VIA CPUs.
+		 */
+	}
 	return addr;
 }

diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b052e883dd8c..cfa3106faee4 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
 	p = __vmalloc_node_range(size, MODULE_ALIGN,
 				    MODULES_VADDR + get_module_load_offset(),
 				    MODULES_END, GFP_KERNEL,
-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				    PAGE_KERNEL, 0, NUMA_NO_NODE,
 				    __builtin_return_address(0));
 	if (p && (kasan_module_alloc(p, size) < 0)) {
 		vfree(p);

diff --git a/include/linux/filter.h b/include/linux/filter.h
index de629b706d1d..ee9ae03c5f56 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -704,7 +704,13 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
 
 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
+	/*
+	 * Perform mapping changes in two stages to avoid opening a time-window
+	 * in which a PTE is cached in any TLB as writable, but marked as
+	 * executable in the memory-resident mappings (e.g., page-tables).
+	 */
 	set_memory_ro((unsigned long)hdr, hdr->pages);
+	set_memory_x((unsigned long)hdr, hdr->pages);
 }
 
 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)

diff --git a/kernel/module.c b/kernel/module.c
index 49a405891587..7cb207249437 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1946,9 +1946,19 @@ void module_enable_ro(const struct module *mod, bool after_init)
 	if (!rodata_enabled)
 		return;
 
+	/*
+	 * Perform mapping changes in two stages to avoid opening a time-window
+	 * in which a PTE is cached in any TLB as writable, but marked as
+	 * executable in the memory-resident mappings (e.g., page-tables).
+	 */
 	frob_text(&mod->core_layout, set_memory_ro);
+	frob_text(&mod->core_layout, set_memory_x);
+
 	frob_rodata(&mod->core_layout, set_memory_ro);
+
 	frob_text(&mod->init_layout, set_memory_ro);
+	frob_text(&mod->init_layout, set_memory_x);
+
 	frob_rodata(&mod->init_layout, set_memory_ro);
 
 	if (after_init)
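Taken together, the load-time flow this patch enables can be summarized as
follows (a hypothetical condensed walk-through; the real flow spans
kernel/module.c and the arch code above):

/* Hypothetical walk-through of a module load after this patch: */
static int sketch_load_flow(struct module *mod, unsigned long size)
{
	void *text = module_alloc(size);	/* mapped RW + NX */

	if (!text)
		return -ENOMEM;

	/*
	 * Relocations, alternatives and static keys are applied while the
	 * text is still non-executable, so text_poke_early() can use a
	 * plain memcpy(): the code cannot run, and speculative
	 * code-fetches from it are prevented.
	 */

	module_enable_ro(mod, false);		/* RO first, then X */
	return 0;
}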
From patchwork Wed Dec 5 01:34:05 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713337
From: Nadav Amit
To: Ingo Molnar
CC: , , "H. Peter Anvin" , Thomas Gleixner , Borislav Petkov , Andy Lutomirski , Nadav Amit , Dave Hansen , Peter Zijlstra , , , , Nadav Amit , Kees Cook , Dave Hansen , Masami Hiramatsu
Subject: [PATCH v7 11/14] x86/jump-label: remove support for custom poker
Date: Tue, 4 Dec 2018 17:34:05 -0800
Message-ID: <20181205013408.47725-12-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>
References: <20181205013408.47725-1-namit@vmware.com>

There are only two types of poking: early poking and breakpoint-based
poking. The use of a function pointer to select between them complicates the
code and is probably inefficient due to the resulting indirect branches, so
remove the support for custom pokers.
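After the change, the selection between the two methods becomes an explicit
branch inside __jump_label_transform() rather than an indirect call through
the removed poker argument (a sketch; entry and code are set up earlier in
the real function, and the text_poke_bp() tail call is as in the existing
code):

	if (init || system_state == SYSTEM_BOOTING) {
		/* Early poking: a plain memcpy() is still safe here. */
		text_poke_early((void *)jump_entry_code(entry), code,
				JUMP_LABEL_NOP_SIZE);
		return;
	}

	/* Otherwise, use breakpoint-based patching. */
	text_poke_bp((void *)jump_entry_code(entry), code, JUMP_LABEL_NOP_SIZE,
		     (void *)jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);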
Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Masami Hiramatsu
Signed-off-by: Nadav Amit
---
 arch/x86/kernel/jump_label.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index ed5fe274a7d8..994c13e2867d 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -39,7 +39,6 @@ static void bug_at(unsigned char *ip, int line)
 
 static void __ref __jump_label_transform(struct jump_entry *entry,
 					 enum jump_label_type type,
-					 void *(*poker)(void *, const void *, size_t),
 					 int init)
 {
 	union jump_code_union jmp;
@@ -52,14 +51,6 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 	jmp.offset = jump_entry_target(entry) -
 		     (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 
-	/*
-	 * As long as we're UP and not yet marked RO, we can use
-	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
-	 * SYSTEM_SCHEDULING before going either.
-	 */
-	if (system_state == SYSTEM_BOOTING)
-		poker = text_poke_early;
-
 	if (type == JUMP_LABEL_JMP) {
 		if (init) {
 			expect = default_nop; line = __LINE__;
@@ -82,16 +73,17 @@ static void __ref __jump_label_transform(struct jump_entry *entry,
 		bug_at((void *)jump_entry_code(entry), line);
 
 	/*
-	 * Make text_poke_bp() a default fallback poker.
+	 * As long as we're UP and not yet marked RO, we can use
+	 * text_poke_early; SYSTEM_BOOTING guarantees both, as we switch to
+	 * SYSTEM_SCHEDULING before going either.
 	 *
 	 * At the time the change is being done, just ignore whether we
 	 * are doing nop -> jump or jump -> nop transition, and assume
 	 * always nop being the 'currently valid' instruction
-	 *
 	 */
-	if (poker) {
-		(*poker)((void *)jump_entry_code(entry), code,
-			 JUMP_LABEL_NOP_SIZE);
+	if (init || system_state == SYSTEM_BOOTING) {
+		text_poke_early((void *)jump_entry_code(entry), code,
+				JUMP_LABEL_NOP_SIZE);
 		return;
 	}
 
@@ -103,7 +95,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
 			       enum jump_label_type type)
 {
 	mutex_lock(&text_mutex);
-	__jump_label_transform(entry, type, NULL, 0);
+	__jump_label_transform(entry, type, 0);
 	mutex_unlock(&text_mutex);
 }
 
@@ -133,7 +125,7 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
 		jlstate = JL_STATE_NO_UPDATE;
 	}
 	if (jlstate == JL_STATE_UPDATE)
-		__jump_label_transform(entry, type, text_poke_early, 1);
+		__jump_label_transform(entry, type, 1);
 }
 #endif

From patchwork Wed Dec 5 01:34:06 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713327
From: Nadav Amit
To: Ingo Molnar
CC: , , "H. Peter Anvin" , Thomas Gleixner , Borislav Petkov , Andy Lutomirski , Nadav Amit , Dave Hansen , Peter Zijlstra , , , , Nadav Amit , Kees Cook , Dave Hansen , Masami Hiramatsu
Subject: [PATCH v7 12/14] x86/alternative: Remove the return value of text_poke_*()
Date: Tue, 4 Dec 2018 17:34:06 -0800
Message-ID: <20181205013408.47725-13-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>
References: <20181205013408.47725-1-namit@vmware.com>

The return value of text_poke_early() and text_poke_bp() is never used.
Remove it.
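The resulting prototypes are shown below (taken from the diff that follows);
since no caller ever consumed the returned address, no call site changes:

extern void text_poke_early(void *addr, const void *opcode, size_t len);
extern void text_poke_bp(void *addr, const void *opcode, size_t len,
			 void *handler);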
Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Masami Hiramatsu
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/text-patching.h |  4 ++--
 arch/x86/kernel/alternative.c        | 11 ++++-------
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index a75eed841eed..c90678fd391a 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -18,7 +18,7 @@ static inline void apply_paravirt(struct paravirt_patch_site *start,
 #define __parainstructions_end NULL
 #endif
 
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+extern void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Clear and restore the kernel write-protection flag on the local CPU.
@@ -37,7 +37,7 @@ extern void *text_poke_early(void *addr, const void *opcode, size_t len);
 extern void *text_poke(void *addr, const void *opcode, size_t len);
 extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
 extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+extern void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 extern int after_bootmem;
 extern __ro_after_init struct mm_struct *poking_mm;
 extern __ro_after_init unsigned long poking_addr;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 18415e3b6000..2740ad2c6f21 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -264,7 +264,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len)
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
 extern s32 __smp_locks[], __smp_locks_end[];
-void *text_poke_early(void *addr, const void *opcode, size_t len);
+void text_poke_early(void *addr, const void *opcode, size_t len);
 
 /*
  * Are we looking at a near JMP with a 1 or 4-byte displacement.
@@ -666,8 +666,8 @@ void __init alternative_instructions(void)
 * instructions. And on the local CPU you need to be protected again NMI or MCE
 * handlers seeing an inconsistent instruction while you patch.
 */
-void *__init_or_module text_poke_early(void *addr, const void *opcode,
-				       size_t len)
+void __init_or_module text_poke_early(void *addr, const void *opcode,
+				      size_t len)
 {
 	unsigned long flags;
 
@@ -690,7 +690,6 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	 * that causes hangs on some VIA CPUs.
 	 */
 	}
-	return addr;
 }
 
 __ro_after_init struct mm_struct *poking_mm;
@@ -893,7 +892,7 @@ int poke_int3_handler(struct pt_regs *regs)
 *	  replacing opcode
 *	- sync cores
 */
-void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
+void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 {
 	unsigned char int3 = 0xcc;
 
@@ -935,7 +934,5 @@ void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler)
 	 * the writing of the new instruction.
 	 */
 	bp_patching_in_progress = false;
-
-	return addr;
 }
From patchwork Wed Dec 5 01:34:07 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713329
From: Nadav Amit
To: Ingo Molnar
CC: , , "H. Peter Anvin" , Thomas Gleixner , Borislav Petkov , Andy Lutomirski , Nadav Amit , Dave Hansen , Peter Zijlstra , , , , Nadav Amit , Rick P Edgecombe , Will Deacon
Subject: [PATCH v7 13/14] module: Do not set nx for module memory before freeing
Date: Tue, 4 Dec 2018 17:34:07 -0800
Message-ID: <20181205013408.47725-14-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>
References: <20181205013408.47725-1-namit@vmware.com>

When module memory is about to be freed, there is no apparent reason to make
it (and its data) executable, but that is exactly what is done today. This
is neither efficient nor secure.

There are various theories about why it was done, but none of them appears
to be something that really requires it today. nios2 uses kmalloc for module
memory, but in any case it does not change the PTEs of the module memory. On
x86, changing vmalloc'd memory mappings also modifies the direct-mapping
alias, but the NX bit is not modified in that way. So let's remove it.

Andy suggested that the changes of the PTEs can be avoided (excluding the
direct-mapping alias), which is true. However, on x86 this requires some
cleanup of the contiguous page allocator, which is outside the scope of this
patch-set.
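In sketch form, the new teardown ordering and the reason the RW restore is
kept (struct module_layout and the frob_*() helpers are internal to
kernel/module.c; the function below is a simplified rendering of the
module_restore_mappings() introduced by the diff):

static void sketch_restore_mappings(const struct module_layout *layout)
{
	/* Drop execute permission first, so the text is never W+X. */
	frob_text(layout, set_memory_nx);

	if (!rodata_enabled)
		return;

	/*
	 * Restore write permission. The memory is about to be freed, but
	 * on x86 set_memory_rw() also fixes up the direct-map alias of
	 * the underlying pages, which must be writable again before they
	 * are handed back to the page allocator.
	 */
	frob_text(layout, set_memory_rw);
	frob_rodata(layout, set_memory_rw);
	frob_ro_after_init(layout, set_memory_rw);
}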
Cc: Rick P Edgecombe
Cc: Will Deacon
Cc: Andy Lutomirski
Signed-off-by: Nadav Amit
---
 kernel/module.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index 7cb207249437..57c5b23746e7 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2027,20 +2027,29 @@ void set_all_modules_text_ro(void)
 	mutex_unlock(&module_mutex);
 }
 
-static void disable_ro_nx(const struct module_layout *layout)
+static void module_restore_mappings(const struct module_layout *layout)
 {
-	if (rodata_enabled) {
-		frob_text(layout, set_memory_rw);
-		frob_rodata(layout, set_memory_rw);
-		frob_ro_after_init(layout, set_memory_rw);
-	}
-	frob_rodata(layout, set_memory_x);
-	frob_ro_after_init(layout, set_memory_x);
-	frob_writable_data(layout, set_memory_x);
+	/*
+	 * First, make the mappings of the code non-executable to prevent
+	 * transient W+X mappings from being set when the text is set as RW.
+	 */
+	frob_text(layout, set_memory_nx);
+
+	if (!rodata_enabled)
+		return;
+
+	/*
+	 * Second, set the memory as writable. Although the module memory is
+	 * about to be freed, these calls are required (at least on x86) to
+	 * restore the direct map to its "correct" state.
+	 */
+	frob_text(layout, set_memory_rw);
+	frob_rodata(layout, set_memory_rw);
+	frob_ro_after_init(layout, set_memory_rw);
 }
 #else
-static void disable_ro_nx(const struct module_layout *layout) { }
+static void module_restore_mappings(const struct module_layout *layout) { }
 static void module_enable_nx(const struct module *mod) { }
 static void module_disable_nx(const struct module *mod) { }
 #endif
@@ -2173,7 +2182,7 @@ static void free_module(struct module *mod)
 	mutex_unlock(&module_mutex);
 
 	/* This may be empty, but that's OK */
-	disable_ro_nx(&mod->init_layout);
+	module_restore_mappings(&mod->init_layout);
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
 	kfree(mod->args);
@@ -2183,7 +2192,7 @@ static void free_module(struct module *mod)
 	lockdep_free_key_range(mod->core_layout.base, mod->core_layout.size);
 
 	/* Finally, free the core (containing the module structure) */
-	disable_ro_nx(&mod->core_layout);
+	module_restore_mappings(&mod->core_layout);
 	module_memfree(mod->core_layout.base);
 }
 
@@ -3507,7 +3516,7 @@ static noinline int do_init_module(struct module *mod)
 #endif
 	module_enable_ro(mod, true);
 	mod_tree_remove_init(mod);
-	disable_ro_nx(&mod->init_layout);
+	module_restore_mappings(&mod->init_layout);
 	module_arch_freeing_init(mod);
 	mod->init_layout.base = NULL;
 	mod->init_layout.size = 0;

From patchwork Wed Dec 5 01:34:08 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10713323
From: Nadav Amit
To: Ingo Molnar
CC: , , "H. Peter Anvin" , Thomas Gleixner , Borislav Petkov , Andy Lutomirski , Nadav Amit , Dave Hansen , Peter Zijlstra , , , , Nadav Amit
Subject: [PATCH v7 14/14] module: Prevent module removal racing with text_poke()
Date: Tue, 4 Dec 2018 17:34:08 -0800
Message-ID: <20181205013408.47725-15-namit@vmware.com>
In-Reply-To: <20181205013408.47725-1-namit@vmware.com>
References: <20181205013408.47725-1-namit@vmware.com>

It is dangerous to allow code modifications to take place concurrently with
module unloading, so take text_mutex while the memory of the module is being
freed.

Signed-off-by: Nadav Amit
---
 kernel/module.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/module.c b/kernel/module.c
index 57c5b23746e7..b45754961143 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -64,6 +64,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include "module-internal.h"
@@ -2181,6 +2182,9 @@ static void free_module(struct module *mod)
 	synchronize_sched();
 	mutex_unlock(&module_mutex);
 
+	/* Protect against patching of the module while it is being removed */
+	mutex_lock(&text_mutex);
+
 	/* This may be empty, but that's OK */
 	module_restore_mappings(&mod->init_layout);
 	module_arch_freeing_init(mod);
@@ -2194,6 +2198,7 @@ static void free_module(struct module *mod)
 	/* Finally, free the core (containing the module structure) */
 	module_restore_mappings(&mod->core_layout);
 	module_memfree(mod->core_layout.base);
+	mutex_unlock(&text_mutex);
 }
 
 void *__symbol_get(const char *symbol)
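The race this closes, in timeline form (illustrative; text_poke() asserts
that text_mutex is held, so patching and freeing are now fully serialized):

/*
 * CPU A (patching)                  CPU B (module unload)
 * ----------------                  ---------------------
 * mutex_lock(&text_mutex);
 * text_poke(addr, opcode, len);     mutex_lock(&text_mutex);   <- blocks
 * mutex_unlock(&text_mutex);        module_restore_mappings(...);
 *                                   module_memfree(...);
 *                                   mutex_unlock(&text_mutex);
 */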