From patchwork Sat Apr 27 10:06:36 2019
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 10920147
From: Nicolai Stange
To: Steven Rostedt
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Josh Poimboeuf, Jiri Kosina, Miroslav Benes,
    Petr Mladek, Joe Lawrence, Shuah Khan, Konrad Rzeszutek Wilk,
    Tim Chen, Sebastian Andrzej Siewior, Mimi Zohar, Juergen Gross,
    Nick Desaulniers, Nayna Jain, Masahiro Yamada, Andy Lutomirski,
    Joerg Roedel, linux-kernel@vger.kernel.org,
    live-patching@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Nicolai Stange
Subject: [PATCH 1/4] x86/thread_info: introduce ->ftrace_int3_stack member
Date: Sat, 27 Apr 2019 12:06:36 +0200
Message-Id: <20190427100639.15074-2-nstange@suse.de>
In-Reply-To: <20190427100639.15074-1-nstange@suse.de>
References: <20190427100639.15074-1-nstange@suse.de>

Before actually rewriting an insn, x86's DYNAMIC_FTRACE implementation
places an int3 breakpoint on it. Currently, ftrace_int3_handler() simply
treats the insn in question as a nop and advances %rip past it.

An upcoming patch will improve this by making the int3 trap handler
emulate the call insn. To this end, ftrace_int3_handler() will be made to
change its iret frame's ->ip to some stub which will then mimic the
function call in the original context. The trapping ->ip address will
somehow have to be communicated from ftrace_int3_handler() to these
stubs, though.

Note that at any given point in time there can be at most four such call
insn emulations pending: at most one each for the "process", "irq",
"softirq" and "nmi" contexts. Introduce struct ftrace_int3_stack,
providing four entries for storing the instruction pointer.
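For illustration only, the intended usage of this small per-task stack can
be sketched in C as below. The helper names push_ftrace_int3_ip() and
pop_ftrace_int3_ip() are made up here for clarity and are not part of the
series; ftrace_int3_handler() and the assembly stubs added in patch 3/4
open-code the equivalent accesses:

	/*
	 * Illustrative sketch only -- these helpers are not added by the
	 * series; patch 3/4 open-codes the equivalent accesses in
	 * ftrace_int3_handler() and in the assembly stubs.
	 */
	static inline void push_ftrace_int3_ip(unsigned long ip)
	{
		struct ftrace_int3_stack *stack =
			&current_thread_info()->ftrace_int3_stack;
		int slot = stack->depth;

		/* Reserve the slot before filling it, cf. patch 3/4. */
		WRITE_ONCE(stack->depth, slot + 1);
		WRITE_ONCE(stack->slots[slot], ip);
	}

	static inline unsigned long pop_ftrace_int3_ip(void)
	{
		struct ftrace_int3_stack *stack =
			&current_thread_info()->ftrace_int3_stack;
		unsigned long ip = stack->slots[stack->depth - 1];

		/* Read the entry before releasing the slot. */
		WRITE_ONCE(stack->depth, stack->depth - 1);
		return ip;
	}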
In principle, it could be made per-cpu, but that would require
ftrace_int3_handler() to return with preemption disabled and to have the
emulation stubs re-enable it only after the stack's top entry has been
consumed. I've been told that this would "break a lot of norms" and that
making this stack part of struct thread_info instead would be less
fragile. Follow this advice and add a struct ftrace_int3_stack instance to
x86's struct thread_info.

Note that these stacks will only rarely be accessed (only during ftrace's
code modifications) and thus, cache line dirtying won't have any
significant impact on the neighbouring fields.

Initialization will take place implicitly through INIT_THREAD_INFO as per
the rules for missing elements in initializers. The memcpy() in
arch_dup_task_struct() will propagate the initial state properly, because
it is always run in process context and will never see a non-zero ->depth
value.

Finally, add the necessary bits to asm-offsets for making struct
ftrace_int3_stack accessible from assembly.

Suggested-by: Steven Rostedt
Signed-off-by: Nicolai Stange
---
 arch/x86/include/asm/thread_info.h | 11 +++++++++++
 arch/x86/kernel/asm-offsets.c      |  8 ++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index e0eccbcb8447..83434a88cfbb 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -56,6 +56,17 @@ struct task_struct;
 struct thread_info {
 	unsigned long		flags;		/* low level flags */
 	u32			status;		/* thread synchronous flags */
+#ifdef CONFIG_DYNAMIC_FTRACE
+	struct ftrace_int3_stack {
+		int depth;
+		/*
+		 * There can be at most one slot in use per context,
+		 * i.e. at most one for "normal", "irq", "softirq" and
+		 * "nmi" each.
+		 */
+		unsigned long slots[4];
+	} ftrace_int3_stack;
+#endif
 };
 
 #define INIT_THREAD_INFO(tsk)						\
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 168543d077d7..ca6ee24a0c6e 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -105,4 +105,12 @@ static void __used common(void)
 	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
 	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
 	OFFSET(TSS_sp2, tss_struct, x86_tss.sp2);
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+	BLANK();
+	OFFSET(TASK_TI_ftrace_int3_depth, task_struct,
+	       thread_info.ftrace_int3_stack.depth);
+	OFFSET(TASK_TI_ftrace_int3_slots, task_struct,
+	       thread_info.ftrace_int3_stack.slots);
+#endif
 }

From patchwork Sat Apr 27 10:06:37 2019
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 10920145
From: Nicolai Stange
To: Steven Rostedt
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Josh Poimboeuf, Jiri Kosina, Miroslav Benes,
    Petr Mladek, Joe Lawrence, Shuah Khan, Konrad Rzeszutek Wilk,
    Tim Chen, Sebastian Andrzej Siewior, Mimi Zohar, Juergen Gross,
    Nick Desaulniers, Nayna Jain, Masahiro Yamada, Andy Lutomirski,
    Joerg Roedel, linux-kernel@vger.kernel.org,
    live-patching@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Nicolai Stange
Subject: [PATCH 2/4] ftrace: drop 'static' qualifier from ftrace_ops_list_func()
Date: Sat, 27 Apr 2019 12:06:37 +0200
Message-Id: <20190427100639.15074-3-nstange@suse.de>
In-Reply-To: <20190427100639.15074-1-nstange@suse.de>
References: <20190427100639.15074-1-nstange@suse.de>

With an upcoming patch improving x86's ftrace_int3_handler() not to simply
skip over the insn being updated, ftrace_ops_list_func() will have to be
referenced from arch/x86 code. Drop its 'static' qualifier.
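For reference, the function keeps its prototype; only the linkage changes.
Any C code outside kernel/trace wanting to call it directly would need a
declaration along these lines (in this series the new reference is
actually made from assembly in patch 3/4, so no such declaration is added
here):

	void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *op, struct pt_regs *regs);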
Signed-off-by: Nicolai Stange
---
 kernel/trace/ftrace.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b920358dd8f7..ed3c20811d9a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -125,8 +125,8 @@ ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
 struct ftrace_ops global_ops;
 
 #if ARCH_SUPPORTS_FTRACE_OPS
-static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
-				 struct ftrace_ops *op, struct pt_regs *regs);
+void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+			  struct ftrace_ops *op, struct pt_regs *regs);
 #else
 /* See comment below, where ftrace_ops_list_func is defined */
 static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip);
@@ -6302,8 +6302,8 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
  * set the ARCH_SUPPORTS_FTRACE_OPS.
  */
 #if ARCH_SUPPORTS_FTRACE_OPS
-static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
-				 struct ftrace_ops *op, struct pt_regs *regs)
+void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+			  struct ftrace_ops *op, struct pt_regs *regs)
 {
 	__ftrace_ops_list_func(ip, parent_ip, NULL, regs);
 }

From patchwork Sat Apr 27 10:06:38 2019
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 10920143
From: Nicolai Stange
To: Steven Rostedt
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Josh Poimboeuf, Jiri Kosina, Miroslav Benes,
    Petr Mladek, Joe Lawrence, Shuah Khan, Konrad Rzeszutek Wilk,
    Tim Chen, Sebastian Andrzej Siewior, Mimi Zohar, Juergen Gross,
    Nick Desaulniers, Nayna Jain, Masahiro Yamada, Andy Lutomirski,
    Joerg Roedel, linux-kernel@vger.kernel.org,
    live-patching@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Nicolai Stange
Subject: [PATCH 3/4] x86/ftrace: make ftrace_int3_handler() not to skip fops invocation
Date: Sat, 27 Apr 2019 12:06:38 +0200
Message-Id: <20190427100639.15074-4-nstange@suse.de>
In-Reply-To: <20190427100639.15074-1-nstange@suse.de>
References: <20190427100639.15074-1-nstange@suse.de>

With dynamic ftrace, ftrace patches call sites on x86 in three steps:

1. Put a breakpoint at the to-be-patched location,
2. update the call site, and
3. finally remove the breakpoint again.

Note that the breakpoint handler, ftrace_int3_handler(), currently simply
makes execution skip over the to-be-patched instruction.

This patching happens in the following circumstances:

a.) the global ftrace_trace_function changes and the call sites at
    ftrace_call and ftrace_regs_call get patched,
b.) an ftrace_ops' ->func changes and the call site in its trampoline gets
    patched,
c.) graph tracing gets enabled/disabled and the jump site at
    ftrace_graph_call gets patched,
d.) a mcount site gets converted from nop -> call, call -> nop, or
    call -> call.

The latter case, i.e. a mcount call getting redirected, is possible in
e.g. a transition from trampolined to not trampolined upon a user enabling
function tracing on a live patched function.

ftrace_int3_handler() simply skipping over the updated insn is quite
problematic in the context of live patching, because it means that a live
patched function gets temporarily reverted to its unpatched original and
this breaks the live patching consistency model. But even without live
patching, it is desirable to avoid missing traces when making changes to
the tracing setup.

Make ftrace_int3_handler() not skip over the fops invocation, but modify
the interrupted control flow to issue a call as needed.

Case c.) from the list above can be ignored, because there a jmp
instruction gets changed to a nop or vice versa.

The remaining cases a.), b.) and d.) all involve call instructions. For
a.) and b.), the call always goes to some ftrace_func_t and the generic
ftrace_ops_list_func() implementation will be able to demultiplex and do
the right thing. For case d.), the call target is either of
ftrace_caller(), ftrace_regs_caller() or some ftrace_ops' trampoline.
Because providing the register state won't cause any harm for
!FTRACE_OPS_FL_SAVE_REGS ftrace_ops, ftrace_regs_caller() would be a
suitable target capable of handling any case.

ftrace_int3_handler()'s context obviously differs from that of the
interrupted call instruction. In order to be able to emulate the call
within the original context, make ftrace_int3_handler() set its iret
frame's ->ip to some helper stub. Upon return from the trap, this stub
will then mimic the call by pushing the return address onto the stack and
issuing a jmp to the target address. As described above, the jmp target
will be either ftrace_ops_list_func() or ftrace_regs_caller(). Provide one
such stub implementation for each of the two cases.
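In C terms, the choice between the two stubs described above boils down to
the condensed sketch below; the complete handler, including the
bookkeeping on the ftrace_int3_stack, is in the diff that follows:

	/* Condensed sketch of the stub selection in ftrace_int3_handler(). */
	if (is_ftrace_location)
		/* mcount site: ftrace_regs_caller() copes with any ftrace_ops. */
		regs->ip = (unsigned long)ftrace_int3_stub_regs_caller;
	else
		/* ftrace_call/ftrace_regs_call site: the list func demultiplexes. */
		regs->ip = (unsigned long)ftrace_int3_stub_list_func;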
Finally, the desired return address, which is derived from the trapping
->ip, must be passed from ftrace_int3_handler() to these stubs. Make
ftrace_int3_handler() push it onto the ftrace_int3_stack introduced by an
earlier patch and let the stubs consume it. Be careful to use proper
compiler barriers such that nested int3 handling from e.g. irqs won't
clobber entries owned by outer instances.

Suggested-by: Steven Rostedt
Signed-off-by: Nicolai Stange
---
 arch/x86/kernel/Makefile            |  1 +
 arch/x86/kernel/ftrace.c            | 79 +++++++++++++++++++++++++++++++------
 arch/x86/kernel/ftrace_int3_stubs.S | 61 ++++++++++++++++++++++++++++
 3 files changed, 130 insertions(+), 11 deletions(-)
 create mode 100644 arch/x86/kernel/ftrace_int3_stubs.S

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 00b7e27bc2b7..0b63ae02b1f3 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -93,6 +93,7 @@ obj-$(CONFIG_LIVEPATCH)	+= livepatch.o
 obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace_$(BITS).o
 obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
 obj-$(CONFIG_FTRACE_SYSCALLS)	+= ftrace.o
+obj-$(CONFIG_DYNAMIC_FTRACE)	+= ftrace_int3_stubs.o
 obj-$(CONFIG_X86_TSC)		+= trace_clock.o
 obj-$(CONFIG_KEXEC_CORE)	+= machine_kexec_$(BITS).o
 obj-$(CONFIG_KEXEC_CORE)	+= relocate_kernel_$(BITS).o crash.o
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index ef49517f6bb2..917494f35cba 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -280,25 +280,84 @@ static nokprobe_inline int is_ftrace_caller(unsigned long ip)
 	return 0;
 }
 
-/*
- * A breakpoint was added to the code address we are about to
- * modify, and this is the handle that will just skip over it.
- * We are either changing a nop into a trace call, or a trace
- * call to a nop. While the change is taking place, we treat
- * it just like it was a nop.
- */
+extern void ftrace_graph_call(void);
+
+asmlinkage void ftrace_int3_stub_regs_caller(void);
+asmlinkage void ftrace_int3_stub_list_func(void);
+
+/* A breakpoint was added to the code address we are about to modify. */
 int ftrace_int3_handler(struct pt_regs *regs)
 {
 	unsigned long ip;
+	bool is_ftrace_location;
+	struct ftrace_int3_stack *stack;
+	int slot;
 
 	if (WARN_ON_ONCE(!regs))
 		return 0;
 
 	ip = regs->ip - 1;
-	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
+	is_ftrace_location = ftrace_location(ip);
+	if (!is_ftrace_location && !is_ftrace_caller(ip))
 		return 0;
 
-	regs->ip += MCOUNT_INSN_SIZE - 1;
+	ip += MCOUNT_INSN_SIZE;
+
+	if (!is_ftrace_location &&
+	    ftrace_update_func == (unsigned long)ftrace_graph_call) {
+		/*
+		 * The insn at ftrace_graph_call is being changed from a
+		 * nop to a jmp or vice versa. Treat it as a nop and
+		 * skip over it.
+		 */
+		regs->ip = ip;
+		return 1;
+	}
+
+	/*
+	 * The insn having the breakpoint on it is either some mcount
+	 * call site or one of ftrace_call, ftrace_regs_call and their
+	 * equivalents within some trampoline. The currently pending
+	 * transition is known to turn the insn from a nop to a call,
+	 * from a call to a nop or to change the target address of an
+	 * existing call. We're going to emulate a call to the most
+	 * generic implementation capable of handling any possible
+	 * configuration. For the mcount sites that would be
+	 * ftrace_regs_caller() and for the remaining calls, which all
+	 * have got some ftrace_func_t target, ftrace_ops_list_func()
+	 * will do the right thing.
+	 *
+	 * However, the call insn can't get emulated from this trap
+	 * handler here.
+	 * Rewrite the iret frame's ->ip value to one of the
+	 * ftrace_int3_stub instances, which will then set up
+	 * everything in the original context. The address following
+	 * the current insn will be passed to the stub via the
+	 * ftrace_int3_stack.
+	 */
+	stack = &current_thread_info()->ftrace_int3_stack;
+	if (WARN_ON_ONCE(stack->depth >= 4)) {
+		/*
+		 * This should not happen as at most one stack slot is
+		 * required per the contexts "normal", "irq",
+		 * "softirq" and "nmi" each. However, be conservative
+		 * and treat it like a nop.
		 */
+		regs->ip = ip;
+		return 1;
+	}
+
+	/*
+	 * Make sure interrupts will see the incremented ->depth value
+	 * before writing the stack entry.
+	 */
+	slot = stack->depth;
+	WRITE_ONCE(stack->depth, slot + 1);
+	WRITE_ONCE(stack->slots[slot], ip);
+
+	if (is_ftrace_location)
+		regs->ip = (unsigned long)ftrace_int3_stub_regs_caller;
+	else
+		regs->ip = (unsigned long)ftrace_int3_stub_list_func;
 
 	return 1;
 }
@@ -949,8 +1008,6 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-extern void ftrace_graph_call(void);
-
 static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
 {
 	return ftrace_text_replace(0xe9, ip, addr);
diff --git a/arch/x86/kernel/ftrace_int3_stubs.S b/arch/x86/kernel/ftrace_int3_stubs.S
new file mode 100644
index 000000000000..ef5f580450bb
--- /dev/null
+++ b/arch/x86/kernel/ftrace_int3_stubs.S
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2019 SUSE Linux GmbH */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/percpu.h>
+#include <asm/unwind_hints.h>
+
+#ifdef CONFIG_X86_64
+#define WORD_SIZE 8
+#else
+#define WORD_SIZE 4
+#endif
+
+.macro ftrace_int3_stub call_target
+	/*
+	 * We got here from ftrace_int3_handler() because a breakpoint
+	 * on a call insn currently being modified has been hit.
+	 * ftrace_int3_handler() can't emulate the function call
+	 * directly, because it's running at a different position on
+	 * the stack, obviously. Hence it sets the regs->ip to this
+	 * stub so that we end up here upon the iret from the int3
+	 * handler. The stack is now in its original state and we can
+	 * emulate the function call insn by pushing the return
+	 * address onto the stack and jumping to the call target. The
+	 * desired return address has been put onto the ftrace_int3_stack
+	 * kept within struct thread_info.
+	 */
+	UNWIND_HINT_EMPTY
+	/* Reserve room for the emulated call's return address. */
+	sub	$WORD_SIZE, %_ASM_SP
+	/*
+	 * Pop the return address from the ftrace_int3_stack and write
+	 * it to the location just reserved. Be careful to retrieve
+	 * the address before decrementing ->depth in order to protect
+	 * against nested contexts clobbering it.
+	 */
+	push	%_ASM_AX
+	push	%_ASM_CX
+	push	%_ASM_DX
+	mov	PER_CPU_VAR(current_task), %_ASM_AX
+	mov	TASK_TI_ftrace_int3_depth(%_ASM_AX), %_ASM_CX
+	dec	%_ASM_CX
+	mov	TASK_TI_ftrace_int3_slots(%_ASM_AX, %_ASM_CX, WORD_SIZE), %_ASM_DX
+	mov	%_ASM_CX, TASK_TI_ftrace_int3_depth(%_ASM_AX)
+	mov	%_ASM_DX, 3*WORD_SIZE(%_ASM_SP)
+	pop	%_ASM_DX
+	pop	%_ASM_CX
+	pop	%_ASM_AX
+	/* Finally, transfer control to the target function. */
+	jmp	\call_target
+.endm
+
+ENTRY(ftrace_int3_stub_regs_caller)
+	ftrace_int3_stub ftrace_regs_caller
+END(ftrace_int3_stub_regs_caller)
+
+ENTRY(ftrace_int3_stub_list_func)
+	ftrace_int3_stub ftrace_ops_list_func
+END(ftrace_int3_stub_list_func)

From patchwork Sat Apr 27 10:06:39 2019
X-Patchwork-Submitter: Nicolai Stange
X-Patchwork-Id: 10920141
From: Nicolai Stange
To: Steven Rostedt
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Josh Poimboeuf, Jiri Kosina, Miroslav Benes,
    Petr Mladek, Joe Lawrence, Shuah Khan, Konrad Rzeszutek Wilk,
    Tim Chen, Sebastian Andrzej Siewior, Mimi Zohar, Juergen Gross,
    Nick Desaulniers, Nayna Jain, Masahiro Yamada, Andy Lutomirski,
    Joerg Roedel, linux-kernel@vger.kernel.org,
    live-patching@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Nicolai Stange
Subject: [PATCH 4/4] selftests/livepatch: add "ftrace a live patched function" test
Date: Sat, 27 Apr 2019 12:06:39 +0200
Message-Id: <20190427100639.15074-5-nstange@suse.de>
In-Reply-To: <20190427100639.15074-1-nstange@suse.de>
References: <20190427100639.15074-1-nstange@suse.de>

There had been an issue with interactions between tracing and live
patching, due to how x86's CONFIG_DYNAMIC_FTRACE used to handle the
breakpoints at the updated instructions from its ftrace_int3_handler():
starting to trace a live patched function caused a short period of time
during which the live patching redirection became ineffective. In
particular, the guarantees from the consistency model couldn't be upheld
in this situation.

Implement a testcase verifying that a function's live patch replacement
stays effective when tracing is enabled on it.
Reuse the existing 'test_klp_livepatch' live patch module, which patches
cmdline_proc_show(), the handler for /proc/cmdline.

In a loop, let the testcase
- apply this live patch,
- launch a background shell job enabling tracing on that function,
- and meanwhile continuously verify that the contents of /proc/cmdline
  still match what would be expected with the live patch applied.

Signed-off-by: Nicolai Stange
---
 tools/testing/selftests/livepatch/Makefile    |  3 +-
 .../livepatch/test-livepatch-vs-ftrace.sh     | 44 ++++++++++++++++++++++
 2 files changed, 46 insertions(+), 1 deletion(-)
 create mode 100755 tools/testing/selftests/livepatch/test-livepatch-vs-ftrace.sh

diff --git a/tools/testing/selftests/livepatch/Makefile b/tools/testing/selftests/livepatch/Makefile
index af4aee79bebb..bfa5353f6d17 100644
--- a/tools/testing/selftests/livepatch/Makefile
+++ b/tools/testing/selftests/livepatch/Makefile
@@ -3,6 +3,7 @@
 TEST_GEN_PROGS := \
 	test-livepatch.sh \
 	test-callbacks.sh \
-	test-shadow-vars.sh
+	test-shadow-vars.sh \
+	test-livepatch-vs-ftrace.sh
 
 include ../lib.mk
diff --git a/tools/testing/selftests/livepatch/test-livepatch-vs-ftrace.sh b/tools/testing/selftests/livepatch/test-livepatch-vs-ftrace.sh
new file mode 100755
index 000000000000..5c982ec56373
--- /dev/null
+++ b/tools/testing/selftests/livepatch/test-livepatch-vs-ftrace.sh
@@ -0,0 +1,44 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2019 SUSE Linux GmbH
+
+. $(dirname $0)/functions.sh
+
+set -e
+
+MOD_LIVEPATCH=test_klp_livepatch
+
+# TEST: ftrace a live patched function
+# - load a livepatch that modifies the output from /proc/cmdline
+# - install a function tracer at the live patched function
+# - verify that the function is still patched by reading /proc/cmdline
+# - unload the livepatch and make sure the patch was removed
+
+echo -n "TEST: ftrace a live patched function ... "
+dmesg -C
+
+for i in $(seq 1 3); do
+	load_lp $MOD_LIVEPATCH
+
+	( echo cmdline_proc_show > /sys/kernel/debug/tracing/set_ftrace_filter;
+	  echo function > /sys/kernel/debug/tracing/current_tracer ) &
+
+	for j in $(seq 1 200); do
+		if [[ "$(cat /proc/cmdline)" != \
+			"$MOD_LIVEPATCH: this has been live patched" ]] ; then
+			echo -e "FAIL\n\n"
+			die "livepatch kselftest(s) failed"
+		fi
+	done
+
+	wait %1
+
+	echo nop > /sys/kernel/debug/tracing/current_tracer
+	echo > /sys/kernel/debug/tracing/set_ftrace_filter
+
+	disable_lp $MOD_LIVEPATCH
+	unload_lp $MOD_LIVEPATCH
+done
+
+echo "ok"
+exit 0