[RFC,v2] ftrace/x86: Emulate call function while updating in breakpoint handler

Message ID: 20190430175334.423821c0@gandalf.local.home
State: New

Commit Message

Steven Rostedt April 30, 2019, 9:53 p.m. UTC
From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Nicolai Stange discovered[1] that if live kernel patching is enabled, and the
function tracer started tracing the same function that was patched, then the
conversion of the fentry call site, during the transition from calling the
live kernel patch trampoline to calling the iterator trampoline, would have a
slight window where it didn't call anything.

Live kernel patching depends on ftrace always calling its code, as ftrace
redirects execution to prevent the original (unpatched) function from being
called. This small window would allow the old buggy function to be called,
which can cause undesirable results.
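
For reference, the breakpoint-based update that opens this window works
roughly like the sketch below (helper names are illustrative, not the actual
arch/x86 code; a call site is MCOUNT_INSN_SIZE = 5 bytes, one opcode plus a
4-byte displacement):

	/* Hypothetical helpers standing in for the real text-poking code */
	extern void write_text(unsigned char *ip, const unsigned char *src, int len);
	extern void run_sync(void);	/* serialize all CPUs */

	static void update_call_site(unsigned char *ip, const unsigned char *new_call)
	{
		const unsigned char int3 = 0xcc;

		write_text(ip, &int3, 1);		/* 1) breakpoint over the call opcode */
		run_sync();				/*    every CPU now traps at ip */
		write_text(ip + 1, new_call + 1, 4);	/* 2) rewrite the call target */
		run_sync();
		write_text(ip, new_call, 1);		/* 3) restore the call opcode */
		run_sync();
	}

	/*
	 * Before this patch, a task hitting the int3 had its regs->ip
	 * advanced past the call site, i.e. the call was skipped. That
	 * is the window live patching cannot tolerate.
	 */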

Nicolai submitted new patches[2] but these were controversial. As this is
similar to the static call emulation issues that came up a while ago[3],
Linus suggested using per CPU data along with special trampolines[4] to emulate
the calls.

Linus's solution was aimed at text poke (which is mostly what the static_call
code used), but as ftrace has its own update mechanism, it required doing its
own thing.
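
The "push push ret" trick at the heart of the suggestion can be demonstrated
in user space. A minimal sketch (assumes x86-64 and GCC, built with -no-pie
-mno-red-zone since the asm pushes an absolute label address and writes below
the compiler's stack frame; this is illustrative, not kernel code):

	#include <stdio.h>

	static long target(void)
	{
		return 42;
	}

	int main(void)
	{
		long ret;

		asm volatile(
			"push $1f\n\t"		/* return address, as the int3 handler would save it */
			"push %[tgt]\n\t"	/* the function to "call" */
			"ret\n"			/* jumps to target; target's own ret lands at 1: */
			"1:\n\t"
			: "=a" (ret)		/* target's return value arrives in rax */
			: [tgt] "r" (target)
			: "rcx", "rdx", "rsi", "rdi", "r8", "r9", "r10", "r11",
			  "memory", "cc");

		printf("emulated call returned %ld\n", ret);	/* 42 */
		return 0;
	}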

Having ftrace use its own per CPU data and its own set of specialized
trampolines solves the issue of missed calls that live kernel patching
suffers from.

[1] http://lkml.kernel.org/r/20180726104029.7736-1-nstange@suse.de
[2] http://lkml.kernel.org/r/20190427100639.15074-1-nstange@suse.de
[3] http://lkml.kernel.org/r/3cf04e113d71c9f8e4be95fb84a510f085aa4afa.1541711457.git.jpoimboe@redhat.com
[4] http://lkml.kernel.org/r/CAHk-=wh5OpheSU8Em_Q3Hg8qw_JtoijxOdPtHru6d+5K8TWM=A@mail.gmail.com

Inspired-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@vger.kernel.org
Fixes: b700e7f03df5 ("livepatch: kernel: add support for live patching")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---

Changes since v1:

  - Use "push push ret" instead of indirect jumps (Linus)
  - Handle 32-bit as well as non-SMP
  - Fool lockdep into thinking interrupts are enabled


 arch/x86/kernel/ftrace.c | 175 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 170 insertions(+), 5 deletions(-)

Comments

Steven Rostedt May 1, 2019, 1:35 a.m. UTC | #1
On Tue, 30 Apr 2019 17:53:34 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> +	if (ftrace_location(ip)) {
> +		/* A breakpoint at the beginning of the function was hit */
> +		if (in_nmi()) {
> +			/* NMIs have their own trampoline */
> +			this_cpu_write(ftrace_bp_call_nmi_return, (void *)ip + MCOUNT_INSN_SIZE);
> +			regs->ip = (unsigned long) ftrace_emulate_call_nmi;
> +			return 1;
> +		}
> +		this_cpu_write(ftrace_bp_call_return, (void *)ip + MCOUNT_INSN_SIZE);
> +		if (regs->flags & X86_EFLAGS_IF) {
> +			regs->flags &= ~X86_EFLAGS_IF;
> +			regs->ip = (unsigned long) ftrace_emulate_call_irqoff;
> +			/* Tell lockdep here we are enabling interrupts */
> +			trace_hardirqs_on();

This isn't good enough. The return-from-interrupt path calls into
lockdep saying interrupts are disabled. We need to add the lockdep
tracking in the asm as well.

Probably easier to move it from inline asm to ftrace_X.S and use the
lockdep TRACE_ON/OFF macros.

-- Steve

> +		} else {
> +			regs->ip = (unsigned long) ftrace_emulate_call_irqon;
> +		}
> +		return 1;
> +	} else if (is_ftrace_caller(ip)) {
> +		/* An ftrace trampoline is being updated */
> +		if (!ftrace_update_func_call) {
> +			/* If it's a jump, just need to skip it */
> +			regs->ip += MCOUNT_INSN_SIZE - 1;
> +			return 1;
> +		}
> +		if (in_nmi()) {
> +			/* NMIs have their own trampoline */
> +			this_cpu_write(ftrace_bp_call_nmi_return, (void *)ip + MCOUNT_INSN_SIZE);
> +			regs->ip = (unsigned long) ftrace_emulate_call_update_nmi;
> +			return 1;
> +		}
> +		this_cpu_write(ftrace_bp_call_return, (void *)ip + MCOUNT_INSN_SIZE);
> +		if (regs->flags & X86_EFLAGS_IF) {
> +			regs->flags &= ~X86_EFLAGS_IF;
> +			regs->ip = (unsigned long) ftrace_emulate_call_update_irqoff;
> +			trace_hardirqs_on();
> +		} else {
> +			regs->ip = (unsigned long) ftrace_emulate_call_update_irqon;
> +		}
> +		return 1;
> +	}
>  
> -	return 1;
> +	return 0;
>  }
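
For illustration, the lockdep fix Steve describes could look roughly like
this while still inline asm (a sketch; a real version would move to a *.S
file as he suggests, and trace_hardirqs_on_thunk is assumed to be the
register-preserving thunk the x86 entry code uses):

	asm(
		".text\n"
		".global ftrace_emulate_call_irqoff\n"
		".type ftrace_emulate_call_irqoff, @function\n"
		"ftrace_emulate_call_irqoff:\n\t"
			"push "BP_CALL_RETURN"\n\t"
			"push ftrace_caller_func\n\t"
			/* tell lockdep IRQs come back on here, not in the int3 handler */
			"call trace_hardirqs_on_thunk\n\t"
			"sti\n\t"
			"ret\n\t"
		".size ftrace_emulate_call_irqoff, .-ftrace_emulate_call_irqoff\n"
		".previous\n");
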
Linus Torvalds May 1, 2019, 1:58 a.m. UTC | #2
On Tue, Apr 30, 2019 at 6:35 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
>
> Probably easier to move it from inline asm to ftrace_X.S and use the
> lockdep TRACE_ON/OFF macros.

Yeah, that should clean up the percpu stuff too since we have helper
macros for it for *.S files anyway.

I only did the asm() in C because it made the "look, something like
this" patch simpler to test (and it made it easy to check the
generated asm file). Not because it was a good idea ;)

                 Linus
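
The asm-side percpu helper Linus refers to is PER_CPU_VAR() from
arch/x86/include/asm/percpu.h; it expands to a %gs:/%fs:-prefixed symbol on
SMP and to the bare symbol otherwise, so in a *.S file the whole
BP_CALL_RETURN #ifdef block below would collapse to roughly (a sketch of the
direction, not tested here):

	push PER_CPU_VAR(ftrace_bp_call_return)
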
Nicolai Stange May 1, 2019, 8:26 a.m. UTC | #3
Hi Steve,

many thanks for moving this forward!


Steven Rostedt <rostedt@goodmis.org> writes:

>
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index ef49517f6bb2..9160f5cc3b6d 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -17,6 +17,7 @@
>  #include <linux/uaccess.h>
>  #include <linux/ftrace.h>
>  #include <linux/percpu.h>
> +#include <linux/frame.h>
>  #include <linux/sched.h>
>  #include <linux/slab.h>
>  #include <linux/init.h>
> @@ -232,6 +233,9 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
>  
>  static unsigned long ftrace_update_func;
>  
> +/* Used within inline asm below */
> +unsigned long ftrace_update_func_call;
> +
>  static int update_ftrace_func(unsigned long ip, void *new)
>  {
>  	unsigned char old[MCOUNT_INSN_SIZE];
> @@ -259,6 +263,8 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
>  	unsigned char *new;
>  	int ret;
>  
> +	ftrace_update_func_call = (unsigned long)func;
> +
>  	new = ftrace_call_replace(ip, (unsigned long)func);
>  	ret = update_ftrace_func(ip, new);
>  
> @@ -280,6 +286,125 @@ static nokprobe_inline int is_ftrace_caller(unsigned long ip)
>  	return 0;
>  }
>  
> +/*
> + * We need to handle the "call func1" -> "call func2" case.
> + * Just skipping the call is not sufficient as it will be like
> + * changing to "nop" first and then updating the call. But some
> + * users of ftrace require calls never to be missed.
> + *
> + * To emulate the call while converting the call site with a breakpoint,
> + * some trampolines are used along with per CPU buffers.
> + * There are three trampolines for the call sites and three trampolines
> + * for the updating of the call in the ftrace trampoline. The three
> + * cases are:
> + *
> + * 1) Interrupts are enabled when the breakpoint is hit
> + * 2) Interrupts are disabled when the breakpoint is hit
> + * 3) The breakpoint was hit in an NMI
> + *
> + * As per CPU data is used, interrupts must be disabled to prevent them
> + * from corrupting the data. A separate NMI trampoline is used for the
> + * NMI case. If interrupts are already disabled, then the return path
> + * of where the breakpoint was hit (saved in the per CPU data) is pushed
> + * on the stack, and then a jump is made to either ftrace_caller (which
> + * will loop through all registered ftrace_ops handlers depending on the
> + * ip address), or, if it's an ftrace trampoline call update, to the
> + * function stored in ftrace_update_func_call, which holds the call that
> + * should be made.
> + */
> +extern asmlinkage void ftrace_emulate_call_irqon(void);
> +extern asmlinkage void ftrace_emulate_call_irqoff(void);
> +extern asmlinkage void ftrace_emulate_call_nmi(void);
> +extern asmlinkage void ftrace_emulate_call_update_irqoff(void);
> +extern asmlinkage void ftrace_emulate_call_update_irqon(void);
> +extern asmlinkage void ftrace_emulate_call_update_nmi(void);
> +
> +static DEFINE_PER_CPU(void *, ftrace_bp_call_return);
> +static DEFINE_PER_CPU(void *, ftrace_bp_call_nmi_return);

Andy mentioned #DB and #MC exceptions here:
https://lkml.kernel.org/r/C55DED25-C60D-4731-9A6B-92BDA8771766@amacapital.net

I think that #DB won't be possible, provided the trampolines below get
tagged as NOKPROBE (do_int3() and ftrace_int3_handler() already have
it).

It's highly theoretical, but tracing do_machine_check() could clobber
ftrace_bp_call_return or ftrace_bp_call_nmi_return?
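
Spelled out as a timeline (illustrative only), the clobber would be:

	/*
	 * CPU hits the int3 at a traced fentry site:
	 *   ftrace_int3_handler() writes ftrace_bp_call_return = A
	 *   ... #MC arrives before the trampoline consumes A ...
	 *     do_machine_check() is itself traced and hits another int3:
	 *       the handler overwrites ftrace_bp_call_return = B
	 *   the machine check returns; the first trampoline now
	 *   "returns" to B instead of A.
	 */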


> +#ifdef CONFIG_SMP
> +#ifdef CONFIG_X86_64
> +# define BP_CALL_RETURN		"%gs:ftrace_bp_call_return"
> +# define BP_CALL_NMI_RETURN	"%gs:ftrace_bp_call_nmi_return"
> +#else
> +# define BP_CALL_RETURN		"%fs:ftrace_bp_call_return"
> +# define BP_CALL_NMI_RETURN	"%fs:ftrace_bp_call_nmi_return"
> +#endif
> +#else /* SMP */
> +# define BP_CALL_RETURN		"ftrace_bp_call_return"
> +# define BP_CALL_NMI_RETURN	"ftrace_bp_call_nmi_return"
> +#endif
> +
> +/* To hold the ftrace_caller address to push on the stack */
> +void *ftrace_caller_func = (void *)ftrace_caller;

The live patching ftrace_ops need ftrace_regs_caller.


> +
> +asm(
> +	".text\n"
> +
> +	/* Trampoline for function update with interrupts enabled */
> +	".global ftrace_emulate_call_irqoff\n"
> +	".type ftrace_emulate_call_irqoff, @function\n"
> +	"ftrace_emulate_call_irqoff:\n\t"
> +		"push "BP_CALL_RETURN"\n\t"
> +		"push ftrace_caller_func\n"
> +		"sti\n\t"
> +		"ret\n\t"
> +	".size ftrace_emulate_call_irqoff, .-ftrace_emulate_call_irqoff\n"
> +
> +	/* Trampoline for function update with interrupts disabled */
> +	".global ftrace_emulate_call_irqon\n"

The naming is perhaps a bit confusing, i.e. "update with interrupts
disabled" vs. "irqon"... How about swapping irqoff<->irqon?

Thanks,

Nicolai


> +	".type ftrace_emulate_call_irqon, @function\n"
> +	"ftrace_emulate_call_irqon:\n\t"
> +		"push "BP_CALL_RETURN"\n\t"
> +		"push ftrace_caller_func\n"
> +		"ret\n\t"
> +	".size ftrace_emulate_call_irqon, .-ftrace_emulate_call_irqon\n"
> +
> +	/* Trampoline for function update in an NMI */
> +	".global ftrace_emulate_call_nmi\n"
> +	".type ftrace_emulate_call_nmi, @function\n"
> +	"ftrace_emulate_call_nmi:\n\t"
> +		"push "BP_CALL_NMI_RETURN"\n\t"
> +		"push ftrace_caller_func\n"
> +		"ret\n\t"
> +	".size ftrace_emulate_call_nmi, .-ftrace_emulate_call_nmi\n"
> +
Steven Rostedt May 1, 2019, 1:22 p.m. UTC | #4
On Wed, 01 May 2019 10:26:32 +0200
Nicolai Stange <nstange@suse.de> wrote:

> > +extern asmlinkage void ftrace_emulate_call_irqon(void);
> > +extern asmlinkage void ftrace_emulate_call_irqoff(void);
> > +extern asmlinkage void ftrace_emulate_call_nmi(void);
> > +extern asmlinkage void ftrace_emulate_call_update_irqoff(void);
> > +extern asmlinkage void ftrace_emulate_call_update_irqon(void);
> > +extern asmlinkage void ftrace_emulate_call_update_nmi(void);
> > +
> > +static DEFINE_PER_CPU(void *, ftrace_bp_call_return);
> > +static DEFINE_PER_CPU(void *, ftrace_bp_call_nmi_return);  
> 
> Andy mentioned #DB and #MC exceptions here:
> https://lkml.kernel.org/r/C55DED25-C60D-4731-9A6B-92BDA8771766@amacapital.net
> 
> I think that #DB won't be possible, provided the trampolines below get
> tagged as NOKPROBE (do_int3() and ftrace_int3_handler() already have
> it).
> 
> It's highly theoretical, but tracing do_machine_check() could clobber
> ftrace_bp_call_return or ftrace_bp_call_nmi_return?

Probably shouldn't trace do_machine_check() then ;-)
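
One way to guarantee that, as an aside rather than part of this patch, would
be the notrace annotation, which stops the compiler from emitting the fentry
call in the function, so there is nothing for ftrace to patch (the signature
below is approximate):

	void notrace do_machine_check(struct pt_regs *regs, long error_code);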

> 
> 
> > +#ifdef CONFIG_SMP
> > +#ifdef CONFIG_X86_64
> > +# define BP_CALL_RETURN		"%gs:ftrace_bp_call_return"
> > +# define BP_CALL_NMI_RETURN	"%gs:ftrace_bp_call_nmi_return"
> > +#else
> > +# define BP_CALL_RETURN		"%fs:ftrace_bp_call_return"
> > +# define BP_CALL_NMI_RETURN	"%fs:ftrace_bp_call_nmi_return"
> > +#endif
> > +#else /* SMP */
> > +# define BP_CALL_RETURN		"ftrace_bp_call_return"
> > +# define BP_CALL_NMI_RETURN	"ftrace_bp_call_nmi_return"
> > +#endif
> > +
> > +/* To hold the ftrace_caller address to push on the stack */
> > +void *ftrace_caller_func = (void *)ftrace_caller;  
> 
> The live patching ftrace_ops need ftrace_regs_caller.

Ah, you're right. Luckily ftrace_regs_caller is a superset of
ftrace_caller. That is, those only needing ftrace_caller can do fine
with ftrace_regs_caller (but not vice versa).

Easy enough to fix.
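
Presumably the fix is the one-line change (sketch):

	-void *ftrace_caller_func = (void *)ftrace_caller;
	+void *ftrace_caller_func = (void *)ftrace_regs_caller;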

> 
> 
> > +
> > +asm(
> > +	".text\n"
> > +
> > +	/* Trampoline for function update with interrupts enabled */
> > +	".global ftrace_emulate_call_irqoff\n"
> > +	".type ftrace_emulate_call_irqoff, @function\n"
> > +	"ftrace_emulate_call_irqoff:\n\t"
> > +		"push "BP_CALL_RETURN"\n\t"
> > +		"push ftrace_caller_func\n"
> > +		"sti\n\t"
> > +		"ret\n\t"
> > +	".size ftrace_emulate_call_irqoff, .-ftrace_emulate_call_irqoff\n"
> > +
> > +	/* Trampoline for function update with interrupts disabled */
> > +	".global ftrace_emulate_call_irqon\n"  
> 
> The naming is perhaps a bit confusing, i.e. "update with interrupts
> disabled" vs. "irqon"... How about swapping irqoff<->irqon?

I just used the terminology Linus used. It is confusing. Perhaps just
call it ftrace_emulate_call (for the non-sti case) and
ftrace_emulate_call_sti for the sti case. That should remove the
confusion.

-- Steve
Patch

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index ef49517f6bb2..9160f5cc3b6d 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -17,6 +17,7 @@ 
 #include <linux/uaccess.h>
 #include <linux/ftrace.h>
 #include <linux/percpu.h>
+#include <linux/frame.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/init.h>
@@ -232,6 +233,9 @@  int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
 
 static unsigned long ftrace_update_func;
 
+/* Used within inline asm below */
+unsigned long ftrace_update_func_call;
+
 static int update_ftrace_func(unsigned long ip, void *new)
 {
 	unsigned char old[MCOUNT_INSN_SIZE];
@@ -259,6 +263,8 @@  int ftrace_update_ftrace_func(ftrace_func_t func)
 	unsigned char *new;
 	int ret;
 
+	ftrace_update_func_call = (unsigned long)func;
+
 	new = ftrace_call_replace(ip, (unsigned long)func);
 	ret = update_ftrace_func(ip, new);
 
@@ -280,6 +286,125 @@  static nokprobe_inline int is_ftrace_caller(unsigned long ip)
 	return 0;
 }
 
+/*
+ * We need to handle the "call func1" -> "call func2" case.
+ * Just skipping the call is not sufficient as it will be like
+ * changing to "nop" first and then updating the call. But some
+ * users of ftrace require calls never to be missed.
+ *
+ * To emulate the call while converting the call site with a breakpoint,
+ * some trampolines are used along with per CPU buffers.
+ * There are three trampolines for the call sites and three trampolines
+ * for the updating of the call in the ftrace trampoline. The three
+ * cases are:
+ *
+ * 1) Interrupts are enabled when the breakpoint is hit
+ * 2) Interrupts are disabled when the breakpoint is hit
+ * 3) The breakpoint was hit in an NMI
+ *
+ * As per CPU data is used, interrupts must be disabled to prevent them
+ * from corrupting the data. A separate NMI trampoline is used for the
+ * NMI case. If interrupts are already disabled, then the return path
+ * of where the breakpoint was hit (saved in the per CPU data) is pushed
+ * on the stack, and then a jump is made to either ftrace_caller (which
+ * will loop through all registered ftrace_ops handlers depending on the
+ * ip address), or, if it's an ftrace trampoline call update, to the
+ * function stored in ftrace_update_func_call, which holds the call that
+ * should be made.
+ */
+extern asmlinkage void ftrace_emulate_call_irqon(void);
+extern asmlinkage void ftrace_emulate_call_irqoff(void);
+extern asmlinkage void ftrace_emulate_call_nmi(void);
+extern asmlinkage void ftrace_emulate_call_update_irqoff(void);
+extern asmlinkage void ftrace_emulate_call_update_irqon(void);
+extern asmlinkage void ftrace_emulate_call_update_nmi(void);
+
+static DEFINE_PER_CPU(void *, ftrace_bp_call_return);
+static DEFINE_PER_CPU(void *, ftrace_bp_call_nmi_return);
+
+#ifdef CONFIG_SMP
+#ifdef CONFIG_X86_64
+# define BP_CALL_RETURN		"%gs:ftrace_bp_call_return"
+# define BP_CALL_NMI_RETURN	"%gs:ftrace_bp_call_nmi_return"
+#else
+# define BP_CALL_RETURN		"%fs:ftrace_bp_call_return"
+# define BP_CALL_NMI_RETURN	"%fs:ftrace_bp_call_nmi_return"
+#endif
+#else /* SMP */
+# define BP_CALL_RETURN		"ftrace_bp_call_return"
+# define BP_CALL_NMI_RETURN	"ftrace_bp_call_nmi_return"
+#endif
+
+/* To hold the ftrace_caller address to push on the stack */
+void *ftrace_caller_func = (void *)ftrace_caller;
+
+asm(
+	".text\n"
+
+	/* Trampoline for function update with interrupts enabled */
+	".global ftrace_emulate_call_irqoff\n"
+	".type ftrace_emulate_call_irqoff, @function\n"
+	"ftrace_emulate_call_irqoff:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_caller_func\n"
+		"sti\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_irqoff, .-ftrace_emulate_call_irqoff\n"
+
+	/* Trampoline for function update with interrupts disabled */
+	".global ftrace_emulate_call_irqon\n"
+	".type ftrace_emulate_call_irqon, @function\n"
+	"ftrace_emulate_call_irqon:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_caller_func\n"
+		"ret\n\t"
+	".size ftrace_emulate_call_irqon, .-ftrace_emulate_call_irqon\n"
+
+	/* Trampoline for function update in an NMI */
+	".global ftrace_emulate_call_nmi\n"
+	".type ftrace_emulate_call_nmi, @function\n"
+	"ftrace_emulate_call_nmi:\n\t"
+		"push "BP_CALL_NMI_RETURN"\n\t"
+		"push ftrace_caller_func\n"
+		"ret\n\t"
+	".size ftrace_emulate_call_nmi, .-ftrace_emulate_call_nmi\n"
+
+	/* Trampoline for ftrace trampoline call update with interrupts enabled */
+	".global ftrace_emulate_call_update_irqoff\n"
+	".type ftrace_emulate_call_update_irqoff, @function\n"
+	"ftrace_emulate_call_update_irqoff:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_update_func_call\n"
+		"sti\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_update_irqoff, .-ftrace_emulate_call_update_irqoff\n"
+
+	/* Trampoline for ftrace trampoline call update with interrupts disabled */
+	".global ftrace_emulate_call_update_irqon\n"
+	".type ftrace_emulate_call_update_irqon, @function\n"
+	"ftrace_emulate_call_update_irqon:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_update_func_call\n"
+		"ret\n\t"
+	".size ftrace_emulate_call_update_irqon, .-ftrace_emulate_call_update_irqon\n"
+
+	/* Trampoline for ftrace trampoline call update in an NMI */
+	".global ftrace_emulate_call_update_nmi\n"
+	".type ftrace_emulate_call_update_nmi, @function\n"
+	"ftrace_emulate_call_update_nmi:\n\t"
+		"push "BP_CALL_NMI_RETURN"\n\t"
+		"push ftrace_update_func_call\n"
+		"ret\n\t"
+	".size ftrace_emulate_call_update_nmi, .-ftrace_emulate_call_update_nmi\n"
+	".previous\n");
+
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_irqoff);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_irqon);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_nmi);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_update_irqoff);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_update_irqon);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_update_nmi);
+
 /*
  * A breakpoint was added to the code address we are about to
  * modify, and this is the handle that will just skip over it.
@@ -295,12 +420,49 @@  int ftrace_int3_handler(struct pt_regs *regs)
 		return 0;
 
 	ip = regs->ip - 1;
-	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
-		return 0;
-
-	regs->ip += MCOUNT_INSN_SIZE - 1;
+	if (ftrace_location(ip)) {
+		/* A breakpoint at the beginning of the function was hit */
+		if (in_nmi()) {
+			/* NMIs have their own trampoline */
+			this_cpu_write(ftrace_bp_call_nmi_return, (void *)ip + MCOUNT_INSN_SIZE);
+			regs->ip = (unsigned long) ftrace_emulate_call_nmi;
+			return 1;
+		}
+		this_cpu_write(ftrace_bp_call_return, (void *)ip + MCOUNT_INSN_SIZE);
+		if (regs->flags & X86_EFLAGS_IF) {
+			regs->flags &= ~X86_EFLAGS_IF;
+			regs->ip = (unsigned long) ftrace_emulate_call_irqoff;
+			/* Tell lockdep here we are enabling interrupts */
+			trace_hardirqs_on();
+		} else {
+			regs->ip = (unsigned long) ftrace_emulate_call_irqon;
+		}
+		return 1;
+	} else if (is_ftrace_caller(ip)) {
+		/* An ftrace trampoline is being updated */
+		if (!ftrace_update_func_call) {
+			/* If it's a jump, just need to skip it */
+			regs->ip += MCOUNT_INSN_SIZE - 1;
+			return 1;
+		}
+		if (in_nmi()) {
+			/* NMIs have their own trampoline */
+			this_cpu_write(ftrace_bp_call_nmi_return, (void *)ip + MCOUNT_INSN_SIZE);
+			regs->ip = (unsigned long) ftrace_emulate_call_update_nmi;
+			return 1;
+		}
+		this_cpu_write(ftrace_bp_call_return, (void *)ip + MCOUNT_INSN_SIZE);
+		if (regs->flags & X86_EFLAGS_IF) {
+			regs->flags &= ~X86_EFLAGS_IF;
+			regs->ip = (unsigned long) ftrace_emulate_call_update_irqoff;
+			trace_hardirqs_on();
+		} else {
+			regs->ip = (unsigned long) ftrace_emulate_call_update_irqon;
+		}
+		return 1;
+	}
 
-	return 1;
+	return 0;
 }
 NOKPROBE_SYMBOL(ftrace_int3_handler);
 
@@ -859,6 +1021,8 @@  void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 
 	func = ftrace_ops_get_func(ops);
 
+	ftrace_update_func_call = (unsigned long)func;
+
 	/* Do a safe modify in case the trampoline is executing */
 	new = ftrace_call_replace(ip, (unsigned long)func);
 	ret = update_ftrace_func(ip, new);
@@ -960,6 +1124,7 @@  static int ftrace_mod_jmp(unsigned long ip, void *func)
 {
 	unsigned char *new;
 
+	ftrace_update_func_call = 0;
 	new = ftrace_jmp_replace(ip, (unsigned long)func);
 
 	return update_ftrace_func(ip, new);