Message ID: 20230705081547.25130-3-petr.pavlu@suse.com
State: Superseded
Series: x86/retpoline,kprobes: Fix the [__indirect_thunk_start, ..end] range
On Wed, Jul 05, 2023 at 10:15:47AM +0200, Petr Pavlu wrote:
> Functions can_optimize() and insn_is_indirect_jump() consider jumps to
> the range [__indirect_thunk_start, __indirect_thunk_end] as indirect
> jumps and prevent use of optprobes in functions containing them.

Why ?!? I mean, doing an opt-probe of an indirect jump/call instruction
itself doesn't really make sense and I can see why you'd want to not do
that. But why disallow an opt-probe if there's one in the function as a
whole, but not the probe target?
On Wed, Jul 05, 2023 at 10:15:47AM +0200, Petr Pavlu wrote:
> ---
>  arch/x86/kernel/vmlinux.lds.S | 2 --
>  arch/x86/lib/retpoline.S      | 4 ++++
>  2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
> index a4cd04c458df..dd5b0a68cf84 100644
> --- a/arch/x86/kernel/vmlinux.lds.S
> +++ b/arch/x86/kernel/vmlinux.lds.S
> @@ -133,9 +133,7 @@ SECTIONS
>  		KPROBES_TEXT
>  		SOFTIRQENTRY_TEXT
>  #ifdef CONFIG_RETPOLINE
> -		__indirect_thunk_start = .;
>  		*(.text..__x86.*)
> -		__indirect_thunk_end = .;
>  #endif

Arguably you could do:

	__indirect_thunk_start = .;
	*(.text..__x86.indirect_thunk)
	__indirect_thunk_end = .;
	*(.text..__x86.return_thunk)

Or somesuch, seems simpler to me.
On Wed, 5 Jul 2023 10:58:57 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Wed, Jul 05, 2023 at 10:15:47AM +0200, Petr Pavlu wrote:
> > Functions can_optimize() and insn_is_indirect_jump() consider jumps to
> > the range [__indirect_thunk_start, __indirect_thunk_end] as indirect
> > jumps and prevent use of optprobes in functions containing them.
>
> Why ?!? I mean, doing an opt-probe of an indirect jump/call instruction
> itself doesn't really make sense and I can see why you'd want to not do
> that. But why disallow an opt-probe if there's one in the function as a
> whole, but not the probe target?

Here we need to clarify the reason why functions which have indirect jumps
are not allowed to use optprobes. Since an optprobe replaces multiple
instructions with a jump, if any jmp (used for a jump inside the same
function) targets the second or a subsequent instruction replaced by the
optprobe's jump, that target instruction cannot be optimized.

The problem with an indirect jump (which jumps within the same function) is
that we don't know which addresses will be its targets. So, for safety, I
disallow optprobes for such functions. In that case, a normal kprobe is used
because it replaces only one instruction.

If I understand correctly, all indirect jumps will be replaced with
JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
instructions, not calls. So functions that only have indirect calls still
allow optprobes.

Thank you,
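To make the constraint concrete: an optprobe overwrites a multi-byte window
starting at the probe address with a single jump, so any branch whose target
lands inside that window (past its first byte) must veto optimization. Below
is a minimal C sketch of that property, with illustrative names rather than
the actual arch/x86/kernel/kprobes/opt.c code:

#include <stdbool.h>

/*
 * Illustrative sketch: does a branch target fall inside the byte range
 * an optprobe would overwrite, excluding its first byte (which remains
 * a valid instruction boundary)?
 */
static bool target_splits_optprobe_window(unsigned long target,
					  unsigned long probe_addr,
					  unsigned long window_len)
{
	return target > probe_addr && target < probe_addr + window_len;
}

/*
 * Direct jumps can be checked this way because their targets are known
 * when the probe is armed.  An indirect jump's target is unknown, so the
 * conservative policy described above refuses optprobes for the whole
 * function and falls back to a normal single-instruction kprobe.
 */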
On Wed, 5 Jul 2023 10:15:47 +0200
Petr Pavlu <petr.pavlu@suse.com> wrote:

> Functions can_optimize() and insn_is_indirect_jump() consider jumps to
> the range [__indirect_thunk_start, __indirect_thunk_end] as indirect
> jumps and prevent use of optprobes in functions containing them.
>
> Linker script arch/x86/kernel/vmlinux.lds.S also places the special
> section .text.__x86.return_thunk, which contains the return thunk, into
> this range. As a result, machines which use the return thunk as a
> mitigation and don't have it patched by any alternative end up not being
> able to use optprobes in any regular function.

Ah, I got it. So with retpoline, the 'ret' instruction is replaced by
'jmp __x86_return_thunk', and "__x86_return_thunk" also falls inside the
__indirect_thunk_start/end range. Good catch!

And I think Peter's suggestion is simpler and easier to understand.
Can you update this?

Thank you,

>
> The return thunk doesn't need to be treated as an indirect jump from the
> perspective of insn_is_indirect_jump(). It returns to a caller and
> cannot land in an optprobe jump operand, which is what the
> insn_is_indirect_jump() check is there to prevent.
>
> Fix the problem by defining the symbols __indirect_thunk_start and
> __indirect_thunk_end directly in arch/x86/lib/retpoline.S. This is
> possible because commit 9bc0bb50727c ("objtool/x86: Rewrite retpoline
> thunk calls") made all indirect thunks present in a single section.
>
> Fixes: 0b53c374b9ef ("x86/retpoline: Use -mfunction-return")
> Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
> ---
>  arch/x86/kernel/vmlinux.lds.S | 2 --
>  arch/x86/lib/retpoline.S | 4 ++++
>  2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
> index a4cd04c458df..dd5b0a68cf84 100644
> --- a/arch/x86/kernel/vmlinux.lds.S
> +++ b/arch/x86/kernel/vmlinux.lds.S
> @@ -133,9 +133,7 @@ SECTIONS
>  		KPROBES_TEXT
>  		SOFTIRQENTRY_TEXT
>  #ifdef CONFIG_RETPOLINE
> -		__indirect_thunk_start = .;
>  		*(.text..__x86.*)
> -		__indirect_thunk_end = .;
>  #endif
>  		STATIC_CALL_TEXT
>
> diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
> index 3bea96341d00..f45a3e7f776f 100644
> --- a/arch/x86/lib/retpoline.S
> +++ b/arch/x86/lib/retpoline.S
> @@ -14,6 +14,7 @@
>
>  .section .text..__x86.indirect_thunk
>
> +SYM_ENTRY(__indirect_thunk_start, SYM_L_GLOBAL, SYM_A_NONE)
>
>  .macro POLINE reg
>  	ANNOTATE_INTRA_FUNCTION_CALL
> @@ -125,6 +126,9 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
>  #include <asm/GEN-for-each-reg.h>
>  #undef GEN
>  #endif
> +
> +SYM_ENTRY(__indirect_thunk_end, SYM_L_GLOBAL, SYM_A_NONE)
> +
>  /*
>   * This function name is magical and is used by -mfunction-return=thunk-extern
>   * for the compiler to generate JMPs to it.
> --
> 2.35.3
>
On Wed, Jul 05, 2023 at 11:20:38PM +0900, Masami Hiramatsu wrote:
> On Wed, 5 Jul 2023 10:58:57 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > On Wed, Jul 05, 2023 at 10:15:47AM +0200, Petr Pavlu wrote:
> > > Functions can_optimize() and insn_is_indirect_jump() consider jumps to
> > > the range [__indirect_thunk_start, __indirect_thunk_end] as indirect
> > > jumps and prevent use of optprobes in functions containing them.
> >
> > Why ?!? I mean, doing an opt-probe of an indirect jump/call instruction
> > itself doesn't really make sense and I can see why you'd want to not do
> > that. But why disallow an opt-probe if there's one in the function as a
> > whole, but not the probe target?
>
> Here we need to clarify the reason why functions which have indirect jumps
> are not allowed to use optprobes. Since an optprobe replaces multiple
> instructions with a jump, if any jmp (used for a jump inside the same
> function) targets the second or a subsequent instruction replaced by the
> optprobe's jump, that target instruction cannot be optimized.
>
> The problem with an indirect jump (which jumps within the same function) is
> that we don't know which addresses will be its targets. So, for safety, I
> disallow optprobes for such functions. In that case, a normal kprobe is used
> because it replaces only one instruction.

Ah, you're worried about jump-tables; you don't want to optimize across
a jump-table target because then things go *boom*.

There's two things:

 - when X86_KERNEL_IBT=y any indirect jump target should be an ENDBR
   instruction, so jump-table targets can be easily detected.

 - when RETPOLINE=y || X86_KERNEL_IBT=y we have jump-tables disabled,
   search for -fno-jump-tables in arch/x86/Makefile.

At some point in the future we should be able to allow jump-tables for
RETPOLINE=n && IBT=y builds (provided the compilers behave), but we
currently don't bother to find out.

Therefore, when either CONFIG option is found, you can assume that any
indirect jump will be to another function.

> If I understand correctly, all indirect jumps will be replaced with
> JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> instructions, not calls. So functions that only have indirect calls still
> allow optprobes.

With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent to a
C indirect jump.
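The jump-table case described above can be pictured with a small C example;
whether a compiler actually emits a jump table for it depends on the compiler
and flags, so treat this purely as an illustration:

/*
 * With jump tables enabled, a dense switch like this may be compiled
 * into an indirect "jmp *table(,%reg,8)" whose targets are labels
 * inside the same function -- exactly the targets an optprobe cannot
 * discover.  With -fno-jump-tables (forced when RETPOLINE=y or
 * X86_KERNEL_IBT=y) it becomes a series of compares and direct branches.
 */
int classify(int c)
{
	switch (c) {
	case 0: return 10;
	case 1: return 11;
	case 2: return 12;
	case 3: return 13;
	case 4: return 14;
	case 5: return 15;
	default: return -1;
	}
}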
On Wed, 5 Jul 2023 16:50:17 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Wed, Jul 05, 2023 at 11:20:38PM +0900, Masami Hiramatsu wrote:
> > On Wed, 5 Jul 2023 10:58:57 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > On Wed, Jul 05, 2023 at 10:15:47AM +0200, Petr Pavlu wrote:
> > > > Functions can_optimize() and insn_is_indirect_jump() consider jumps to
> > > > the range [__indirect_thunk_start, __indirect_thunk_end] as indirect
> > > > jumps and prevent use of optprobes in functions containing them.
> > >
> > > Why ?!? I mean, doing an opt-probe of an indirect jump/call instruction
> > > itself doesn't really make sense and I can see why you'd want to not do
> > > that. But why disallow an opt-probe if there's one in the function as a
> > > whole, but not the probe target?
> >
> > Here we need to clarify the reason why functions which have indirect jumps
> > are not allowed to use optprobes. Since an optprobe replaces multiple
> > instructions with a jump, if any jmp (used for a jump inside the same
> > function) targets the second or a subsequent instruction replaced by the
> > optprobe's jump, that target instruction cannot be optimized.
> >
> > The problem with an indirect jump (which jumps within the same function) is
> > that we don't know which addresses will be its targets. So, for safety, I
> > disallow optprobes for such functions. In that case, a normal kprobe is used
> > because it replaces only one instruction.
>
> Ah, you're worried about jump-tables; you don't want to optimize across
> a jump-table target because then things go *boom*.
>
> There's two things:
>
>  - when X86_KERNEL_IBT=y any indirect jump target should be an ENDBR
>    instruction, so jump-table targets can be easily detected.
>
>  - when RETPOLINE=y || X86_KERNEL_IBT=y we have jump-tables disabled,
>    search for -fno-jump-tables in arch/x86/Makefile.
>
> At some point in the future we should be able to allow jump-tables for
> RETPOLINE=n && IBT=y builds (provided the compilers behave), but we
> currently don't bother to find out.
>
> Therefore, when either CONFIG option is found, you can assume that any
> indirect jump will be to another function.

OK, I confirmed that '-fno-jump-tables' is set when X86_KERNEL_IBT=y ||
RETPOLINE=y, so we can skip this indirect jump check. That makes things
simpler.

>
> > If I understand correctly, all indirect jumps will be replaced with
> > JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> > instructions, not calls. So functions that only have indirect calls still
> > allow optprobes.
>
> With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent to a
> C indirect jump.

If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is not
using jump-tables by default, so we can focus on gcc. In that case the
current check still works, correct?

Thank you,
On Thu, Jul 06, 2023 at 09:47:23AM +0900, Masami Hiramatsu wrote:
> > > If I understand correctly, all indirect jumps will be replaced with
> > > JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> > > instructions, not calls. So functions that only have indirect calls still
> > > allow optprobes.
> >
> > With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent to a
> > C indirect jump.
>
> If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is not
> using jump-tables by default, so we can focus on gcc. In that case the
> current check still works, correct?

IIRC clang can use jump tables, but like GCC needs RETPOLINE=n and
IBT=n, so effectively nobody has them.

The reason I did mention kCFI though is that kCFI has a larger 'indirect
jump' sequence, and I'm not sure we've thought about what can go
sideways if that's optprobed.

I suspect the UD2 that's in there will go 'funny' if it's relocated into
an optprobe, as in, it'll not be recognised as a CFI fail.
On Thu, 6 Jul 2023 09:17:05 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, Jul 06, 2023 at 09:47:23AM +0900, Masami Hiramatsu wrote:
>
> > > > If I understand correctly, all indirect jumps will be replaced with
> > > > JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> > > > instructions, not calls. So functions that only have indirect calls
> > > > still allow optprobes.
> > >
> > > With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent to a
> > > C indirect jump.
> >
> > If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is not
> > using jump-tables by default, so we can focus on gcc. In that case the
> > current check still works, correct?
>
> IIRC clang can use jump tables, but like GCC needs RETPOLINE=n and
> IBT=n, so effectively nobody has them.

So if it requires RETPOLINE=n, the current __indirect_thunk_start/end check
is not required, right? (That code is guarded by "#ifdef CONFIG_RETPOLINE".)

> The reason I did mention kCFI though is that kCFI has a larger 'indirect
> jump' sequence, and I'm not sure we've thought about what can go
> sideways if that's optprobed.

If I understand correctly, kCFI checks only indirect function calls (it
checks the pointer), so no jump tables. Or does it use indirect 'jump'?

> I suspect the UD2 that's in there will go 'funny' if it's relocated into
> an optprobe, as in, it'll not be recognised as a CFI fail.

UD2 can't be optprobed (nor kprobed) because it can change the dumped
BUG address...

Thank you,
On Thu, Jul 06, 2023 at 06:00:14PM +0900, Masami Hiramatsu wrote:
> On Thu, 6 Jul 2023 09:17:05 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > On Thu, Jul 06, 2023 at 09:47:23AM +0900, Masami Hiramatsu wrote:
> >
> > > > > If I understand correctly, all indirect jumps will be replaced with
> > > > > JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> > > > > instructions, not calls. So functions that only have indirect calls
> > > > > still allow optprobes.
> > > >
> > > > With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent
> > > > to a C indirect jump.
> > >
> > > If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is not
> > > using jump-tables by default, so we can focus on gcc. In that case the
> > > current check still works, correct?
> >
> > IIRC clang can use jump tables, but like GCC needs RETPOLINE=n and
> > IBT=n, so effectively nobody has them.
>
> So if it requires RETPOLINE=n, the current __indirect_thunk_start/end check
> is not required, right? (That code is guarded by "#ifdef CONFIG_RETPOLINE".)

Correct.

> > The reason I did mention kCFI though is that kCFI has a larger 'indirect
> > jump' sequence, and I'm not sure we've thought about what can go
> > sideways if that's optprobed.
>
> If I understand correctly, kCFI checks only indirect function calls (it
> checks the pointer), so no jump tables. Or does it use indirect 'jump'?

Yes, it's indirect function calls only.

Imagine our function (bar) doing an indirect call, it will (as clang
always does) have the function pointer in r11:

bar:
	...
	movl	$(-0x12345678), %r10d
	addl	-15(%r11), %r10d
	je	1f
	ud2
1:	call	__x86_indirect_thunk_r11

And then the function it calls (foo) looks like:

__cfi_foo:
	movl	$0x12345678, %eax
	.skip	11, 0x90
foo:
	endbr
	...

So if the caller (in bar) and the callee (foo) have the same hash value
(0x12345678 in this case) then it will be equal and we continue on our
merry way.

However, if they do not match, we'll trip that #UD and
handle_cfi_failure() will try and match the address to
__{start,stop}__kcfi_traps[]. Additionally, decode_cfi_insn() will try
and decode that whole call sequence in order to obtain the target
address and typeid (hash).

optprobes might disturb this code.

> > I suspect the UD2 that's in there will go 'funny' if it's relocated into
> > an optprobe, as in, it'll not be recognised as a CFI fail.
>
> UD2 can't be optprobed (nor kprobed) because it can change the dumped
> BUG address...

Right, same problem here. But could the movl/addl be opt-probed? That
would wreck decode_cfi_insn(). Then again, if decode_cfi_insn() fails,
we'll get report_cfi_failure_noaddr(), which is less informative.

So it looks like nothing too horrible happens...
On Thu, 6 Jul 2023 13:34:03 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> On Thu, Jul 06, 2023 at 06:00:14PM +0900, Masami Hiramatsu wrote:
> > On Thu, 6 Jul 2023 09:17:05 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > On Thu, Jul 06, 2023 at 09:47:23AM +0900, Masami Hiramatsu wrote:
> > >
> > > > > > If I understand correctly, all indirect jumps will be replaced with
> > > > > > JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> > > > > > instructions, not calls. So functions that only have indirect calls
> > > > > > still allow optprobes.
> > > > >
> > > > > With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent
> > > > > to a C indirect jump.
> > > >
> > > > If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is
> > > > not using jump-tables by default, so we can focus on gcc. In that case
> > > > the current check still works, correct?
> > >
> > > IIRC clang can use jump tables, but like GCC needs RETPOLINE=n and
> > > IBT=n, so effectively nobody has them.
> >
> > So if it requires RETPOLINE=n, the current __indirect_thunk_start/end check
> > is not required, right? (That code is guarded by "#ifdef CONFIG_RETPOLINE".)
>
> Correct.
>
> > > The reason I did mention kCFI though is that kCFI has a larger 'indirect
> > > jump' sequence, and I'm not sure we've thought about what can go
> > > sideways if that's optprobed.
> >
> > If I understand correctly, kCFI checks only indirect function calls (it
> > checks the pointer), so no jump tables. Or does it use indirect 'jump'?
>
> Yes, it's indirect function calls only.
>
> Imagine our function (bar) doing an indirect call, it will (as clang
> always does) have the function pointer in r11:
>
> bar:
> 	...
> 	movl	$(-0x12345678), %r10d
> 	addl	-15(%r11), %r10d
> 	je	1f
> 	ud2
> 1:	call	__x86_indirect_thunk_r11
>
> And then the function it calls (foo) looks like:
>
> __cfi_foo:
> 	movl	$0x12345678, %eax
> 	.skip	11, 0x90
> foo:
> 	endbr
> 	...
>
> So if the caller (in bar) and the callee (foo) have the same hash value
> (0x12345678 in this case) then it will be equal and we continue on our
> merry way.
>
> However, if they do not match, we'll trip that #UD and
> handle_cfi_failure() will try and match the address to
> __{start,stop}__kcfi_traps[]. Additionally, decode_cfi_insn() will try
> and decode that whole call sequence in order to obtain the target
> address and typeid (hash).

Thank you for the explanation! This helps me!

> optprobes might disturb this code.

So neither optprobe nor kprobes (any text instrumentation) should touch the
__cfi_FUNC symbols right before FUNC.

> > > I suspect the UD2 that's in there will go 'funny' if it's relocated into
> > > an optprobe, as in, it'll not be recognised as a CFI fail.
> >
> > UD2 can't be optprobed (nor kprobed) because it can change the dumped
> > BUG address...
>
> Right, same problem here. But could the movl/addl be opt-probed? That
> would wreck decode_cfi_insn(). Then again, if decode_cfi_insn() fails,
> we'll get report_cfi_failure_noaddr(), which is less informative.

OK, so if that sequence is always expected, I can also prohibit probing it.
Or, maybe it is better to generalize the API to access the original
instruction, which is already used from kprobes, so that decode_cfi_insn()
can get the original (non-probed) instruction.

> So it looks like nothing too horrible happens...

Thank you,
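The "access the original instruction" idea can be sketched as follows; the
helper and structure names here are hypothetical, not an existing kernel API,
and the real kprobes bookkeeping is more involved:

#include <stddef.h>
#include <string.h>

/* Hypothetical record of what a probe saved before patching the text. */
struct saved_patch {
	unsigned long	addr;		/* start of the patched range */
	size_t		len;		/* number of patched bytes */
	unsigned char	orig[16];	/* original bytes, saved at arm time */
};

/*
 * Hypothetical helper: copy the instruction bytes for [addr, addr + len)
 * as they looked before any probe modified them, by overlaying the saved
 * original bytes onto a copy of the live (possibly patched) text.
 * A decoder such as decode_cfi_insn() could then operate on the copy.
 */
static void read_original_bytes(unsigned char *dst, unsigned long addr,
				size_t len, const struct saved_patch *p)
{
	memcpy(dst, (const void *)addr, len);

	if (p && addr < p->addr + p->len && p->addr < addr + len) {
		unsigned long start = addr > p->addr ? addr : p->addr;
		unsigned long end = addr + len < p->addr + p->len ?
				    addr + len : p->addr + p->len;

		memcpy(dst + (start - addr), p->orig + (start - p->addr),
		       end - start);
	}
}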
On 7/6/23 13:34, Peter Zijlstra wrote:
> On Thu, Jul 06, 2023 at 06:00:14PM +0900, Masami Hiramatsu wrote:
>> On Thu, 6 Jul 2023 09:17:05 +0200
>> Peter Zijlstra <peterz@infradead.org> wrote:
>>
>>> On Thu, Jul 06, 2023 at 09:47:23AM +0900, Masami Hiramatsu wrote:
>>>
>>>>>> If I understand correctly, all indirect jumps will be replaced with
>>>>>> JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
>>>>>> instructions, not calls. So functions that only have indirect calls
>>>>>> still allow optprobes.
>>>>>
>>>>> With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent to a
>>>>> C indirect jump.
>>>>
>>>> If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is not
>>>> using jump-tables by default, so we can focus on gcc. In that case the
>>>> current check still works, correct?
>>>
>>> IIRC clang can use jump tables, but like GCC needs RETPOLINE=n and
>>> IBT=n, so effectively nobody has them.
>>
>> So if it requires RETPOLINE=n, the current __indirect_thunk_start/end check
>> is not required, right? (That code is guarded by "#ifdef CONFIG_RETPOLINE".)
>
> Correct.

Thank you both for the explanation.

If I understand correctly, it means this second patch can be dropped and
I can instead replace it with a removal of the mentioned check. That
will also void the main motivation for the first patch, but that one
should still be at least useful to make the LTO_CLANG=y build lay out
the code in the same way as with other configurations.

I will post an updated series with these changes.

-- Petr
On Sat, 8 Jul 2023 16:18:29 +0200
Petr Pavlu <petr.pavlu@suse.com> wrote:

> On 7/6/23 13:34, Peter Zijlstra wrote:
> > On Thu, Jul 06, 2023 at 06:00:14PM +0900, Masami Hiramatsu wrote:
> >> On Thu, 6 Jul 2023 09:17:05 +0200
> >> Peter Zijlstra <peterz@infradead.org> wrote:
> >>
> >>> On Thu, Jul 06, 2023 at 09:47:23AM +0900, Masami Hiramatsu wrote:
> >>>
> >>>>>> If I understand correctly, all indirect jumps will be replaced with
> >>>>>> JMP_NOSPEC. If you read insn_jump_into_range, it only checks jump
> >>>>>> instructions, not calls. So functions that only have indirect calls
> >>>>>> still allow optprobes.
> >>>>>
> >>>>> With the introduction of kCFI, JMP_NOSPEC is no longer an equivalent
> >>>>> to a C indirect jump.
> >>>>
> >>>> If I understand correctly, kCFI is enabled by CFI_CLANG, and clang is
> >>>> not using jump-tables by default, so we can focus on gcc. In that case
> >>>> the current check still works, correct?
> >>>
> >>> IIRC clang can use jump tables, but like GCC needs RETPOLINE=n and
> >>> IBT=n, so effectively nobody has them.
> >>
> >> So if it requires RETPOLINE=n, the current __indirect_thunk_start/end
> >> check is not required, right? (That code is guarded by
> >> "#ifdef CONFIG_RETPOLINE".)
> >
> > Correct.
>
> Thank you both for the explanation.
>
> If I understand correctly, it means this second patch can be dropped and
> I can instead replace it with a removal of the mentioned check. That
> will also void the main motivation for the first patch, but that one
> should still be at least useful to make the LTO_CLANG=y build lay out
> the code in the same way as with other configurations.

Yes, something like removing the __indirect_thunk_start/end check and
disabling insn_is_indirect_jump() when defined(CONFIG_RETPOLINE) ||
defined(CONFIG_X86_KERNEL_IBT). The kCFI case can also be handled later,
in another series.

Thank you,

> I will post an updated series with these changes.
>
> -- Petr
>
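A rough sketch of the direction agreed on here (this is not the patch that
was eventually posted; the helper name is made up, and the real change would
live in arch/x86/kernel/kprobes/opt.c):

/*
 * Sketch only: with CONFIG_RETPOLINE or CONFIG_X86_KERNEL_IBT the kernel
 * is built with -fno-jump-tables, so an indirect jump never targets a
 * label inside the same function and the conservative per-function scan
 * for indirect jumps can be skipped entirely.
 */
static int indirect_jump_scan_needed(void)
{
#if defined(CONFIG_RETPOLINE) || defined(CONFIG_X86_KERNEL_IBT)
	return 0;
#else
	return 1;	/* keep scanning; jump tables may exist */
#endif
}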
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index a4cd04c458df..dd5b0a68cf84 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -133,9 +133,7 @@ SECTIONS
 		KPROBES_TEXT
 		SOFTIRQENTRY_TEXT
 #ifdef CONFIG_RETPOLINE
-		__indirect_thunk_start = .;
 		*(.text..__x86.*)
-		__indirect_thunk_end = .;
 #endif
 		STATIC_CALL_TEXT
 
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 3bea96341d00..f45a3e7f776f 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -14,6 +14,7 @@
 
 .section .text..__x86.indirect_thunk
 
+SYM_ENTRY(__indirect_thunk_start, SYM_L_GLOBAL, SYM_A_NONE)
 
 .macro POLINE reg
 	ANNOTATE_INTRA_FUNCTION_CALL
@@ -125,6 +126,9 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #include <asm/GEN-for-each-reg.h>
 #undef GEN
 #endif
+
+SYM_ENTRY(__indirect_thunk_end, SYM_L_GLOBAL, SYM_A_NONE)
+
 /*
  * This function name is magical and is used by -mfunction-return=thunk-extern
  * for the compiler to generate JMPs to it.
Functions can_optimize() and insn_is_indirect_jump() consider jumps to
the range [__indirect_thunk_start, __indirect_thunk_end] as indirect
jumps and prevent use of optprobes in functions containing them.

Linker script arch/x86/kernel/vmlinux.lds.S also places the special
section .text.__x86.return_thunk, which contains the return thunk, into
this range. As a result, machines which use the return thunk as a
mitigation and don't have it patched by any alternative end up not being
able to use optprobes in any regular function.

The return thunk doesn't need to be treated as an indirect jump from the
perspective of insn_is_indirect_jump(). It returns to a caller and
cannot land in an optprobe jump operand, which is what the
insn_is_indirect_jump() check is there to prevent.

Fix the problem by defining the symbols __indirect_thunk_start and
__indirect_thunk_end directly in arch/x86/lib/retpoline.S. This is
possible because commit 9bc0bb50727c ("objtool/x86: Rewrite retpoline
thunk calls") made all indirect thunks present in a single section.

Fixes: 0b53c374b9ef ("x86/retpoline: Use -mfunction-return")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
---
 arch/x86/kernel/vmlinux.lds.S | 2 --
 arch/x86/lib/retpoline.S      | 4 ++++
 2 files changed, 4 insertions(+), 2 deletions(-)