Message ID | 20211026120310.296470217@infradead.org (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | x86: Rewrite the retpoline rewrite logic |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Not a local patch |
bpf/vmtest-bpf-next-PR | fail | merge-conflict |
bpf/vmtest-bpf-PR | fail | merge-conflict |
bpf/vmtest-bpf-next | pending | VM_Test |
On Tue, Oct 26, 2021 at 02:01:43PM +0200, Peter Zijlstra wrote:
> +	op = insn->opcode.bytes[0];
> +
> +	/*
> +	 * Convert:
> +	 *
> +	 *   Jcc.d32 __x86_indirect_thunk_\reg
> +	 *
> +	 * into:
> +	 *
> +	 *   Jncc.d8 1f
> +	 *   JMP *%\reg
> +	 *   NOP
> +	 * 1:
> +	 */

Let's explain the second part of the test better:

	/* Jcc opcodes are in the range 0x80-0x8f */

Yeah, you have that range check below but still.

> +	if (op == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80) {
> +		cc = insn->opcode.bytes[1] & 0xf;
> +		cc ^= 1; /* invert condition */
> +
> +		bytes[i++] = 0x70 + cc; /* Jcc.d8 */
> +		bytes[i++] = insn->length - 2;

maybe put at the end here:

	/* 2 == sizeof(Jcc.d8) */

to have it explicit what that 2 means.

But yeah, looks good.

Thx.

> +
> +		op = JMP32_INSN_OPCODE;
> +	}
> +
> +	ret = emit_indirect(op, reg, bytes + i);
> +	if (ret < 0)
> +		return ret;
> +	i += ret;
>
> 	for (; i < insn->length;)
> 		bytes[i++] = BYTES_NOP1;
> @@ -443,6 +469,10 @@ void __init_or_module noinline apply_ret
> 	case JMP32_INSN_OPCODE:
> 		break;
>
> +	case 0x0f: /* escape */
> +		if (op2 >= 0x80 && op2 <= 0x8f)
> +			break;
> +		fallthrough;
> 	default:
> 		WARN_ON_ONCE(1);
> 		continue;
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -393,7 +393,8 @@ static int emit_indirect(int op, int reg
 static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
 {
 	retpoline_thunk_t *target;
-	int reg, i = 0;
+	int reg, ret, i = 0;
+	u8 op, cc;
 
 	target = addr + insn->length + insn->immediate.value;
 	reg = target - __x86_indirect_thunk_array;
@@ -407,9 +408,34 @@ static int patch_retpoline(void *addr, s
 	if (cpu_feature_enabled(X86_FEATURE_RETPOLINE))
 		return -1;
 
-	i = emit_indirect(insn->opcode.bytes[0], reg, bytes);
-	if (i < 0)
-		return i;
+	op = insn->opcode.bytes[0];
+
+	/*
+	 * Convert:
+	 *
+	 *   Jcc.d32 __x86_indirect_thunk_\reg
+	 *
+	 * into:
+	 *
+	 *   Jncc.d8 1f
+	 *   JMP *%\reg
+	 *   NOP
+	 * 1:
+	 */
+	if (op == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80) {
+		cc = insn->opcode.bytes[1] & 0xf;
+		cc ^= 1; /* invert condition */
+
+		bytes[i++] = 0x70 + cc; /* Jcc.d8 */
+		bytes[i++] = insn->length - 2;
+
+		op = JMP32_INSN_OPCODE;
+	}
+
+	ret = emit_indirect(op, reg, bytes + i);
+	if (ret < 0)
+		return ret;
+	i += ret;
 
 	for (; i < insn->length;)
 		bytes[i++] = BYTES_NOP1;
@@ -443,6 +469,10 @@ void __init_or_module noinline apply_ret
 	case JMP32_INSN_OPCODE:
 		break;
 
+	case 0x0f: /* escape */
+		if (op2 >= 0x80 && op2 <= 0x8f)
+			break;
+		fallthrough;
 	default:
 		WARN_ON_ONCE(1);
 		continue;
Handle the rare cases where the compiler (clang) does an indirect
conditional tail-call using:

  Jcc __x86_indirect_thunk_\reg

For the !RETPOLINE case this can be rewritten to fit the original (6
byte) instruction like:

  Jncc.d8 1f
  JMP *%\reg
  NOP
1:

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/kernel/alternative.c |   38 ++++++++++++++++++++++++++++++++++----
 1 file changed, 34 insertions(+), 4 deletions(-)