Message ID: 20191017000554.11927-3-sean.j.christopherson@intel.com (mailing list archive)
State: New, archived
Series: x86/vdso: sgx: Bug fixes for v23
diff --git a/arch/x86/entry/vdso/vsgx_enter_enclave.S b/arch/x86/entry/vdso/vsgx_enter_enclave.S
index e56737cc9f2c..d36043b99dc6 100644
--- a/arch/x86/entry/vdso/vsgx_enter_enclave.S
+++ b/arch/x86/entry/vdso/vsgx_enter_enclave.S
@@ -142,10 +142,10 @@ ENTRY(__vdso_sgx_enter_enclave)

	/*
	 * Align stack per x86_64 ABI. Note, %rsp needs to be 16-byte aligned
-	 * _after_ pushing the three parameters on the stack.
+	 * _after_ pushing the parameters on the stack, hence the bonus push.
	 */
	and	$-0x10, %rsp
-	sub	$0x8, %rsp
+	push	%rax

	/* Push @e, the "return" value and @tcs as params to the callback. */
	push	0x18(%rbp)
Use a "PUSH reg" instead of "SUB imm32, reg" to align the stack. The
PUSH is a one-byte opcode, whereas the SUB is a four-byte opcode.

Suggested-by: Cedric Xing <cedric.xing@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/entry/vdso/vsgx_enter_enclave.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
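The alignment invariant the patched comment describes can be checked in a few lines: after `and $-0x10, %rsp` the stack is 16-byte aligned, the bonus `push %rax` subtracts 8, and the three subsequent parameter pushes subtract 24 more, leaving %rsp 16-byte aligned again at the callback's entry. A minimal sketch of that arithmetic (the starting %rsp value is an arbitrary example, not from the patch):

```python
def rsp_after_pushes(rsp: int) -> int:
    """Mimic the vDSO sequence: and $-0x10, %rsp; push %rax;
    then push @e, the "return" value, and @tcs (three 8-byte pushes)."""
    rsp &= -0x10          # and $-0x10, %rsp -> 16-byte aligned
    rsp -= 8              # push %rax (the bonus push)
    rsp -= 3 * 8          # three parameter pushes for the callback
    return rsp

# Four 8-byte pushes total = 32 bytes, so 16-byte alignment is preserved.
print(rsp_after_pushes(0x7FFFFFFFE358) % 16)
```

Without the bonus push, only the three parameter pushes (24 bytes) would land %rsp at 8 mod 16, violating the x86_64 ABI requirement at the call site; replacing `sub $0x8, %rsp` with `push %rax` keeps the extra 8-byte adjustment while shrinking the instruction from four bytes to one.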