From patchwork Thu Dec 5 00:09:38 2019
X-Patchwork-Id: 11273777
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier,
    Herbert Xu, "David S. Miller", Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, "H. Peter Anvin", x86@kernel.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v10 01/11] x86/crypto: Adapt assembly for PIE support
Date: Wed, 4 Dec 2019 16:09:38 -0800
Message-Id: <20191205000957.112719-2-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

Change the assembly code to use only relative references of symbols for
the kernel to be PIE compatible.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
---
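The pattern throughout this patch is mechanical: every absolute data
reference gains an explicit (%rip) base, and every table lookup that
needs an index register first materializes the table base with leaq,
because x86-64 cannot encode a RIP-relative address together with an
index register. A minimal sketch of both rewrites (illustrative only,
not taken from the patch; register choices are arbitrary):

	/* Before: absolute addresses, need 32-bit absolute relocations */
	movdqa	.Lmask, %xmm0
	xorq	table(, %rcx, 8), %rax

	/* After: RIP-relative, position-independent */
	movdqa	.Lmask(%rip), %xmm0
	leaq	table(%rip), %rdx	/* (%rip) cannot combine with an index */
	xorq	(%rdx, %rcx, 8), %rax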
 arch/x86/crypto/aegis128-aesni-asm.S         |  6 +-
 arch/x86/crypto/aesni-intel_asm.S            |  8 +-
 arch/x86/crypto/aesni-intel_avx-x86_64.S     |  3 +-
 arch/x86/crypto/camellia-aesni-avx-asm_64.S  | 42 ++++-----
 arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 44 ++++-----
 arch/x86/crypto/camellia-x86_64-asm_64.S     |  8 +-
 arch/x86/crypto/cast5-avx-x86_64-asm_64.S    | 50 +++++-----
 arch/x86/crypto/cast6-avx-x86_64-asm_64.S    | 44 +++++----
 arch/x86/crypto/des3_ede-asm_64.S            | 96 +++++++++++++-------
 arch/x86/crypto/ghash-clmulni-intel_asm.S    |  4 +-
 arch/x86/crypto/glue_helper-asm-avx.S        |  4 +-
 arch/x86/crypto/glue_helper-asm-avx2.S       |  6 +-
 arch/x86/crypto/sha256-avx2-asm.S            | 18 ++--
 13 files changed, 191 insertions(+), 142 deletions(-)

diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
index 51d46d93efbc..c4bd32d8ca43 100644
--- a/arch/x86/crypto/aegis128-aesni-asm.S
+++ b/arch/x86/crypto/aegis128-aesni-asm.S
@@ -200,8 +200,8 @@ SYM_FUNC_START(crypto_aegis128_aesni_init)
 	movdqa KEY, STATE4

 	/* load the constants: */
-	movdqa .Laegis128_const_0, STATE2
-	movdqa .Laegis128_const_1, STATE1
+	movdqa .Laegis128_const_0(%rip), STATE2
+	movdqa .Laegis128_const_1(%rip), STATE1
 	pxor STATE2, STATE3
 	pxor STATE1, STATE4
@@ -681,7 +681,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec_tail)
 	punpcklbw T0, T0
 	punpcklbw T0, T0
 	punpcklbw T0, T0
-	movdqa .Laegis128_counter, T1
+	movdqa .Laegis128_counter(%rip), T1
 	pcmpgtb T1, T0
 	pand T0, MSG

diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index d28503f99f58..2b8d501493c2 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -2597,7 +2597,7 @@ SYM_FUNC_END(aesni_cbc_dec)
  *	BSWAP_MASK == endian swapping mask
  */
 SYM_FUNC_START_LOCAL(_aesni_inc_init)
-	movaps .Lbswap_mask, BSWAP_MASK
+	movaps .Lbswap_mask(%rip), BSWAP_MASK
 	movaps IV, CTR
 	PSHUFB_XMM BSWAP_MASK CTR
 	mov $1, TCTR_LOW
@@ -2724,12 +2724,12 @@ SYM_FUNC_START(aesni_xts_crypt8)
 	cmpb $0, %cl
 	movl $0, %ecx
 	movl $240, %r10d
-	leaq _aesni_enc4, %r11
-	leaq _aesni_dec4, %rax
+	leaq _aesni_enc4(%rip), %r11
+	leaq _aesni_dec4(%rip), %rax
 	cmovel %r10d, %ecx
 	cmoveq %rax, %r11

-	movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
+	movdqa .Lgf128mul_x_ble_mask(%rip), GF128MUL_MASK
 	movups (IVP), IV
 	mov 480(KEYP), KLEN

diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
index bfa1c0b3e5b4..838221507976 100644
--- a/arch/x86/crypto/aesni-intel_avx-x86_64.S
+++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S
@@ -660,7 +660,8 @@ _get_AAD_rest0\@:
 	vpshufb and an array of shuffle masks */
 	movq	%r12, %r11
 	salq	$4, %r11
-	vmovdqu	aad_shift_arr(%r11), \T1
+	leaq	aad_shift_arr(%rip), %rax
+	vmovdqu	(%rax,%r11,), \T1
 	vpshufb \T1, \T7, \T7
 _get_AAD_rest_final\@:
 	vpshufb SHUF_MASK(%rip), \T7, \T7

diff --git a/arch/x86/crypto/camellia-aesni-avx-asm_64.S b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
index d01ddd73de65..8a203c507f81 100644
--- a/arch/x86/crypto/camellia-aesni-avx-asm_64.S
+++ b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
@@ -53,10 +53,10 @@
 	/* \
	 * S-function with AES subbytes \
	 */ \
-	vmovdqa .Linv_shift_row, t4; \
-	vbroadcastss .L0f0f0f0f, t7; \
-	vmovdqa .Lpre_tf_lo_s1, t0; \
-	vmovdqa .Lpre_tf_hi_s1, t1; \
+	vmovdqa .Linv_shift_row(%rip), t4; \
+	vbroadcastss .L0f0f0f0f(%rip), t7; \
+	vmovdqa .Lpre_tf_lo_s1(%rip), t0; \
+	vmovdqa .Lpre_tf_hi_s1(%rip), t1; \
	\
	/* AES inverse shift rows */ \
	vpshufb t4, x0, x0; \
@@ -69,8 +69,8 @@
	vpshufb t4, x6, x6; \
	\
	/* prefilter sboxes 1, 2 and 3 */ \
-	vmovdqa .Lpre_tf_lo_s4, t2; \
-	vmovdqa .Lpre_tf_hi_s4, t3; \
+	vmovdqa .Lpre_tf_lo_s4(%rip), t2; \
+	vmovdqa .Lpre_tf_hi_s4(%rip), t3; \
	filter_8bit(x0, t0, t1, t7, t6); \
	filter_8bit(x7, t0, t1, t7, t6); \
	filter_8bit(x1, t0, t1, t7, t6); \
@@ -84,8 +84,8 @@
	filter_8bit(x6, t2, t3, t7, t6); \
	\
	/* AES subbytes + AES shift rows */ \
-	vmovdqa .Lpost_tf_lo_s1, t0; \
-	vmovdqa .Lpost_tf_hi_s1, t1; \
+	vmovdqa .Lpost_tf_lo_s1(%rip), t0; \
+	vmovdqa .Lpost_tf_hi_s1(%rip), t1; \
	vaesenclast t4, x0, x0; \
	vaesenclast t4, x7, x7; \
	vaesenclast t4, x1, x1; \
@@ -96,16 +96,16 @@
	vaesenclast t4, x6, x6; \
	\
	/* postfilter sboxes 1 and 4 */ \
-	vmovdqa .Lpost_tf_lo_s3, t2; \
-	vmovdqa .Lpost_tf_hi_s3, t3; \
+	vmovdqa .Lpost_tf_lo_s3(%rip), t2; \
+	vmovdqa .Lpost_tf_hi_s3(%rip), t3; \
	filter_8bit(x0, t0, t1, t7, t6); \
	filter_8bit(x7, t0, t1, t7, t6); \
	filter_8bit(x3, t0, t1, t7, t6); \
	filter_8bit(x6, t0, t1, t7, t6); \
	\
	/* postfilter sbox 3 */ \
-	vmovdqa .Lpost_tf_lo_s2, t4; \
-	vmovdqa .Lpost_tf_hi_s2, t5; \
+	vmovdqa .Lpost_tf_lo_s2(%rip), t4; \
+	vmovdqa .Lpost_tf_hi_s2(%rip), t5; \
	filter_8bit(x2, t2, t3, t7, t6); \
	filter_8bit(x5, t2, t3, t7, t6); \
	\
@@ -444,7 +444,7 @@ SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
	transpose_4x4(c0, c1, c2, c3, a0, a1); \
	transpose_4x4(d0, d1, d2, d3, a0, a1); \
	\
-	vmovdqu .Lshufb_16x16b, a0; \
+	vmovdqu .Lshufb_16x16b(%rip), a0; \
	vmovdqu st1, a1; \
	vpshufb a0, a2, a2; \
	vpshufb a0, a3, a3; \
@@ -483,7 +483,7 @@ SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
 #define inpack16_pre(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \
		     y6, y7, rio, key) \
	vmovq key, x0; \
-	vpshufb .Lpack_bswap, x0, x0; \
+	vpshufb .Lpack_bswap(%rip), x0, x0; \
	\
	vpxor 0 * 16(rio), x0, y7; \
	vpxor 1 * 16(rio), x0, y6; \
@@ -534,7 +534,7 @@ SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
	vmovdqu x0, stack_tmp0; \
	\
	vmovq key, x0; \
-	vpshufb .Lpack_bswap, x0, x0; \
+	vpshufb .Lpack_bswap(%rip), x0, x0; \
	\
	vpxor x0, y7, y7; \
	vpxor x0, y6, y6; \
@@ -1017,7 +1017,7 @@ SYM_FUNC_START(camellia_ctr_16way)
 	subq $(16 * 16), %rsp;
 	movq %rsp, %rax;

-	vmovdqa .Lbswap128_mask, %xmm14;
+	vmovdqa .Lbswap128_mask(%rip), %xmm14;

 	/* load IV and byteswap */
 	vmovdqu (%rcx), %xmm0;
@@ -1066,7 +1066,7 @@ SYM_FUNC_START(camellia_ctr_16way)
 	/* inpack16_pre: */
 	vmovq (key_table)(CTX), %xmm15;
-	vpshufb .Lpack_bswap, %xmm15, %xmm15;
+	vpshufb .Lpack_bswap(%rip), %xmm15, %xmm15;
 	vpxor %xmm0, %xmm15, %xmm0;
 	vpxor %xmm1, %xmm15, %xmm1;
 	vpxor %xmm2, %xmm15, %xmm2;
@@ -1134,7 +1134,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way)
 	subq $(16 * 16), %rsp;
 	movq %rsp, %rax;

-	vmovdqa .Lxts_gf128mul_and_shl1_mask, %xmm14;
+	vmovdqa .Lxts_gf128mul_and_shl1_mask(%rip), %xmm14;

 	/* load IV */
 	vmovdqu (%rcx), %xmm0;
@@ -1210,7 +1210,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way)
 	/* inpack16_pre: */
 	vmovq (key_table)(CTX, %r8, 8), %xmm15;
-	vpshufb .Lpack_bswap, %xmm15, %xmm15;
+	vpshufb .Lpack_bswap(%rip), %xmm15, %xmm15;
 	vpxor 0 * 16(%rax), %xmm15, %xmm0;
 	vpxor %xmm1, %xmm15, %xmm1;
 	vpxor %xmm2, %xmm15, %xmm2;
@@ -1265,7 +1265,7 @@ SYM_FUNC_START(camellia_xts_enc_16way)
 	 */
 	xorl %r8d, %r8d; /* input whitening key, 0 for enc */

-	leaq __camellia_enc_blk16, %r9;
+	leaq __camellia_enc_blk16(%rip), %r9;

 	jmp camellia_xts_crypt_16way;
 SYM_FUNC_END(camellia_xts_enc_16way)
@@ -1283,7 +1283,7 @@ SYM_FUNC_START(camellia_xts_dec_16way)
 	movl $24, %eax;
 	cmovel %eax, %r8d; /* input whitening key, last for dec */

-	leaq __camellia_dec_blk16, %r9;
+	leaq __camellia_dec_blk16(%rip), %r9;

 	jmp camellia_xts_crypt_16way;
 SYM_FUNC_END(camellia_xts_dec_16way)

diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
index 563ef6e83cdd..d7f4cd4b1702 100644
--- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
+++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
@@ -65,12 +65,12 @@
 	/* \
	 * S-function with AES subbytes \
	 */ \
-	vbroadcasti128 .Linv_shift_row, t4; \
-	vpbroadcastd .L0f0f0f0f, t7; \
-	vbroadcasti128 .Lpre_tf_lo_s1, t5; \
-	vbroadcasti128 .Lpre_tf_hi_s1, t6; \
-	vbroadcasti128 .Lpre_tf_lo_s4, t2; \
-	vbroadcasti128 .Lpre_tf_hi_s4, t3; \
+	vbroadcasti128 .Linv_shift_row(%rip), t4; \
+	vpbroadcastd .L0f0f0f0f(%rip), t7; \
+	vbroadcasti128 .Lpre_tf_lo_s1(%rip), t5; \
+	vbroadcasti128 .Lpre_tf_hi_s1(%rip), t6; \
+	vbroadcasti128 .Lpre_tf_lo_s4(%rip), t2; \
+	vbroadcasti128 .Lpre_tf_hi_s4(%rip), t3; \
	\
	/* AES inverse shift rows */ \
	vpshufb t4, x0, x0; \
@@ -116,8 +116,8 @@
	vinserti128 $1, t2##_x, x6, x6; \
	vextracti128 $1, x1, t3##_x; \
	vextracti128 $1, x4, t2##_x; \
-	vbroadcasti128 .Lpost_tf_lo_s1, t0; \
-	vbroadcasti128 .Lpost_tf_hi_s1, t1; \
+	vbroadcasti128 .Lpost_tf_lo_s1(%rip), t0; \
+	vbroadcasti128 .Lpost_tf_hi_s1(%rip), t1; \
	vaesenclast t4##_x, x2##_x, x2##_x; \
	vaesenclast t4##_x, t6##_x, t6##_x; \
	vinserti128 $1, t6##_x, x2, x2; \
@@ -132,16 +132,16 @@
	vinserti128 $1, t2##_x, x4, x4; \
	\
	/* postfilter sboxes 1 and 4 */ \
-	vbroadcasti128 .Lpost_tf_lo_s3, t2; \
-	vbroadcasti128 .Lpost_tf_hi_s3, t3; \
+	vbroadcasti128 .Lpost_tf_lo_s3(%rip), t2; \
+	vbroadcasti128 .Lpost_tf_hi_s3(%rip), t3; \
	filter_8bit(x0, t0, t1, t7, t6); \
	filter_8bit(x7, t0, t1, t7, t6); \
	filter_8bit(x3, t0, t1, t7, t6); \
	filter_8bit(x6, t0, t1, t7, t6); \
	\
	/* postfilter sbox 3 */ \
-	vbroadcasti128 .Lpost_tf_lo_s2, t4; \
-	vbroadcasti128 .Lpost_tf_hi_s2, t5; \
+	vbroadcasti128 .Lpost_tf_lo_s2(%rip), t4; \
+	vbroadcasti128 .Lpost_tf_hi_s2(%rip), t5; \
	filter_8bit(x2, t2, t3, t7, t6); \
	filter_8bit(x5, t2, t3, t7, t6); \
	\
@@ -478,7 +478,7 @@ SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
	transpose_4x4(c0, c1, c2, c3, a0, a1); \
	transpose_4x4(d0, d1, d2, d3, a0, a1); \
	\
-	vbroadcasti128 .Lshufb_16x16b, a0; \
+	vbroadcasti128 .Lshufb_16x16b(%rip), a0; \
	vmovdqu st1, a1; \
	vpshufb a0, a2, a2; \
	vpshufb a0, a3, a3; \
@@ -517,7 +517,7 @@ SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
 #define inpack32_pre(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \
		     y6, y7, rio, key) \
	vpbroadcastq key, x0; \
-	vpshufb .Lpack_bswap, x0, x0; \
+	vpshufb .Lpack_bswap(%rip), x0, x0; \
	\
	vpxor 0 * 32(rio), x0, y7; \
	vpxor 1 * 32(rio), x0, y6; \
@@ -568,7 +568,7 @@ SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
	vmovdqu x0, stack_tmp0; \
	\
	vpbroadcastq key, x0; \
-	vpshufb .Lpack_bswap, x0, x0; \
+	vpshufb .Lpack_bswap(%rip), x0, x0; \
	\
	vpxor x0, y7, y7; \
	vpxor x0, y6, y6; \
@@ -1108,7 +1108,7 @@ SYM_FUNC_START(camellia_ctr_32way)
 	vmovdqu (%rcx), %xmm0;
 	vmovdqa %xmm0, %xmm1;
 	inc_le128(%xmm0, %xmm15, %xmm14);
-	vbroadcasti128 .Lbswap128_mask, %ymm14;
+	vbroadcasti128 .Lbswap128_mask(%rip), %ymm14;
 	vinserti128 $1, %xmm0, %ymm1, %ymm0;
 	vpshufb %ymm14, %ymm0, %ymm13;
 	vmovdqu %ymm13, 15 * 32(%rax);
@@ -1154,7 +1154,7 @@ SYM_FUNC_START(camellia_ctr_32way)
 	/* inpack32_pre: */
 	vpbroadcastq (key_table)(CTX), %ymm15;
-	vpshufb .Lpack_bswap, %ymm15, %ymm15;
+	vpshufb .Lpack_bswap(%rip), %ymm15, %ymm15;
 	vpxor %ymm0, %ymm15, %ymm0;
 	vpxor %ymm1, %ymm15, %ymm1;
 	vpxor %ymm2, %ymm15, %ymm2;
@@ -1238,13 +1238,13 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way)
 	subq $(16 * 32), %rsp;
 	movq %rsp, %rax;

-	vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_0, %ymm12;
+	vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_0(%rip), %ymm12;

 	/* load IV and construct second IV */
 	vmovdqu (%rcx), %xmm0;
 	vmovdqa %xmm0, %xmm15;
 	gf128mul_x_ble(%xmm0, %xmm12, %xmm13);
-	vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_1, %ymm13;
+	vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_1(%rip), %ymm13;
 	vinserti128 $1, %xmm0, %ymm15, %ymm0;
 	vpxor 0 * 32(%rdx), %ymm0, %ymm15;
 	vmovdqu %ymm15, 15 * 32(%rax);
@@ -1321,7 +1321,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way)
 	/* inpack32_pre: */
 	vpbroadcastq (key_table)(CTX, %r8, 8), %ymm15;
-	vpshufb .Lpack_bswap, %ymm15, %ymm15;
+	vpshufb .Lpack_bswap(%rip), %ymm15, %ymm15;
 	vpxor 0 * 32(%rax), %ymm15, %ymm0;
 	vpxor %ymm1, %ymm15, %ymm1;
 	vpxor %ymm2, %ymm15, %ymm2;
@@ -1379,7 +1379,7 @@ SYM_FUNC_START(camellia_xts_enc_32way)
 	xorl %r8d, %r8d; /* input whitening key, 0 for enc */

-	leaq __camellia_enc_blk32, %r9;
+	leaq __camellia_enc_blk32(%rip), %r9;

 	jmp camellia_xts_crypt_32way;
 SYM_FUNC_END(camellia_xts_enc_32way)
@@ -1397,7 +1397,7 @@ SYM_FUNC_START(camellia_xts_dec_32way)
 	movl $24, %eax;
 	cmovel %eax, %r8d; /* input whitening key, last for dec */

-	leaq __camellia_dec_blk32, %r9;
+	leaq __camellia_dec_blk32(%rip), %r9;

 	jmp camellia_xts_crypt_32way;
 SYM_FUNC_END(camellia_xts_dec_32way)

diff --git a/arch/x86/crypto/camellia-x86_64-asm_64.S b/arch/x86/crypto/camellia-x86_64-asm_64.S
index 1372e6408850..3a4e5cca583a 100644
--- a/arch/x86/crypto/camellia-x86_64-asm_64.S
+++ b/arch/x86/crypto/camellia-x86_64-asm_64.S
@@ -77,11 +77,13 @@
 #define RXORbl %r9b

 #define xor2ror16(T0, T1, tmp1, tmp2, ab, dst) \
+	leaq T0(%rip), tmp1; \
 	movzbl ab ## bl, tmp2 ## d; \
+	xorq (tmp1, tmp2, 8), dst; \
+	leaq T1(%rip), tmp2; \
 	movzbl ab ## bh, tmp1 ## d; \
-	rorq $16, ab; \
-	xorq T0(, tmp2, 8), dst; \
-	xorq T1(, tmp1, 8), dst;
+	xorq (tmp2, tmp1, 8), dst; \
+	rorq $16, ab;

 /**********************************************************************
   1-way camellia

diff --git a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
index 8a6181b08b59..ffa9ed5f096b 100644
--- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
@@ -83,16 +83,20 @@
 #define lookup_32bit(src, dst, op1, op2, op3, interleave_op, il_reg) \
-	movzbl src ## bh, RID1d; \
-	movzbl src ## bl, RID2d; \
-	shrq $16, src; \
-	movl s1(, RID1, 4), dst ## d; \
-	op1 s2(, RID2, 4), dst ## d; \
-	movzbl src ## bh, RID1d; \
-	movzbl src ## bl, RID2d; \
-	interleave_op(il_reg); \
-	op2 s3(, RID1, 4), dst ## d; \
-	op3 s4(, RID2, 4), dst ## d;
+	movzbl src ## bh, RID1d; \
+	leaq s1(%rip), RID2; \
+	movl (RID2, RID1, 4), dst ## d; \
+	movzbl src ## bl, RID2d; \
+	leaq s2(%rip), RID1; \
+	op1 (RID1, RID2, 4), dst ## d; \
+	shrq $16, src; \
+	movzbl src ## bh, RID1d; \
+	leaq s3(%rip), RID2; \
+	op2 (RID2, RID1, 4), dst ## d; \
+	movzbl src ## bl, RID2d; \
+	leaq s4(%rip), RID1; \
+	op3 (RID1, RID2, 4), dst ## d; \
+	interleave_op(il_reg);

 #define dummy(d) /* do nothing */
@@ -151,15 +155,15 @@
 	subround(l ## 3, r ## 3, l ## 4, r ## 4, f);

 #define enc_preload_rkr() \
-	vbroadcastss .L16_mask, RKR; \
+	vbroadcastss .L16_mask(%rip), RKR; \
	/* add 16-bit rotation to key rotations (mod 32) */ \
	vpxor kr(CTX), RKR, RKR;

 #define dec_preload_rkr() \
-	vbroadcastss .L16_mask, RKR; \
+	vbroadcastss .L16_mask(%rip), RKR; \
	/* add 16-bit rotation to key rotations (mod 32) */ \
	vpxor kr(CTX), RKR, RKR; \
-	vpshufb .Lbswap128_mask, RKR, RKR;
+	vpshufb .Lbswap128_mask(%rip), RKR, RKR;

 #define transpose_2x4(x0, x1, t0, t1) \
	vpunpckldq x1, x0, t0; \
@@ -236,9 +240,9 @@ SYM_FUNC_START_LOCAL(__cast5_enc_blk16)
 	movq %rdi, CTX;

-	vmovdqa .Lbswap_mask, RKM;
-	vmovd .Lfirst_mask, R1ST;
-	vmovd .L32_mask, R32;
+	vmovdqa .Lbswap_mask(%rip), RKM;
+	vmovd .Lfirst_mask(%rip), R1ST;
+	vmovd .L32_mask(%rip), R32;
 	enc_preload_rkr();

 	inpack_blocks(RL1, RR1, RTMP, RX, RKM);
@@ -272,7 +276,7 @@ SYM_FUNC_START_LOCAL(__cast5_enc_blk16)
 	popq %rbx;
 	popq %r15;

-	vmovdqa .Lbswap_mask, RKM;
+	vmovdqa .Lbswap_mask(%rip), RKM;

 	outunpack_blocks(RR1, RL1, RTMP, RX, RKM);
 	outunpack_blocks(RR2, RL2, RTMP, RX, RKM);
@@ -310,9 +314,9 @@ SYM_FUNC_START_LOCAL(__cast5_dec_blk16)
 	movq %rdi, CTX;

-	vmovdqa .Lbswap_mask, RKM;
-	vmovd .Lfirst_mask, R1ST;
-	vmovd .L32_mask, R32;
+	vmovdqa .Lbswap_mask(%rip), RKM;
+	vmovd .Lfirst_mask(%rip), R1ST;
+	vmovd .L32_mask(%rip), R32;
 	dec_preload_rkr();

 	inpack_blocks(RL1, RR1, RTMP, RX, RKM);
@@ -343,7 +347,7 @@ SYM_FUNC_START_LOCAL(__cast5_dec_blk16)
 	round(RL, RR, 1, 2);
 	round(RR, RL, 0, 1);

-	vmovdqa .Lbswap_mask, RKM;
+	vmovdqa .Lbswap_mask(%rip), RKM;
 	popq %rbx;
 	popq %r15;
@@ -506,8 +510,8 @@ SYM_FUNC_START(cast5_ctr_16way)
 	vpcmpeqd RKR, RKR, RKR;
 	vpaddq RKR, RKR, RKR; /* low: -2, high: -2 */
-	vmovdqa .Lbswap_iv_mask, R1ST;
-	vmovdqa .Lbswap128_mask, RKM;
+	vmovdqa .Lbswap_iv_mask(%rip), R1ST;
+	vmovdqa .Lbswap128_mask(%rip), RKM;

 	/* load IV and byteswap */
 	vmovq (%rcx), RX;

diff --git a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
index 932a3ce32a88..c46a61411f98 100644
--- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
@@ -83,16 +83,20 @@
 #define lookup_32bit(src, dst, op1, op2, op3, interleave_op, il_reg) \
-	movzbl src ## bh, RID1d; \
-	movzbl src ## bl, RID2d; \
-	shrq $16, src; \
-	movl s1(, RID1, 4), dst ## d; \
-	op1 s2(, RID2, 4), dst ## d; \
-	movzbl src ## bh, RID1d; \
-	movzbl src ## bl, RID2d; \
-	interleave_op(il_reg); \
-	op2 s3(, RID1, 4), dst ## d; \
-	op3 s4(, RID2, 4), dst ## d;
+	movzbl src ## bh, RID1d; \
+	leaq s1(%rip), RID2; \
+	movl (RID2, RID1, 4), dst ## d; \
+	movzbl src ## bl, RID2d; \
+	leaq s2(%rip), RID1; \
+	op1 (RID1, RID2, 4), dst ## d; \
+	shrq $16, src; \
+	movzbl src ## bh, RID1d; \
+	leaq s3(%rip), RID2; \
+	op2 (RID2, RID1, 4), dst ## d; \
+	movzbl src ## bl, RID2d; \
+	leaq s4(%rip), RID1; \
+	op3 (RID1, RID2, 4), dst ## d; \
+	interleave_op(il_reg);

 #define dummy(d) /* do nothing */
@@ -175,10 +179,10 @@
 	qop(RD, RC, 1);

 #define shuffle(mask) \
-	vpshufb mask, RKR, RKR;
+	vpshufb mask(%rip), RKR, RKR;

 #define preload_rkr(n, do_mask, mask) \
-	vbroadcastss .L16_mask, RKR; \
+	vbroadcastss .L16_mask(%rip), RKR; \
	/* add 16-bit rotation to key rotations (mod 32) */ \
	vpxor (kr+n*16)(CTX), RKR, RKR; \
	do_mask(mask);
@@ -260,9 +264,9 @@ SYM_FUNC_START_LOCAL(__cast6_enc_blk8)
 	movq %rdi, CTX;

-	vmovdqa .Lbswap_mask, RKM;
-	vmovd .Lfirst_mask, R1ST;
-	vmovd .L32_mask, R32;
+	vmovdqa .Lbswap_mask(%rip), RKM;
+	vmovd .Lfirst_mask(%rip), R1ST;
+	vmovd .L32_mask(%rip), R32;

 	inpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
 	inpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
@@ -286,7 +290,7 @@ SYM_FUNC_START_LOCAL(__cast6_enc_blk8)
 	popq %rbx;
 	popq %r15;

-	vmovdqa .Lbswap_mask, RKM;
+	vmovdqa .Lbswap_mask(%rip), RKM;

 	outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
 	outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
@@ -308,9 +312,9 @@ SYM_FUNC_START_LOCAL(__cast6_dec_blk8)
 	movq %rdi, CTX;

-	vmovdqa .Lbswap_mask, RKM;
-	vmovd .Lfirst_mask, R1ST;
-	vmovd .L32_mask, R32;
+	vmovdqa .Lbswap_mask(%rip), RKM;
+	vmovd .Lfirst_mask(%rip), R1ST;
+	vmovd .L32_mask(%rip), R32;

 	inpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
 	inpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);
@@ -334,7 +338,7 @@ SYM_FUNC_START_LOCAL(__cast6_dec_blk8)
 	popq %rbx;
 	popq %r15;

-	vmovdqa .Lbswap_mask, RKM;
+	vmovdqa .Lbswap_mask(%rip), RKM;

 	outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM);
 	outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);

diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S
index fac0fdc3f25d..c841de5fe065 100644
--- a/arch/x86/crypto/des3_ede-asm_64.S
+++ b/arch/x86/crypto/des3_ede-asm_64.S
@@ -129,21 +129,29 @@
 	movzbl RW0bl, RT2d; \
	movzbl RW0bh, RT3d; \
	shrq $16, RW0; \
-	movq s8(, RT0, 8), RT0; \
-	xorq s6(, RT1, 8), to; \
+	leaq s8(%rip), RW1; \
+	movq (RW1, RT0, 8), RT0; \
+	leaq s6(%rip), RW1; \
+	xorq (RW1, RT1, 8), to; \
	movzbl RW0bl, RL1d; \
	movzbl RW0bh, RT1d; \
	shrl $16, RW0d; \
-	xorq s4(, RT2, 8), RT0; \
-	xorq s2(, RT3, 8), to; \
+	leaq s4(%rip), RW1; \
+	xorq (RW1, RT2, 8), RT0; \
+	leaq s2(%rip), RW1; \
+	xorq (RW1, RT3, 8), to; \
	movzbl RW0bl, RT2d; \
	movzbl RW0bh, RT3d; \
-	xorq s7(, RL1, 8), RT0; \
-	xorq s5(, RT1, 8), to; \
-	xorq s3(, RT2, 8), RT0; \
+	leaq s7(%rip), RW1; \
+	xorq (RW1, RL1, 8), RT0; \
+	leaq s5(%rip), RW1; \
+	xorq (RW1, RT1, 8), to; \
+	leaq s3(%rip), RW1; \
+	xorq (RW1, RT2, 8), RT0; \
	load_next_key(n, RW0); \
	xorq RT0, to; \
-	xorq s1(, RT3, 8), to; \
+	leaq s1(%rip), RW1; \
+	xorq (RW1, RT3, 8), to; \

 #define load_next_key(n, RWx) \
	movq (((n) + 1) * 8)(CTX), RWx;
@@ -355,65 +363,89 @@ SYM_FUNC_END(des3_ede_x86_64_crypt_blk)
 	movzbl RW0bl, RT3d; \
	movzbl RW0bh, RT1d; \
	shrq $16, RW0; \
-	xorq s8(, RT3, 8), to##0; \
-	xorq s6(, RT1, 8), to##0; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
	movzbl RW0bl, RT3d; \
	movzbl RW0bh, RT1d; \
	shrq $16, RW0; \
-	xorq s4(, RT3, 8), to##0; \
-	xorq s2(, RT1, 8), to##0; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
	movzbl RW0bl, RT3d; \
	movzbl RW0bh, RT1d; \
	shrl $16, RW0d; \
-	xorq s7(, RT3, 8), to##0; \
-	xorq s5(, RT1, 8), to##0; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
	movzbl RW0bl, RT3d; \
	movzbl RW0bh, RT1d; \
	load_next_key(n, RW0); \
-	xorq s3(, RT3, 8), to##0; \
-	xorq s1(, RT1, 8), to##0; \
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##0; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##0; \
	xorq from##1, RW1; \
	movzbl RW1bl, RT3d; \
	movzbl RW1bh, RT1d; \
	shrq $16, RW1; \
-	xorq s8(, RT3, 8), to##1; \
-	xorq s6(, RT1, 8), to##1; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
	movzbl RW1bl, RT3d; \
	movzbl RW1bh, RT1d; \
	shrq $16, RW1; \
-	xorq s4(, RT3, 8), to##1; \
-	xorq s2(, RT1, 8), to##1; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
	movzbl RW1bl, RT3d; \
	movzbl RW1bh, RT1d; \
	shrl $16, RW1d; \
-	xorq s7(, RT3, 8), to##1; \
-	xorq s5(, RT1, 8), to##1; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
	movzbl RW1bl, RT3d; \
	movzbl RW1bh, RT1d; \
	do_movq(RW0, RW1); \
-	xorq s3(, RT3, 8), to##1; \
-	xorq s1(, RT1, 8), to##1; \
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##1; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##1; \
	xorq from##2, RW2; \
	movzbl RW2bl, RT3d; \
	movzbl RW2bh, RT1d; \
	shrq $16, RW2; \
-	xorq s8(, RT3, 8), to##2; \
-	xorq s6(, RT1, 8), to##2; \
+	leaq s8(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s6(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
	movzbl RW2bl, RT3d; \
	movzbl RW2bh, RT1d; \
	shrq $16, RW2; \
-	xorq s4(, RT3, 8), to##2; \
-	xorq s2(, RT1, 8), to##2; \
+	leaq s4(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s2(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
	movzbl RW2bl, RT3d; \
	movzbl RW2bh, RT1d; \
	shrl $16, RW2d; \
-	xorq s7(, RT3, 8), to##2; \
-	xorq s5(, RT1, 8), to##2; \
+	leaq s7(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s5(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2; \
	movzbl RW2bl, RT3d; \
	movzbl RW2bh, RT1d; \
	do_movq(RW0, RW2); \
-	xorq s3(, RT3, 8), to##2; \
-	xorq s1(, RT1, 8), to##2;
+	leaq s3(%rip), RT2; \
+	xorq (RT2, RT3, 8), to##2; \
+	leaq s1(%rip), RT2; \
+	xorq (RT2, RT1, 8), to##2;

 #define __movq(src, dst) \
	movq src, dst;

diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S
index bb9735fbb865..59717ade66d7 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
+++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
@@ -94,7 +94,7 @@ SYM_FUNC_START(clmul_ghash_mul)
 	FRAME_BEGIN
 	movups (%rdi), DATA
 	movups (%rsi), SHASH
-	movaps .Lbswap_mask, BSWAP
+	movaps .Lbswap_mask(%rip), BSWAP
 	PSHUFB_XMM BSWAP DATA
 	call __clmul_gf128mul_ble
 	PSHUFB_XMM BSWAP DATA
@@ -111,7 +111,7 @@ SYM_FUNC_START(clmul_ghash_update)
 	FRAME_BEGIN
 	cmp $16, %rdx
 	jb .Lupdate_just_ret	# check length
-	movaps .Lbswap_mask, BSWAP
+	movaps .Lbswap_mask(%rip), BSWAP
 	movups (%rdi), DATA
 	movups (%rcx), SHASH
 	PSHUFB_XMM BSWAP DATA

diff --git a/arch/x86/crypto/glue_helper-asm-avx.S b/arch/x86/crypto/glue_helper-asm-avx.S
index d08fc575ef7f..a9736f85fef0 100644
--- a/arch/x86/crypto/glue_helper-asm-avx.S
+++ b/arch/x86/crypto/glue_helper-asm-avx.S
@@ -44,7 +44,7 @@
 #define load_ctr_8way(iv, bswap, x0, x1, x2, x3, x4, x5, x6, x7, t0, t1, t2) \
	vpcmpeqd t0, t0, t0; \
	vpsrldq $8, t0, t0; /* low: -1, high: 0 */ \
-	vmovdqa bswap, t1; \
+	vmovdqa bswap(%rip), t1; \
	\
	/* load IV and byteswap */ \
	vmovdqu (iv), x7; \
@@ -89,7 +89,7 @@
 #define load_xts_8way(iv, src, dst, x0, x1, x2, x3, x4, x5, x6, x7, tiv, t0, \
		      t1, xts_gf128mul_and_shl1_mask) \
-	vmovdqa xts_gf128mul_and_shl1_mask, t0; \
+	vmovdqa xts_gf128mul_and_shl1_mask(%rip), t0; \
	\
	/* load IV */ \
	vmovdqu (iv), tiv; \

diff --git a/arch/x86/crypto/glue_helper-asm-avx2.S b/arch/x86/crypto/glue_helper-asm-avx2.S
index d84508c85c13..efbf4953707e 100644
--- a/arch/x86/crypto/glue_helper-asm-avx2.S
+++ b/arch/x86/crypto/glue_helper-asm-avx2.S
@@ -62,7 +62,7 @@
 	vmovdqu (iv), t2x; \
	vmovdqa t2x, t3x; \
	inc_le128(t2x, t0x, t1x); \
-	vbroadcasti128 bswap, t1; \
+	vbroadcasti128 bswap(%rip), t1; \
	vinserti128 $1, t2x, t3, t2; /* ab: le0 ; cd: le1 */ \
	vpshufb t1, t2, x0; \
	\
@@ -119,13 +119,13 @@
		      tivx, t0, t0x, t1, t1x, t2, t2x, t3, \
		      xts_gf128mul_and_shl1_mask_0, \
		      xts_gf128mul_and_shl1_mask_1) \
-	vbroadcasti128 xts_gf128mul_and_shl1_mask_0, t1; \
+	vbroadcasti128 xts_gf128mul_and_shl1_mask_0(%rip), t1; \
	\
	/* load IV and construct second IV */ \
	vmovdqu (iv), tivx; \
	vmovdqa tivx, t0x; \
	gf128mul_x_ble(tivx, t1x, t2x); \
-	vbroadcasti128 xts_gf128mul_and_shl1_mask_1, t2; \
+	vbroadcasti128 xts_gf128mul_and_shl1_mask_1(%rip), t2; \
	vinserti128 $1, tivx, t0, tiv; \
	vpxor (0*32)(src), tiv, x0; \
	vmovdqu tiv, (0*32)(dst); \

diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
index 519b551ad576..c516f61bb8d9 100644
--- a/arch/x86/crypto/sha256-avx2-asm.S
+++ b/arch/x86/crypto/sha256-avx2-asm.S
@@ -592,19 +592,23 @@ last_block_enter:
 	.align 16
 loop1:
-	vpaddd	K256+0*32(SRND), X0, XFER
+	leaq	K256(%rip), INP
+	vpaddd	0*32(INP, SRND), X0, XFER
 	vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
 	FOUR_ROUNDS_AND_SCHED	_XFER + 0*32

-	vpaddd	K256+1*32(SRND), X0, XFER
+	leaq	K256(%rip), INP
+	vpaddd	1*32(INP, SRND), X0, XFER
 	vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
 	FOUR_ROUNDS_AND_SCHED	_XFER + 1*32

-	vpaddd	K256+2*32(SRND), X0, XFER
+	leaq	K256(%rip), INP
+	vpaddd	2*32(INP, SRND), X0, XFER
 	vmovdqa XFER, 2*32+_XFER(%rsp, SRND)
 	FOUR_ROUNDS_AND_SCHED	_XFER + 2*32

-	vpaddd	K256+3*32(SRND), X0, XFER
+	leaq	K256(%rip), INP
+	vpaddd	3*32(INP, SRND), X0, XFER
 	vmovdqa XFER, 3*32+_XFER(%rsp, SRND)
 	FOUR_ROUNDS_AND_SCHED	_XFER + 3*32
@@ -614,11 +618,13 @@ loop1:
 loop2:
 	## Do last 16 rounds with no scheduling
-	vpaddd	K256+0*32(SRND), X0, XFER
+	leaq	K256(%rip), INP
+	vpaddd	0*32(INP, SRND), X0, XFER
 	vmovdqa XFER, 0*32+_XFER(%rsp, SRND)
 	DO_4ROUNDS	_XFER + 0*32
-	vpaddd	K256+1*32(SRND), X1, XFER
+	leaq	K256(%rip), INP
+	vpaddd	1*32(INP, SRND), X1, XFER
 	vmovdqa XFER, 1*32+_XFER(%rsp, SRND)
 	DO_4ROUNDS	_XFER + 1*32

 	add	$2*32, SRND

From patchwork Thu Dec 5 00:09:39 2019
X-Patchwork-Id: 11273779
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Peter Zijlstra, Masami Hiramatsu, Will Deacon,
    linux-kernel@vger.kernel.org
Subject: [PATCH v10 02/11] x86: Add macro to get symbol address for PIE support
Date: Wed, 4 Dec 2019 16:09:39 -0800
Message-Id: <20191205000957.112719-3-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

Add a new _ASM_MOVABS macro to fetch a symbol address. Replace the
"_ASM_MOV $<symbol>, %dst" constructs that are not compatible with PIE.

Signed-off-by: Thomas Garnier
---
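For context, _ASM_MOVABS follows the existing __ASM_SEL() convention in
asm.h: it selects movl on 32-bit builds and movabsq on 64-bit builds.
The point of movabsq is that it carries a full 64-bit immediate, which
a relocation can point anywhere in the address space, while a plain
movq of a symbol only takes a 32-bit sign-extended immediate. An
illustrative 64-bit expansion (editor's sketch, register chosen
arbitrarily; relocation names assume the usual kernel build):

	/* _ASM_MOVABS " $sym, %rax" expands to: */
	movabsq	$sym, %rax	/* 64-bit immediate, R_X86_64_64 reloc */

	/* and replaces the PIE-incompatible form: */
	movq	$sym, %rax	/* 32-bit sign-extended imm, R_X86_64_32S */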
 arch/x86/include/asm/asm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index cd339b88d5d4..644bdbf149ee 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -32,6 +32,7 @@
 #define _ASM_ALIGN	__ASM_SEL(.balign 4, .balign 8)

 #define _ASM_MOV	__ASM_SIZE(mov)
+#define _ASM_MOVABS	__ASM_SEL(movl, movabsq)
 #define _ASM_INC	__ASM_SIZE(inc)
 #define _ASM_DEC	__ASM_SIZE(dec)
 #define _ASM_ADD	__ASM_SIZE(add)

From patchwork Thu Dec 5 00:09:40 2019
X-Patchwork-Id: 11273785
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Greg Kroah-Hartman, Allison Randal, Alexios Zavras,
    Jiri Slaby, linux-kernel@vger.kernel.org
Subject: [PATCH v10 03/11] x86: relocate_kernel - Adapt assembly for PIE support
Date: Wed, 4 Dec 2019 16:09:40 -0800
Message-Id: <20191205000957.112719-4-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

Change the assembly code to use only absolute references of symbols for
the kernel to be PIE compatible.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Reviewed-by: Kees Cook
---
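This patch moves in the opposite direction from the rest of the series,
and the reason is worth noting: identity_mapped() executes from a page
that has been copied away from the kernel mapping, so a RIP-relative
reference would resolve relative to the copy rather than to the kernel
image it needs to return into. A 64-bit absolute immediate, fixed up by
relocation at boot, stays correct no matter where the code runs from.
A sketch of the distinction (illustrative only, not from the patch):

	/* RIP-relative: resolves relative to wherever this code runs,
	 * which is wrong once the code executes from a copied page */
	leaq	virtual_mapped(%rip), %rax

	/* Absolute: patched by boot-time relocation, valid from the
	 * copy as well */
	movabsq	$virtual_mapped, %rax
	pushq	%rax
	ret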
 arch/x86/kernel/relocate_kernel_64.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index ef3ba99068d3..c294339df5ef 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -206,7 +206,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
 	movq	%rax, %cr3
 	lea	PAGE_SIZE(%r8), %rsp
 	call	swap_pages
-	movq	$virtual_mapped, %rax
+	movabsq	$virtual_mapped, %rax
 	pushq	%rax
 	ret
 SYM_CODE_END(identity_mapped)

From patchwork Thu Dec 5 00:09:41 2019
X-Patchwork-Id: 11273791
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier,
    Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    "H. Peter Anvin", x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v10 04/11] x86/entry/64: Adapt assembly for PIE support
Date: Wed, 4 Dec 2019 16:09:41 -0800
Message-Id: <20191205000957.112719-5-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

Change the assembly code to use only relative references of symbols for
the kernel to be PIE compatible.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Reviewed-by: Kees Cook
---
 arch/x86/entry/entry_64.S | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 76942cbd95a1..f14363625f4b 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1329,7 +1329,8 @@ SYM_CODE_START_LOCAL(error_entry)
 	movl	%ecx, %eax			/* zero extend */
 	cmpq	%rax, RIP+8(%rsp)
 	je	.Lbstep_iret
-	cmpq	$.Lgs_change, RIP+8(%rsp)
+	leaq	.Lgs_change(%rip), %rcx
+	cmpq	%rcx, RIP+8(%rsp)
 	jne	.Lerror_entry_done_lfence

 	/*
@@ -1529,10 +1530,10 @@ SYM_CODE_START(nmi)
	 * resume the outer NMI.
	 */

-	movq	$repeat_nmi, %rdx
+	leaq	repeat_nmi(%rip), %rdx
 	cmpq	8(%rsp), %rdx
 	ja	1f
-	movq	$end_repeat_nmi, %rdx
+	leaq	end_repeat_nmi(%rip), %rdx
 	cmpq	8(%rsp), %rdx
 	ja	nested_nmi_out
 1:
@@ -1586,7 +1587,8 @@ nested_nmi:
 	pushq	%rdx
 	pushfq
 	pushq	$__KERNEL_CS
-	pushq	$repeat_nmi
+	leaq	repeat_nmi(%rip), %rdx
+	pushq	%rdx

 	/* Put stack back */
 	addq	$(6*8), %rsp
@@ -1625,7 +1627,11 @@ first_nmi:
 	addq	$8, (%rsp)	/* Fix up RSP */
 	pushfq			/* RFLAGS */
 	pushq	$__KERNEL_CS	/* CS */
-	pushq	$1f		/* RIP */
+	pushq	$0		/* Future return address */
+	pushq	%rdx		/* Save RAX */
+	leaq	1f(%rip), %rdx	/* RIP */
+	movq	%rdx, 8(%rsp)	/* Put 1f on return address */
+	popq	%rdx		/* Restore RAX */
 	iretq			/* continues at repeat_nmi below */
 	UNWIND_HINT_IRET_REGS
 1:
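A note on the NMI changes above: pushq accepts at most a 32-bit
sign-extended immediate, and such an immediate cannot carry a
relocation to an address outside the +/-2 GB range a PIE kernel may
occupy. The generic PIE-safe replacement, at the cost of one scratch
register, is a two-instruction sequence (illustrative sketch, not from
the patch):

	/* Before: 32-bit sign-extended immediate, PIE-incompatible */
	pushq	$repeat_nmi

	/* After: compute the address RIP-relatively, then push it */
	leaq	repeat_nmi(%rip), %rdx
	pushq	%rdx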
From patchwork Thu Dec 5 00:09:42 2019
X-Patchwork-Id: 11273793
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v10 05/11] x86: pm-trace - Adapt assembly for PIE support
Date: Wed, 4 Dec 2019 16:09:42 -0800
Message-Id: <20191205000957.112719-6-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

Change the assembly to use the new _ASM_MOVABS macro instead of
_ASM_MOV so that it is PIE compatible.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Reviewed-by: Kees Cook
---
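Concretely, only the mnemonic the macro emits on 64-bit changes here;
the .tracedata bookkeeping around it is untouched. Roughly, the emitted
instruction becomes (editor's sketch; the destination register is
whatever the "=r" constraint picks, %rdi chosen only for illustration):

	/* Before, 64-bit expansion of _ASM_MOV: */
	movq	$1f, %rdi	/* imm32 cannot relocate under PIE */

	/* After, 64-bit expansion of _ASM_MOVABS: */
	movabsq	$1f, %rdi	/* imm64 relocation works anywhere */
1: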
 arch/x86/include/asm/pm-trace.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pm-trace.h b/arch/x86/include/asm/pm-trace.h
index bfa32aa428e5..972070806ce9 100644
--- a/arch/x86/include/asm/pm-trace.h
+++ b/arch/x86/include/asm/pm-trace.h
@@ -8,7 +8,7 @@
 do {						\
	if (pm_trace_enabled) {			\
		const void *tracedata;		\
-		asm volatile(_ASM_MOV " $1f,%0\n"	\
+		asm volatile(_ASM_MOVABS " $1f,%0\n"	\
			     ".section .tracedata,\"a\"\n"	\
			     "1:\t.word %c1\n\t"	\
			     _ASM_PTR " %c2\n"		\

From patchwork Thu Dec 5 00:09:43 2019
X-Patchwork-Id: 11273795
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    x86@kernel.org, Andy Lutomirski, "Peter Zijlstra (Intel)",
    Len Brown, linux-kernel@vger.kernel.org
Subject: [PATCH v10 06/11] x86/CPU: Adapt assembly for PIE support
Date: Wed, 4 Dec 2019 16:09:43 -0800
Message-Id: <20191205000957.112719-7-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

Change the assembly code to use only relative references of symbols for
the kernel to be PIE compatible.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
---
 arch/x86/include/asm/processor.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 0340aad3f2fc..77fa291a60bb 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -742,11 +742,13 @@ static inline void sync_core(void)
 		"pushfq\n\t"
 		"mov %%cs, %0\n\t"
 		"pushq %q0\n\t"
-		"pushq $1f\n\t"
+		"leaq 1f(%%rip), %q0\n\t"
+		"pushq %q0\n\t"
 		"iretq\n\t"
 		UNWIND_HINT_RESTORE
 		"1:"
-		: "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory");
+		: "=&r" (tmp), ASM_CALL_CONSTRAINT
+		: : "cc", "memory");
 #endif
 }
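The sync_core() rewrite above keeps the iretq trick intact: the inline
asm builds the IRET frame (SS, RSP, RFLAGS, CS, RIP) by hand so that
iretq serializes the pipeline and falls through to the next
instruction; only the way the RIP slot is produced changes. A condensed
sketch of the tail of that sequence (editor's illustration, using the
already-clobbered "tmp" output operand as scratch):

	/* Before: RIP pushed as an absolute immediate */
	pushq	$1f

	/* After: RIP computed relative to the current instruction */
	leaq	1f(%rip), %rax	/* %rax stands in for the tmp operand */
	pushq	%rax
	iretq
1:	/* execution resumes here, fully serialized */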
lMMk2nvzhXDFvXmX4097uyh+VVnTMTO4lv84bvCAZf5UnzNPoZ8sDmWAzyyYtC6zw6jk NLmjOcAb7hScpliL7Jtumky/6dy2lOa8+66q2FvS2o7GMgtQY/jn4wpBsAKOIIPTWX4q aqaRNKoTObyRCxzgINbRO107NfnbYcnAcuVyy7RiavL2gPjot3FQGxArnYOj0LCBTkji MN0db/+E8U4WsReTiPTeqtmF4VCLoDFbhvieFQIuYGbTx738RZaok/260x9et/WRns99 w70A== X-Gm-Message-State: APjAAAV4y1TLJEOdM9A3frNbohg+OGPCxa5FQYdoBELch+cngu3Ij+Fk IUku21NJLqdL7Ln9tp0DsnvDNB1ixbw= X-Google-Smtp-Source: APXvYqzs7mYKYZ8RNkTKFC9raPmoZRYlLWxa85CESf7aVhfXMIKbbBi+m+lve2GIK5jr+LhkUWziBQ== X-Received: by 2002:a63:3484:: with SMTP id b126mr6187685pga.17.1575504620811; Wed, 04 Dec 2019 16:10:20 -0800 (PST) From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Pavel Machek , "Rafael J . Wysocki" , "Rafael J. Wysocki" , Len Brown , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v10 07/11] x86/acpi: Adapt assembly for PIE support Date: Wed, 4 Dec 2019 16:09:44 -0800 Message-Id: <20191205000957.112719-8-thgarnie@chromium.org> X-Mailer: git-send-email 2.24.0.393.g34dc348eaf-goog In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org> References: <20191205000957.112719-1-thgarnie@chromium.org> MIME-Version: 1.0 Change the assembly code to use only relative references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Acked-by: Pavel Machek Acked-by: Rafael J. Wysocki Reviewed-by: Kees Cook --- arch/x86/kernel/acpi/wakeup_64.S | 31 ++++++++++++++++--------------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S index c8daa92f38dc..8e221285d9f1 100644 --- a/arch/x86/kernel/acpi/wakeup_64.S +++ b/arch/x86/kernel/acpi/wakeup_64.S @@ -15,7 +15,7 @@ * Hooray, we are in Long 64-bit mode (but still running in low memory) */ SYM_FUNC_START(wakeup_long64) - movq saved_magic, %rax + movq saved_magic(%rip), %rax movq $0x123456789abcdef0, %rdx cmpq %rdx, %rax je 2f @@ -31,14 +31,14 @@ SYM_FUNC_START(wakeup_long64) movw %ax, %es movw %ax, %fs movw %ax, %gs - movq saved_rsp, %rsp + movq saved_rsp(%rip), %rsp - movq saved_rbx, %rbx - movq saved_rdi, %rdi - movq saved_rsi, %rsi - movq saved_rbp, %rbp + movq saved_rbx(%rip), %rbx + movq saved_rdi(%rip), %rdi + movq saved_rsi(%rip), %rsi + movq saved_rbp(%rip), %rbp - movq saved_rip, %rax + movq saved_rip(%rip), %rax jmp *%rax SYM_FUNC_END(wakeup_long64) @@ -48,7 +48,7 @@ SYM_FUNC_START(do_suspend_lowlevel) xorl %eax, %eax call save_processor_state - movq $saved_context, %rax + leaq saved_context(%rip), %rax movq %rsp, pt_regs_sp(%rax) movq %rbp, pt_regs_bp(%rax) movq %rsi, pt_regs_si(%rax) @@ -67,13 +67,14 @@ SYM_FUNC_START(do_suspend_lowlevel) pushfq popq pt_regs_flags(%rax) - movq $.Lresume_point, saved_rip(%rip) + leaq .Lresume_point(%rip), %rax + movq %rax, saved_rip(%rip) - movq %rsp, saved_rsp - movq %rbp, saved_rbp - movq %rbx, saved_rbx - movq %rdi, saved_rdi - movq %rsi, saved_rsi + movq %rsp, saved_rsp(%rip) + movq %rbp, saved_rbp(%rip) + movq %rbx, saved_rbx(%rip) + movq %rdi, saved_rdi(%rip) + movq %rsi, saved_rsi(%rip) addq $8, %rsp movl $3, %edi @@ -85,7 +86,7 @@ SYM_FUNC_START(do_suspend_lowlevel) .align 4 .Lresume_point: /* We don't restore %rax, it must be 0 anyway */ - movq $saved_context, %rax + leaq saved_context(%rip), %rax movq 
From patchwork Thu Dec 5 00:09:45 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11273799
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin, x86@kernel.org, Jiri Slaby, Juergen Gross, Peter Zijlstra, linux-kernel@vger.kernel.org
Peter Anvin" , x86@kernel.org, Jiri Slaby , Juergen Gross , Peter Zijlstra , linux-kernel@vger.kernel.org Subject: [PATCH v10 08/11] x86/boot/64: Adapt assembly for PIE support Date: Wed, 4 Dec 2019 16:09:45 -0800 Message-Id: <20191205000957.112719-9-thgarnie@chromium.org> X-Mailer: git-send-email 2.24.0.393.g34dc348eaf-goog In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org> References: <20191205000957.112719-1-thgarnie@chromium.org> MIME-Version: 1.0 Change the assembly code to use absolute reference for transition between address spaces and relative references when referencing global variables in the same address space. Ensure the kernel built with PIE references the correct addresses based on context. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Reviewed-by: Kees Cook --- arch/x86/kernel/head_64.S | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index 4bbc770af632..40a467f8e116 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -87,7 +87,8 @@ SYM_CODE_START_NOALIGN(startup_64) popq %rsi /* Form the CR3 value being sure to include the CR3 modifier */ - addq $(early_top_pgt - __START_KERNEL_map), %rax + movabs $(early_top_pgt - __START_KERNEL_map), %rcx + addq %rcx, %rax jmp 1f SYM_CODE_END(startup_64) @@ -119,7 +120,8 @@ SYM_CODE_START(secondary_startup_64) popq %rsi /* Form the CR3 value being sure to include the CR3 modifier */ - addq $(init_top_pgt - __START_KERNEL_map), %rax + movabs $(init_top_pgt - __START_KERNEL_map), %rcx + addq %rcx, %rax 1: /* Enable PAE mode, PGE and LA57 */ @@ -137,7 +139,7 @@ SYM_CODE_START(secondary_startup_64) movq %rax, %cr3 /* Ensure I am executing from virtual addresses */ - movq $1f, %rax + movabs $1f, %rax ANNOTATE_RETPOLINE_SAFE jmp *%rax 1: @@ -234,11 +236,12 @@ SYM_CODE_START(secondary_startup_64) * REX.W + FF /5 JMP m16:64 Jump far, absolute indirect, * address given in m16:64. 
From patchwork Thu Dec 5 00:09:46 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11273801
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier, Pavel Machek, Rafael J. Wysocki, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin, x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Peter Anvin" , x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v10 09/11] x86/power/64: Adapt assembly for PIE support Date: Wed, 4 Dec 2019 16:09:46 -0800 Message-Id: <20191205000957.112719-10-thgarnie@chromium.org> X-Mailer: git-send-email 2.24.0.393.g34dc348eaf-goog In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org> References: <20191205000957.112719-1-thgarnie@chromium.org> MIME-Version: 1.0 Change the assembly code to use only relative references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Acked-by: Pavel Machek Acked-by: Rafael J. Wysocki Reviewed-by: Kees Cook --- arch/x86/power/hibernate_asm_64.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S index 7918b8415f13..977b8ae85045 100644 --- a/arch/x86/power/hibernate_asm_64.S +++ b/arch/x86/power/hibernate_asm_64.S @@ -23,7 +23,7 @@ #include SYM_FUNC_START(swsusp_arch_suspend) - movq $saved_context, %rax + leaq saved_context(%rip), %rax movq %rsp, pt_regs_sp(%rax) movq %rbp, pt_regs_bp(%rax) movq %rsi, pt_regs_si(%rax) @@ -116,7 +116,7 @@ SYM_FUNC_START(restore_registers) movq %rax, %cr4; # turn PGE back on /* We don't restore %rax, it must be 0 anyway */ - movq $saved_context, %rax + leaq saved_context(%rip), %rax movq pt_regs_sp(%rax), %rsp movq pt_regs_bp(%rax), %rbp movq pt_regs_si(%rax), %rsi From patchwork Thu Dec 5 00:09:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11273803 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0A3FB6C1 for ; Thu, 5 Dec 2019 00:12:01 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 665B8206DF for ; Thu, 5 Dec 2019 00:12:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="TYU9bizn" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 665B8206DF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=chromium.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17465-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 29819 invoked by uid 550); 5 Dec 2019 00:10:38 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 29770 invoked from network); 5 Dec 2019 00:10:36 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Wom0YuEw+sxPN46lShSOJcleDb+6/uebIAUT9UJe6O4=; b=TYU9biznX4Fr+oXplt8t6lZVC875R5GdQpDtGyoG7jHn1dhskEBnd09tvZhJGAFU/g LrL5kio9op22EkuMbWapAdZnmHsJNSN3VvpSYWblpjxw1xSpBmEfPLeFLApm+7QNnUe1 b/lOwzkq65DhE5MjKUY1R1NraoVNUwtPyd4uk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
From patchwork Thu Dec 5 00:09:47 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11273803
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier, Juergen Gross, Thomas Hellstrom, "VMware, Inc.", Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin, x86@kernel.org, virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH v10 10/11] x86/paravirt: Adapt assembly for PIE support
Date: Wed, 4 Dec 2019 16:09:47 -0800
Message-Id: <20191205000957.112719-11-thgarnie@chromium.org>
In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org>
References: <20191205000957.112719-1-thgarnie@chromium.org>

If PIE is enabled, switch the paravirt assembly constraints to be
compatible with it. The %c/"i" pair generates smaller code, so it is
kept as the default.

Position Independent Executable (PIE) support will allow the KASLR
randomization range to be extended below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Acked-by: Juergen Gross
---
 arch/x86/include/asm/paravirt_types.h | 32 +++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 84812964d3dd..82f7ca22e0ae 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -336,9 +336,32 @@ extern struct paravirt_patch_template pv_ops;
 #define PARAVIRT_PATCH(x) \
 	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))

+#ifdef CONFIG_X86_PIE
+#define paravirt_opptr_call "a"
+#define paravirt_opptr_type "p"
+
+/*
+ * Alternative patching requires a maximum of 7 bytes but the relative call is
+ * only 6 bytes. If PIE is enabled, add an additional nop to the call
+ * instruction to ensure patching is possible.
+ *
+ * Without PIE, the call is reg/mem64:
+ * ff 14 25 68 37 02 82	callq  *0xffffffff82023768
+ *
+ * With PIE, it is relative to %rip and takes one byte less:
+ * ff 15 fa d9 ff 00	callq  *0xffd9fa(%rip) #
+ *
+ */
+#define PARAVIRT_CALL_POST "nop;"
+#else
+#define paravirt_opptr_call "c"
+#define paravirt_opptr_type "i"
+#define PARAVIRT_CALL_POST ""
+#endif
+
 #define paravirt_type(op) \
 	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)), \
-	[paravirt_opptr] "i" (&(pv_ops.op))
+	[paravirt_opptr] paravirt_opptr_type (&(pv_ops.op))
 #define paravirt_clobber(clobber) \
 	[paravirt_clobber] "i" (clobber)
@@ -377,9 +400,10 @@ int paravirt_disable_iospace(void);
  * offset into the paravirt_patch_template structure, and can therefore be
  * freely converted back into a structure offset.
  */
-#define PARAVIRT_CALL					\
-	ANNOTATE_RETPOLINE_SAFE				\
-	"call *%c[paravirt_opptr];"
+#define PARAVIRT_CALL					\
+	ANNOTATE_RETPOLINE_SAFE				\
+	"call *%" paravirt_opptr_call "[paravirt_opptr];" \
+	PARAVIRT_CALL_POST

 /*
  * These macros are intended to wrap calls through one of the paravirt
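The constraint/modifier pairs chosen above ("i" with %c, "p" with %a) can be
exercised in userspace. The sketch below is only an approximation of
PARAVIRT_CALL: there is no alternatives patching, the clobber list stands in
for the kernel's CLBR handling, and it assumes main() keeps nothing in the
red zone (the kernel itself builds with -mno-red-zone). File name and flags
are illustrative:

/* pv_call_sketch.c - how %c and %a print a function-pointer slot.
 * Build: gcc -O2 -fno-pie -no-pie pv_call_sketch.c        ("i" / %c)
 *        gcc -O2 -fpie -pie -DPIE pv_call_sketch.c        ("p" / %a)
 */
#include <stdio.h>

static int called;
static void pv_target(void) { called = 1; }

void (*pv_slot)(void) = pv_target;

int main(void)
{
#ifdef PIE
	/* "p" + %a prints the slot as a memory operand; under -fpie it
	 * comes out RIP-relative: call *pv_slot(%rip) */
	asm volatile ("call *%a[slot]"
		      : : [slot] "p" (&pv_slot)
		      : "rax", "rcx", "rdx", "rsi", "rdi",
			"r8", "r9", "r10", "r11", "memory", "cc");
#else
	/* "i" + %c prints the slot's address as a bare constant:
	 * call *pv_slot (the ff 14 25 ... form quoted in the comment) */
	asm volatile ("call *%c[slot]"
		      : : [slot] "i" (&pv_slot)
		      : "rax", "rcx", "rdx", "rsi", "rdi",
			"r8", "r9", "r10", "r11", "memory", "cc");
#endif
	printf("called = %d\n", called);
	return 0;
}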
From patchwork Thu Dec 5 00:09:48 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11273805
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin, x86@kernel.org, Peter Zijlstra, Rasmus Villemoes, linux-kernel@vger.kernel.org
Peter Anvin" , x86@kernel.org, Peter Zijlstra , Rasmus Villemoes , linux-kernel@vger.kernel.org Subject: [PATCH v10 11/11] x86/alternatives: Adapt assembly for PIE support Date: Wed, 4 Dec 2019 16:09:48 -0800 Message-Id: <20191205000957.112719-12-thgarnie@chromium.org> X-Mailer: git-send-email 2.24.0.393.g34dc348eaf-goog In-Reply-To: <20191205000957.112719-1-thgarnie@chromium.org> References: <20191205000957.112719-1-thgarnie@chromium.org> MIME-Version: 1.0 Change the assembly options to work with pointers instead of integers. The generated code is the same PIE just ensures input is a pointer. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier --- arch/x86/include/asm/alternative.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index 13adca37c99a..43a148042656 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -243,7 +243,7 @@ static inline int alternatives_text_reserved(void *start, void *end) /* Like alternative_io, but for replacing a direct call with another one. */ #define alternative_call(oldfunc, newfunc, feature, output, input...) \ asm_inline volatile (ALTERNATIVE("call %P[old]", "call %P[new]", feature) \ - : output : [old] "i" (oldfunc), [new] "i" (newfunc), ## input) + : output : [old] "X" (oldfunc), [new] "X" (newfunc), ## input) /* * Like alternative_call, but there are two features and respective functions. @@ -256,8 +256,8 @@ static inline int alternatives_text_reserved(void *start, void *end) asm_inline volatile (ALTERNATIVE_2("call %P[old]", "call %P[new1]", feature1,\ "call %P[new2]", feature2) \ : output, ASM_CALL_CONSTRAINT \ - : [old] "i" (oldfunc), [new1] "i" (newfunc1), \ - [new2] "i" (newfunc2), ## input) + : [old] "X" (oldfunc), [new1] "X" (newfunc1), \ + [new2] "X" (newfunc2), ## input) /* * use this macro(s) if you need more than one output parameter