From patchwork Tue Jul 30 19:12:45 2019 X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066619 From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Herbert Xu , "David S. Miller" , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H.
Peter Anvin" , x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v9 01/11] x86/crypto: Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:45 -0700 Message-Id: <20190730191303.206365-2-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change the assembly code to use only relative references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier --- arch/x86/crypto/aegis128-aesni-asm.S | 6 +- arch/x86/crypto/aesni-intel_asm.S | 8 +- arch/x86/crypto/aesni-intel_avx-x86_64.S | 3 +- arch/x86/crypto/camellia-aesni-avx-asm_64.S | 42 ++++----- arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 44 ++++----- arch/x86/crypto/camellia-x86_64-asm_64.S | 8 +- arch/x86/crypto/cast5-avx-x86_64-asm_64.S | 50 +++++----- arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 44 +++++---- arch/x86/crypto/des3_ede-asm_64.S | 96 +++++++++++++------- arch/x86/crypto/ghash-clmulni-intel_asm.S | 4 +- arch/x86/crypto/glue_helper-asm-avx.S | 4 +- arch/x86/crypto/glue_helper-asm-avx2.S | 6 +- arch/x86/crypto/sha256-avx2-asm.S | 18 ++-- 13 files changed, 191 insertions(+), 142 deletions(-) diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S index 4434607e366d..00aff3321c16 100644 --- a/arch/x86/crypto/aegis128-aesni-asm.S +++ b/arch/x86/crypto/aegis128-aesni-asm.S @@ -200,8 +200,8 @@ ENTRY(crypto_aegis128_aesni_init) movdqa KEY, STATE4 /* load the constants: */ - movdqa .Laegis128_const_0, STATE2 - movdqa .Laegis128_const_1, STATE1 + movdqa .Laegis128_const_0(%rip), STATE2 + movdqa .Laegis128_const_1(%rip), STATE1 pxor STATE2, STATE3 pxor STATE1, STATE4 @@ -681,7 +681,7 @@ ENTRY(crypto_aegis128_aesni_dec_tail) punpcklbw T0, T0 punpcklbw T0, T0 punpcklbw T0, T0 - movdqa .Laegis128_counter, T1 + movdqa .Laegis128_counter(%rip), T1 pcmpgtb T1, T0 pand T0, MSG diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index e40bdf024ba7..36e2cff7fb19 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ b/arch/x86/crypto/aesni-intel_asm.S @@ -2606,7 +2606,7 @@ ENDPROC(aesni_cbc_dec) */ .align 4 _aesni_inc_init: - movaps .Lbswap_mask, BSWAP_MASK + movaps .Lbswap_mask(%rip), BSWAP_MASK movaps IV, CTR PSHUFB_XMM BSWAP_MASK CTR mov $1, TCTR_LOW @@ -2734,12 +2734,12 @@ ENTRY(aesni_xts_crypt8) cmpb $0, %cl movl $0, %ecx movl $240, %r10d - leaq _aesni_enc4, %r11 - leaq _aesni_dec4, %rax + leaq _aesni_enc4(%rip), %r11 + leaq _aesni_dec4(%rip), %rax cmovel %r10d, %ecx cmoveq %rax, %r11 - movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK + movdqa .Lgf128mul_x_ble_mask(%rip), GF128MUL_MASK movups (IVP), IV mov 480(KEYP), KLEN diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S index 91c039ab5699..210ac0e61eaf 100644 --- a/arch/x86/crypto/aesni-intel_avx-x86_64.S +++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S @@ -660,7 +660,8 @@ _get_AAD_rest0\@: vpshufb and an array of shuffle masks */ movq %r12, %r11 salq $4, %r11 - vmovdqu aad_shift_arr(%r11), \T1 + leaq aad_shift_arr(%rip), %rax + vmovdqu (%rax,%r11,), \T1 vpshufb \T1, \T7, \T7 _get_AAD_rest_final\@: vpshufb SHUF_MASK(%rip), \T7, \T7 diff --git a/arch/x86/crypto/camellia-aesni-avx-asm_64.S 
b/arch/x86/crypto/camellia-aesni-avx-asm_64.S index a14af6eb09cb..f94ec9a5552b 100644 --- a/arch/x86/crypto/camellia-aesni-avx-asm_64.S +++ b/arch/x86/crypto/camellia-aesni-avx-asm_64.S @@ -53,10 +53,10 @@ /* \ * S-function with AES subbytes \ */ \ - vmovdqa .Linv_shift_row, t4; \ - vbroadcastss .L0f0f0f0f, t7; \ - vmovdqa .Lpre_tf_lo_s1, t0; \ - vmovdqa .Lpre_tf_hi_s1, t1; \ + vmovdqa .Linv_shift_row(%rip), t4; \ + vbroadcastss .L0f0f0f0f(%rip), t7; \ + vmovdqa .Lpre_tf_lo_s1(%rip), t0; \ + vmovdqa .Lpre_tf_hi_s1(%rip), t1; \ \ /* AES inverse shift rows */ \ vpshufb t4, x0, x0; \ @@ -69,8 +69,8 @@ vpshufb t4, x6, x6; \ \ /* prefilter sboxes 1, 2 and 3 */ \ - vmovdqa .Lpre_tf_lo_s4, t2; \ - vmovdqa .Lpre_tf_hi_s4, t3; \ + vmovdqa .Lpre_tf_lo_s4(%rip), t2; \ + vmovdqa .Lpre_tf_hi_s4(%rip), t3; \ filter_8bit(x0, t0, t1, t7, t6); \ filter_8bit(x7, t0, t1, t7, t6); \ filter_8bit(x1, t0, t1, t7, t6); \ @@ -84,8 +84,8 @@ filter_8bit(x6, t2, t3, t7, t6); \ \ /* AES subbytes + AES shift rows */ \ - vmovdqa .Lpost_tf_lo_s1, t0; \ - vmovdqa .Lpost_tf_hi_s1, t1; \ + vmovdqa .Lpost_tf_lo_s1(%rip), t0; \ + vmovdqa .Lpost_tf_hi_s1(%rip), t1; \ vaesenclast t4, x0, x0; \ vaesenclast t4, x7, x7; \ vaesenclast t4, x1, x1; \ @@ -96,16 +96,16 @@ vaesenclast t4, x6, x6; \ \ /* postfilter sboxes 1 and 4 */ \ - vmovdqa .Lpost_tf_lo_s3, t2; \ - vmovdqa .Lpost_tf_hi_s3, t3; \ + vmovdqa .Lpost_tf_lo_s3(%rip), t2; \ + vmovdqa .Lpost_tf_hi_s3(%rip), t3; \ filter_8bit(x0, t0, t1, t7, t6); \ filter_8bit(x7, t0, t1, t7, t6); \ filter_8bit(x3, t0, t1, t7, t6); \ filter_8bit(x6, t0, t1, t7, t6); \ \ /* postfilter sbox 3 */ \ - vmovdqa .Lpost_tf_lo_s2, t4; \ - vmovdqa .Lpost_tf_hi_s2, t5; \ + vmovdqa .Lpost_tf_lo_s2(%rip), t4; \ + vmovdqa .Lpost_tf_hi_s2(%rip), t5; \ filter_8bit(x2, t2, t3, t7, t6); \ filter_8bit(x5, t2, t3, t7, t6); \ \ @@ -444,7 +444,7 @@ ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) transpose_4x4(c0, c1, c2, c3, a0, a1); \ transpose_4x4(d0, d1, d2, d3, a0, a1); \ \ - vmovdqu .Lshufb_16x16b, a0; \ + vmovdqu .Lshufb_16x16b(%rip), a0; \ vmovdqu st1, a1; \ vpshufb a0, a2, a2; \ vpshufb a0, a3, a3; \ @@ -483,7 +483,7 @@ ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) #define inpack16_pre(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ y6, y7, rio, key) \ vmovq key, x0; \ - vpshufb .Lpack_bswap, x0, x0; \ + vpshufb .Lpack_bswap(%rip), x0, x0; \ \ vpxor 0 * 16(rio), x0, y7; \ vpxor 1 * 16(rio), x0, y6; \ @@ -534,7 +534,7 @@ ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) vmovdqu x0, stack_tmp0; \ \ vmovq key, x0; \ - vpshufb .Lpack_bswap, x0, x0; \ + vpshufb .Lpack_bswap(%rip), x0, x0; \ \ vpxor x0, y7, y7; \ vpxor x0, y6, y6; \ @@ -1017,7 +1017,7 @@ ENTRY(camellia_ctr_16way) subq $(16 * 16), %rsp; movq %rsp, %rax; - vmovdqa .Lbswap128_mask, %xmm14; + vmovdqa .Lbswap128_mask(%rip), %xmm14; /* load IV and byteswap */ vmovdqu (%rcx), %xmm0; @@ -1066,7 +1066,7 @@ ENTRY(camellia_ctr_16way) /* inpack16_pre: */ vmovq (key_table)(CTX), %xmm15; - vpshufb .Lpack_bswap, %xmm15, %xmm15; + vpshufb .Lpack_bswap(%rip), %xmm15, %xmm15; vpxor %xmm0, %xmm15, %xmm0; vpxor %xmm1, %xmm15, %xmm1; vpxor %xmm2, %xmm15, %xmm2; @@ -1134,7 +1134,7 @@ camellia_xts_crypt_16way: subq $(16 * 16), %rsp; movq %rsp, %rax; - vmovdqa .Lxts_gf128mul_and_shl1_mask, %xmm14; + vmovdqa .Lxts_gf128mul_and_shl1_mask(%rip), %xmm14; /* load IV */ vmovdqu (%rcx), %xmm0; @@ -1210,7 +1210,7 @@ camellia_xts_crypt_16way: /* inpack16_pre: */ vmovq (key_table)(CTX, %r8, 8), %xmm15; - 
vpshufb .Lpack_bswap, %xmm15, %xmm15; + vpshufb .Lpack_bswap(%rip), %xmm15, %xmm15; vpxor 0 * 16(%rax), %xmm15, %xmm0; vpxor %xmm1, %xmm15, %xmm1; vpxor %xmm2, %xmm15, %xmm2; @@ -1265,7 +1265,7 @@ ENTRY(camellia_xts_enc_16way) */ xorl %r8d, %r8d; /* input whitening key, 0 for enc */ - leaq __camellia_enc_blk16, %r9; + leaq __camellia_enc_blk16(%rip), %r9; jmp camellia_xts_crypt_16way; ENDPROC(camellia_xts_enc_16way) @@ -1283,7 +1283,7 @@ ENTRY(camellia_xts_dec_16way) movl $24, %eax; cmovel %eax, %r8d; /* input whitening key, last for dec */ - leaq __camellia_dec_blk16, %r9; + leaq __camellia_dec_blk16(%rip), %r9; jmp camellia_xts_crypt_16way; ENDPROC(camellia_xts_dec_16way) diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S index 4be4c7c3ba27..545ff16a196b 100644 --- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S +++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S @@ -65,12 +65,12 @@ /* \ * S-function with AES subbytes \ */ \ - vbroadcasti128 .Linv_shift_row, t4; \ - vpbroadcastd .L0f0f0f0f, t7; \ - vbroadcasti128 .Lpre_tf_lo_s1, t5; \ - vbroadcasti128 .Lpre_tf_hi_s1, t6; \ - vbroadcasti128 .Lpre_tf_lo_s4, t2; \ - vbroadcasti128 .Lpre_tf_hi_s4, t3; \ + vbroadcasti128 .Linv_shift_row(%rip), t4; \ + vpbroadcastd .L0f0f0f0f(%rip), t7; \ + vbroadcasti128 .Lpre_tf_lo_s1(%rip), t5; \ + vbroadcasti128 .Lpre_tf_hi_s1(%rip), t6; \ + vbroadcasti128 .Lpre_tf_lo_s4(%rip), t2; \ + vbroadcasti128 .Lpre_tf_hi_s4(%rip), t3; \ \ /* AES inverse shift rows */ \ vpshufb t4, x0, x0; \ @@ -116,8 +116,8 @@ vinserti128 $1, t2##_x, x6, x6; \ vextracti128 $1, x1, t3##_x; \ vextracti128 $1, x4, t2##_x; \ - vbroadcasti128 .Lpost_tf_lo_s1, t0; \ - vbroadcasti128 .Lpost_tf_hi_s1, t1; \ + vbroadcasti128 .Lpost_tf_lo_s1(%rip), t0; \ + vbroadcasti128 .Lpost_tf_hi_s1(%rip), t1; \ vaesenclast t4##_x, x2##_x, x2##_x; \ vaesenclast t4##_x, t6##_x, t6##_x; \ vinserti128 $1, t6##_x, x2, x2; \ @@ -132,16 +132,16 @@ vinserti128 $1, t2##_x, x4, x4; \ \ /* postfilter sboxes 1 and 4 */ \ - vbroadcasti128 .Lpost_tf_lo_s3, t2; \ - vbroadcasti128 .Lpost_tf_hi_s3, t3; \ + vbroadcasti128 .Lpost_tf_lo_s3(%rip), t2; \ + vbroadcasti128 .Lpost_tf_hi_s3(%rip), t3; \ filter_8bit(x0, t0, t1, t7, t6); \ filter_8bit(x7, t0, t1, t7, t6); \ filter_8bit(x3, t0, t1, t7, t6); \ filter_8bit(x6, t0, t1, t7, t6); \ \ /* postfilter sbox 3 */ \ - vbroadcasti128 .Lpost_tf_lo_s2, t4; \ - vbroadcasti128 .Lpost_tf_hi_s2, t5; \ + vbroadcasti128 .Lpost_tf_lo_s2(%rip), t4; \ + vbroadcasti128 .Lpost_tf_hi_s2(%rip), t5; \ filter_8bit(x2, t2, t3, t7, t6); \ filter_8bit(x5, t2, t3, t7, t6); \ \ @@ -478,7 +478,7 @@ ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) transpose_4x4(c0, c1, c2, c3, a0, a1); \ transpose_4x4(d0, d1, d2, d3, a0, a1); \ \ - vbroadcasti128 .Lshufb_16x16b, a0; \ + vbroadcasti128 .Lshufb_16x16b(%rip), a0; \ vmovdqu st1, a1; \ vpshufb a0, a2, a2; \ vpshufb a0, a3, a3; \ @@ -517,7 +517,7 @@ ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) #define inpack32_pre(x0, x1, x2, x3, x4, x5, x6, x7, y0, y1, y2, y3, y4, y5, \ y6, y7, rio, key) \ vpbroadcastq key, x0; \ - vpshufb .Lpack_bswap, x0, x0; \ + vpshufb .Lpack_bswap(%rip), x0, x0; \ \ vpxor 0 * 32(rio), x0, y7; \ vpxor 1 * 32(rio), x0, y6; \ @@ -568,7 +568,7 @@ ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) vmovdqu x0, stack_tmp0; \ \ vpbroadcastq key, x0; \ - vpshufb .Lpack_bswap, x0, x0; \ + vpshufb .Lpack_bswap(%rip), x0, x0; \ \ vpxor x0, y7, y7; \ vpxor x0, y6, y6; \ @@ -1108,7 
+1108,7 @@ ENTRY(camellia_ctr_32way) vmovdqu (%rcx), %xmm0; vmovdqa %xmm0, %xmm1; inc_le128(%xmm0, %xmm15, %xmm14); - vbroadcasti128 .Lbswap128_mask, %ymm14; + vbroadcasti128 .Lbswap128_mask(%rip), %ymm14; vinserti128 $1, %xmm0, %ymm1, %ymm0; vpshufb %ymm14, %ymm0, %ymm13; vmovdqu %ymm13, 15 * 32(%rax); @@ -1154,7 +1154,7 @@ ENTRY(camellia_ctr_32way) /* inpack32_pre: */ vpbroadcastq (key_table)(CTX), %ymm15; - vpshufb .Lpack_bswap, %ymm15, %ymm15; + vpshufb .Lpack_bswap(%rip), %ymm15, %ymm15; vpxor %ymm0, %ymm15, %ymm0; vpxor %ymm1, %ymm15, %ymm1; vpxor %ymm2, %ymm15, %ymm2; @@ -1238,13 +1238,13 @@ camellia_xts_crypt_32way: subq $(16 * 32), %rsp; movq %rsp, %rax; - vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_0, %ymm12; + vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_0(%rip), %ymm12; /* load IV and construct second IV */ vmovdqu (%rcx), %xmm0; vmovdqa %xmm0, %xmm15; gf128mul_x_ble(%xmm0, %xmm12, %xmm13); - vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_1, %ymm13; + vbroadcasti128 .Lxts_gf128mul_and_shl1_mask_1(%rip), %ymm13; vinserti128 $1, %xmm0, %ymm15, %ymm0; vpxor 0 * 32(%rdx), %ymm0, %ymm15; vmovdqu %ymm15, 15 * 32(%rax); @@ -1321,7 +1321,7 @@ camellia_xts_crypt_32way: /* inpack32_pre: */ vpbroadcastq (key_table)(CTX, %r8, 8), %ymm15; - vpshufb .Lpack_bswap, %ymm15, %ymm15; + vpshufb .Lpack_bswap(%rip), %ymm15, %ymm15; vpxor 0 * 32(%rax), %ymm15, %ymm0; vpxor %ymm1, %ymm15, %ymm1; vpxor %ymm2, %ymm15, %ymm2; @@ -1379,7 +1379,7 @@ ENTRY(camellia_xts_enc_32way) xorl %r8d, %r8d; /* input whitening key, 0 for enc */ - leaq __camellia_enc_blk32, %r9; + leaq __camellia_enc_blk32(%rip), %r9; jmp camellia_xts_crypt_32way; ENDPROC(camellia_xts_enc_32way) @@ -1397,7 +1397,7 @@ ENTRY(camellia_xts_dec_32way) movl $24, %eax; cmovel %eax, %r8d; /* input whitening key, last for dec */ - leaq __camellia_dec_blk32, %r9; + leaq __camellia_dec_blk32(%rip), %r9; jmp camellia_xts_crypt_32way; ENDPROC(camellia_xts_dec_32way) diff --git a/arch/x86/crypto/camellia-x86_64-asm_64.S b/arch/x86/crypto/camellia-x86_64-asm_64.S index 23528bc18fc6..021b0f0090f4 100644 --- a/arch/x86/crypto/camellia-x86_64-asm_64.S +++ b/arch/x86/crypto/camellia-x86_64-asm_64.S @@ -77,11 +77,13 @@ #define RXORbl %r9b #define xor2ror16(T0, T1, tmp1, tmp2, ab, dst) \ + leaq T0(%rip), tmp1; \ movzbl ab ## bl, tmp2 ## d; \ + xorq (tmp1, tmp2, 8), dst; \ + leaq T1(%rip), tmp2; \ movzbl ab ## bh, tmp1 ## d; \ - rorq $16, ab; \ - xorq T0(, tmp2, 8), dst; \ - xorq T1(, tmp1, 8), dst; + xorq (tmp2, tmp1, 8), dst; \ + rorq $16, ab; /********************************************************************** 1-way camellia diff --git a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S index dc55c3332fcc..213b5d8a9d08 100644 --- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S +++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S @@ -83,16 +83,20 @@ #define lookup_32bit(src, dst, op1, op2, op3, interleave_op, il_reg) \ - movzbl src ## bh, RID1d; \ - movzbl src ## bl, RID2d; \ - shrq $16, src; \ - movl s1(, RID1, 4), dst ## d; \ - op1 s2(, RID2, 4), dst ## d; \ - movzbl src ## bh, RID1d; \ - movzbl src ## bl, RID2d; \ - interleave_op(il_reg); \ - op2 s3(, RID1, 4), dst ## d; \ - op3 s4(, RID2, 4), dst ## d; + movzbl src ## bh, RID1d; \ + leaq s1(%rip), RID2; \ + movl (RID2, RID1, 4), dst ## d; \ + movzbl src ## bl, RID2d; \ + leaq s2(%rip), RID1; \ + op1 (RID1, RID2, 4), dst ## d; \ + shrq $16, src; \ + movzbl src ## bh, RID1d; \ + leaq s3(%rip), RID2; \ + op2 (RID2, RID1, 4), dst ## d; \ + movzbl src ## bl, RID2d; \ + leaq 
s4(%rip), RID1; \ + op3 (RID1, RID2, 4), dst ## d; \ + interleave_op(il_reg); #define dummy(d) /* do nothing */ @@ -151,15 +155,15 @@ subround(l ## 3, r ## 3, l ## 4, r ## 4, f); #define enc_preload_rkr() \ - vbroadcastss .L16_mask, RKR; \ + vbroadcastss .L16_mask(%rip), RKR; \ /* add 16-bit rotation to key rotations (mod 32) */ \ vpxor kr(CTX), RKR, RKR; #define dec_preload_rkr() \ - vbroadcastss .L16_mask, RKR; \ + vbroadcastss .L16_mask(%rip), RKR; \ /* add 16-bit rotation to key rotations (mod 32) */ \ vpxor kr(CTX), RKR, RKR; \ - vpshufb .Lbswap128_mask, RKR, RKR; + vpshufb .Lbswap128_mask(%rip), RKR, RKR; #define transpose_2x4(x0, x1, t0, t1) \ vpunpckldq x1, x0, t0; \ @@ -236,9 +240,9 @@ __cast5_enc_blk16: movq %rdi, CTX; - vmovdqa .Lbswap_mask, RKM; - vmovd .Lfirst_mask, R1ST; - vmovd .L32_mask, R32; + vmovdqa .Lbswap_mask(%rip), RKM; + vmovd .Lfirst_mask(%rip), R1ST; + vmovd .L32_mask(%rip), R32; enc_preload_rkr(); inpack_blocks(RL1, RR1, RTMP, RX, RKM); @@ -272,7 +276,7 @@ __cast5_enc_blk16: popq %rbx; popq %r15; - vmovdqa .Lbswap_mask, RKM; + vmovdqa .Lbswap_mask(%rip), RKM; outunpack_blocks(RR1, RL1, RTMP, RX, RKM); outunpack_blocks(RR2, RL2, RTMP, RX, RKM); @@ -310,9 +314,9 @@ __cast5_dec_blk16: movq %rdi, CTX; - vmovdqa .Lbswap_mask, RKM; - vmovd .Lfirst_mask, R1ST; - vmovd .L32_mask, R32; + vmovdqa .Lbswap_mask(%rip), RKM; + vmovd .Lfirst_mask(%rip), R1ST; + vmovd .L32_mask(%rip), R32; dec_preload_rkr(); inpack_blocks(RL1, RR1, RTMP, RX, RKM); @@ -343,7 +347,7 @@ __cast5_dec_blk16: round(RL, RR, 1, 2); round(RR, RL, 0, 1); - vmovdqa .Lbswap_mask, RKM; + vmovdqa .Lbswap_mask(%rip), RKM; popq %rbx; popq %r15; @@ -506,8 +510,8 @@ ENTRY(cast5_ctr_16way) vpcmpeqd RKR, RKR, RKR; vpaddq RKR, RKR, RKR; /* low: -2, high: -2 */ - vmovdqa .Lbswap_iv_mask, R1ST; - vmovdqa .Lbswap128_mask, RKM; + vmovdqa .Lbswap_iv_mask(%rip), R1ST; + vmovdqa .Lbswap128_mask(%rip), RKM; /* load IV and byteswap */ vmovq (%rcx), RX; diff --git a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S index 4f0a7cdb94d9..9879a12c243a 100644 --- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S +++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S @@ -83,16 +83,20 @@ #define lookup_32bit(src, dst, op1, op2, op3, interleave_op, il_reg) \ - movzbl src ## bh, RID1d; \ - movzbl src ## bl, RID2d; \ - shrq $16, src; \ - movl s1(, RID1, 4), dst ## d; \ - op1 s2(, RID2, 4), dst ## d; \ - movzbl src ## bh, RID1d; \ - movzbl src ## bl, RID2d; \ - interleave_op(il_reg); \ - op2 s3(, RID1, 4), dst ## d; \ - op3 s4(, RID2, 4), dst ## d; + movzbl src ## bh, RID1d; \ + leaq s1(%rip), RID2; \ + movl (RID2, RID1, 4), dst ## d; \ + movzbl src ## bl, RID2d; \ + leaq s2(%rip), RID1; \ + op1 (RID1, RID2, 4), dst ## d; \ + shrq $16, src; \ + movzbl src ## bh, RID1d; \ + leaq s3(%rip), RID2; \ + op2 (RID2, RID1, 4), dst ## d; \ + movzbl src ## bl, RID2d; \ + leaq s4(%rip), RID1; \ + op3 (RID1, RID2, 4), dst ## d; \ + interleave_op(il_reg); #define dummy(d) /* do nothing */ @@ -175,10 +179,10 @@ qop(RD, RC, 1); #define shuffle(mask) \ - vpshufb mask, RKR, RKR; + vpshufb mask(%rip), RKR, RKR; #define preload_rkr(n, do_mask, mask) \ - vbroadcastss .L16_mask, RKR; \ + vbroadcastss .L16_mask(%rip), RKR; \ /* add 16-bit rotation to key rotations (mod 32) */ \ vpxor (kr+n*16)(CTX), RKR, RKR; \ do_mask(mask); @@ -260,9 +264,9 @@ __cast6_enc_blk8: movq %rdi, CTX; - vmovdqa .Lbswap_mask, RKM; - vmovd .Lfirst_mask, R1ST; - vmovd .L32_mask, R32; + vmovdqa .Lbswap_mask(%rip), RKM; + vmovd .Lfirst_mask(%rip), R1ST; + vmovd 
.L32_mask(%rip), R32; inpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM); inpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); @@ -286,7 +290,7 @@ __cast6_enc_blk8: popq %rbx; popq %r15; - vmovdqa .Lbswap_mask, RKM; + vmovdqa .Lbswap_mask(%rip), RKM; outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM); outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); @@ -308,9 +312,9 @@ __cast6_dec_blk8: movq %rdi, CTX; - vmovdqa .Lbswap_mask, RKM; - vmovd .Lfirst_mask, R1ST; - vmovd .L32_mask, R32; + vmovdqa .Lbswap_mask(%rip), RKM; + vmovd .Lfirst_mask(%rip), R1ST; + vmovd .L32_mask(%rip), R32; inpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM); inpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); @@ -334,7 +338,7 @@ __cast6_dec_blk8: popq %rbx; popq %r15; - vmovdqa .Lbswap_mask, RKM; + vmovdqa .Lbswap_mask(%rip), RKM; outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM); outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S index 7fca43099a5f..e51dcf8c7eb7 100644 --- a/arch/x86/crypto/des3_ede-asm_64.S +++ b/arch/x86/crypto/des3_ede-asm_64.S @@ -129,21 +129,29 @@ movzbl RW0bl, RT2d; \ movzbl RW0bh, RT3d; \ shrq $16, RW0; \ - movq s8(, RT0, 8), RT0; \ - xorq s6(, RT1, 8), to; \ + leaq s8(%rip), RW1; \ + movq (RW1, RT0, 8), RT0; \ + leaq s6(%rip), RW1; \ + xorq (RW1, RT1, 8), to; \ movzbl RW0bl, RL1d; \ movzbl RW0bh, RT1d; \ shrl $16, RW0d; \ - xorq s4(, RT2, 8), RT0; \ - xorq s2(, RT3, 8), to; \ + leaq s4(%rip), RW1; \ + xorq (RW1, RT2, 8), RT0; \ + leaq s2(%rip), RW1; \ + xorq (RW1, RT3, 8), to; \ movzbl RW0bl, RT2d; \ movzbl RW0bh, RT3d; \ - xorq s7(, RL1, 8), RT0; \ - xorq s5(, RT1, 8), to; \ - xorq s3(, RT2, 8), RT0; \ + leaq s7(%rip), RW1; \ + xorq (RW1, RL1, 8), RT0; \ + leaq s5(%rip), RW1; \ + xorq (RW1, RT1, 8), to; \ + leaq s3(%rip), RW1; \ + xorq (RW1, RT2, 8), RT0; \ load_next_key(n, RW0); \ xorq RT0, to; \ - xorq s1(, RT3, 8), to; \ + leaq s1(%rip), RW1; \ + xorq (RW1, RT3, 8), to; \ #define load_next_key(n, RWx) \ movq (((n) + 1) * 8)(CTX), RWx; @@ -355,65 +363,89 @@ ENDPROC(des3_ede_x86_64_crypt_blk) movzbl RW0bl, RT3d; \ movzbl RW0bh, RT1d; \ shrq $16, RW0; \ - xorq s8(, RT3, 8), to##0; \ - xorq s6(, RT1, 8), to##0; \ + leaq s8(%rip), RT2; \ + xorq (RT2, RT3, 8), to##0; \ + leaq s6(%rip), RT2; \ + xorq (RT2, RT1, 8), to##0; \ movzbl RW0bl, RT3d; \ movzbl RW0bh, RT1d; \ shrq $16, RW0; \ - xorq s4(, RT3, 8), to##0; \ - xorq s2(, RT1, 8), to##0; \ + leaq s4(%rip), RT2; \ + xorq (RT2, RT3, 8), to##0; \ + leaq s2(%rip), RT2; \ + xorq (RT2, RT1, 8), to##0; \ movzbl RW0bl, RT3d; \ movzbl RW0bh, RT1d; \ shrl $16, RW0d; \ - xorq s7(, RT3, 8), to##0; \ - xorq s5(, RT1, 8), to##0; \ + leaq s7(%rip), RT2; \ + xorq (RT2, RT3, 8), to##0; \ + leaq s5(%rip), RT2; \ + xorq (RT2, RT1, 8), to##0; \ movzbl RW0bl, RT3d; \ movzbl RW0bh, RT1d; \ load_next_key(n, RW0); \ - xorq s3(, RT3, 8), to##0; \ - xorq s1(, RT1, 8), to##0; \ + leaq s3(%rip), RT2; \ + xorq (RT2, RT3, 8), to##0; \ + leaq s1(%rip), RT2; \ + xorq (RT2, RT1, 8), to##0; \ xorq from##1, RW1; \ movzbl RW1bl, RT3d; \ movzbl RW1bh, RT1d; \ shrq $16, RW1; \ - xorq s8(, RT3, 8), to##1; \ - xorq s6(, RT1, 8), to##1; \ + leaq s8(%rip), RT2; \ + xorq (RT2, RT3, 8), to##1; \ + leaq s6(%rip), RT2; \ + xorq (RT2, RT1, 8), to##1; \ movzbl RW1bl, RT3d; \ movzbl RW1bh, RT1d; \ shrq $16, RW1; \ - xorq s4(, RT3, 8), to##1; \ - xorq s2(, RT1, 8), to##1; \ + leaq s4(%rip), RT2; \ + xorq (RT2, RT3, 8), to##1; \ + leaq s2(%rip), RT2; \ + xorq 
(RT2, RT1, 8), to##1; \ movzbl RW1bl, RT3d; \ movzbl RW1bh, RT1d; \ shrl $16, RW1d; \ - xorq s7(, RT3, 8), to##1; \ - xorq s5(, RT1, 8), to##1; \ + leaq s7(%rip), RT2; \ + xorq (RT2, RT3, 8), to##1; \ + leaq s5(%rip), RT2; \ + xorq (RT2, RT1, 8), to##1; \ movzbl RW1bl, RT3d; \ movzbl RW1bh, RT1d; \ do_movq(RW0, RW1); \ - xorq s3(, RT3, 8), to##1; \ - xorq s1(, RT1, 8), to##1; \ + leaq s3(%rip), RT2; \ + xorq (RT2, RT3, 8), to##1; \ + leaq s1(%rip), RT2; \ + xorq (RT2, RT1, 8), to##1; \ xorq from##2, RW2; \ movzbl RW2bl, RT3d; \ movzbl RW2bh, RT1d; \ shrq $16, RW2; \ - xorq s8(, RT3, 8), to##2; \ - xorq s6(, RT1, 8), to##2; \ + leaq s8(%rip), RT2; \ + xorq (RT2, RT3, 8), to##2; \ + leaq s6(%rip), RT2; \ + xorq (RT2, RT1, 8), to##2; \ movzbl RW2bl, RT3d; \ movzbl RW2bh, RT1d; \ shrq $16, RW2; \ - xorq s4(, RT3, 8), to##2; \ - xorq s2(, RT1, 8), to##2; \ + leaq s4(%rip), RT2; \ + xorq (RT2, RT3, 8), to##2; \ + leaq s2(%rip), RT2; \ + xorq (RT2, RT1, 8), to##2; \ movzbl RW2bl, RT3d; \ movzbl RW2bh, RT1d; \ shrl $16, RW2d; \ - xorq s7(, RT3, 8), to##2; \ - xorq s5(, RT1, 8), to##2; \ + leaq s7(%rip), RT2; \ + xorq (RT2, RT3, 8), to##2; \ + leaq s5(%rip), RT2; \ + xorq (RT2, RT1, 8), to##2; \ movzbl RW2bl, RT3d; \ movzbl RW2bh, RT1d; \ do_movq(RW0, RW2); \ - xorq s3(, RT3, 8), to##2; \ - xorq s1(, RT1, 8), to##2; + leaq s3(%rip), RT2; \ + xorq (RT2, RT3, 8), to##2; \ + leaq s1(%rip), RT2; \ + xorq (RT2, RT1, 8), to##2; #define __movq(src, dst) \ movq src, dst; diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S index 5d53effe8abe..f8029074a99e 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_asm.S +++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S @@ -94,7 +94,7 @@ ENTRY(clmul_ghash_mul) FRAME_BEGIN movups (%rdi), DATA movups (%rsi), SHASH - movaps .Lbswap_mask, BSWAP + movaps .Lbswap_mask(%rip), BSWAP PSHUFB_XMM BSWAP DATA call __clmul_gf128mul_ble PSHUFB_XMM BSWAP DATA @@ -111,7 +111,7 @@ ENTRY(clmul_ghash_update) FRAME_BEGIN cmp $16, %rdx jb .Lupdate_just_ret # check length - movaps .Lbswap_mask, BSWAP + movaps .Lbswap_mask(%rip), BSWAP movups (%rdi), DATA movups (%rcx), SHASH PSHUFB_XMM BSWAP DATA diff --git a/arch/x86/crypto/glue_helper-asm-avx.S b/arch/x86/crypto/glue_helper-asm-avx.S index d08fc575ef7f..a9736f85fef0 100644 --- a/arch/x86/crypto/glue_helper-asm-avx.S +++ b/arch/x86/crypto/glue_helper-asm-avx.S @@ -44,7 +44,7 @@ #define load_ctr_8way(iv, bswap, x0, x1, x2, x3, x4, x5, x6, x7, t0, t1, t2) \ vpcmpeqd t0, t0, t0; \ vpsrldq $8, t0, t0; /* low: -1, high: 0 */ \ - vmovdqa bswap, t1; \ + vmovdqa bswap(%rip), t1; \ \ /* load IV and byteswap */ \ vmovdqu (iv), x7; \ @@ -89,7 +89,7 @@ #define load_xts_8way(iv, src, dst, x0, x1, x2, x3, x4, x5, x6, x7, tiv, t0, \ t1, xts_gf128mul_and_shl1_mask) \ - vmovdqa xts_gf128mul_and_shl1_mask, t0; \ + vmovdqa xts_gf128mul_and_shl1_mask(%rip), t0; \ \ /* load IV */ \ vmovdqu (iv), tiv; \ diff --git a/arch/x86/crypto/glue_helper-asm-avx2.S b/arch/x86/crypto/glue_helper-asm-avx2.S index d84508c85c13..efbf4953707e 100644 --- a/arch/x86/crypto/glue_helper-asm-avx2.S +++ b/arch/x86/crypto/glue_helper-asm-avx2.S @@ -62,7 +62,7 @@ vmovdqu (iv), t2x; \ vmovdqa t2x, t3x; \ inc_le128(t2x, t0x, t1x); \ - vbroadcasti128 bswap, t1; \ + vbroadcasti128 bswap(%rip), t1; \ vinserti128 $1, t2x, t3, t2; /* ab: le0 ; cd: le1 */ \ vpshufb t1, t2, x0; \ \ @@ -119,13 +119,13 @@ tivx, t0, t0x, t1, t1x, t2, t2x, t3, \ xts_gf128mul_and_shl1_mask_0, \ xts_gf128mul_and_shl1_mask_1) \ - vbroadcasti128 xts_gf128mul_and_shl1_mask_0, 
t1; \ + vbroadcasti128 xts_gf128mul_and_shl1_mask_0(%rip), t1; \ \ /* load IV and construct second IV */ \ vmovdqu (iv), tivx; \ vmovdqa tivx, t0x; \ gf128mul_x_ble(tivx, t1x, t2x); \ - vbroadcasti128 xts_gf128mul_and_shl1_mask_1, t2; \ + vbroadcasti128 xts_gf128mul_and_shl1_mask_1(%rip), t2; \ vinserti128 $1, tivx, t0, tiv; \ vpxor (0*32)(src), tiv, x0; \ vmovdqu tiv, (0*32)(dst); \ diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S index 1420db15dcdd..e7730d93cceb 100644 --- a/arch/x86/crypto/sha256-avx2-asm.S +++ b/arch/x86/crypto/sha256-avx2-asm.S @@ -592,19 +592,23 @@ last_block_enter: .align 16 loop1: - vpaddd K256+0*32(SRND), X0, XFER + leaq K256(%rip), INP + vpaddd 0*32(INP, SRND), X0, XFER vmovdqa XFER, 0*32+_XFER(%rsp, SRND) FOUR_ROUNDS_AND_SCHED _XFER + 0*32 - vpaddd K256+1*32(SRND), X0, XFER + leaq K256(%rip), INP + vpaddd 1*32(INP, SRND), X0, XFER vmovdqa XFER, 1*32+_XFER(%rsp, SRND) FOUR_ROUNDS_AND_SCHED _XFER + 1*32 - vpaddd K256+2*32(SRND), X0, XFER + leaq K256(%rip), INP + vpaddd 2*32(INP, SRND), X0, XFER vmovdqa XFER, 2*32+_XFER(%rsp, SRND) FOUR_ROUNDS_AND_SCHED _XFER + 2*32 - vpaddd K256+3*32(SRND), X0, XFER + leaq K256(%rip), INP + vpaddd 3*32(INP, SRND), X0, XFER vmovdqa XFER, 3*32+_XFER(%rsp, SRND) FOUR_ROUNDS_AND_SCHED _XFER + 3*32 @@ -614,11 +618,13 @@ loop1: loop2: ## Do last 16 rounds with no scheduling - vpaddd K256+0*32(SRND), X0, XFER + leaq K256(%rip), INP + vpaddd 0*32(INP, SRND), X0, XFER vmovdqa XFER, 0*32+_XFER(%rsp, SRND) DO_4ROUNDS _XFER + 0*32 - vpaddd K256+1*32(SRND), X1, XFER + leaq K256(%rip), INP + vpaddd 1*32(INP, SRND), X1, XFER vmovdqa XFER, 1*32+_XFER(%rsp, SRND) DO_4ROUNDS _XFER + 1*32 add $2*32, SRND From patchwork Tue Jul 30 19:12:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066617 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 52DBD1399 for ; Tue, 30 Jul 2019 19:13:39 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4A2D228867 for ; Tue, 30 Jul 2019 19:13:39 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4883B28872; Tue, 30 Jul 2019 19:13:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id 87E8D288B7 for ; Tue, 30 Jul 2019 19:13:38 +0000 (UTC) Received: (qmail 28018 invoked by uid 550); 30 Jul 2019 19:13:24 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 27913 invoked from network); 30 Jul 2019 19:13:24 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cCMnXk6r3FPny+fs1rFc2SWMhw3bJ0loL2W4TiBySr8=; b=m478ElFD7q//7AsCxpL9OrAKQBrInm/GCtxGDID+1HbrAnQFF52Ffax0qaupTPDbf/ 
GHWHftjDWsObpuzqxd4JH6kFG9bWkMamNih32q2W6VEU2kHfJUJMUiqfHVVZTC06mUOj MC1EU6K7NgoUvlEjAb13R+v1h/BbOjctlPBjk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cCMnXk6r3FPny+fs1rFc2SWMhw3bJ0loL2W4TiBySr8=; b=kNecb1Jgm48ozYSJxSbSNrkw8I52NnXWWwgK9WVqMRjvhBq6jre2HMlrQZTvLVLVKI KTQYgnE2HGYGN/0smN4Z735mSrjQ+ZaGmKhOIRR/aywryIaOqisSZ7fggaQIesfHYrah juu7hSdQourBvgv9O5T/iPL251s5UntIiLoVv9ZGdBlAOTz8rcWrGbtXP0zpBK8fXbwk 5ELEukW0xjvrEjT0VQkEOjdXNxIOxw6hHGYQ+Ii1dLdecspoFs7yChO3bdzOss5g17YU CGwPDlpQHRZ7xlJf6XD2cTp9TPORsw5R6B3Mn3eS6RU5qJbJdctfFSCIvHJz8q8ki9T1 GT7w== X-Gm-Message-State: APjAAAXt3k9+VAQr8ri2TfXVx1/Sd0uNvx9fmLAGQzBVJVKzjuErqBL4 PZ8wu5qW2wvEunm2Im+SdDWpNDrKxlg= X-Google-Smtp-Source: APXvYqxEEHpoFd295wN4FVenXhrnLJncXmAxP37HqjVX+fNi4I8BrYTgPlqRHeP0sCOCSagbyTiddw== X-Received: by 2002:a63:fd57:: with SMTP id m23mr44942158pgj.204.1564513992259; Tue, 30 Jul 2019 12:13:12 -0700 (PDT) From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, Peter Zijlstra , Nadav Amit , Jann Horn , linux-kernel@vger.kernel.org Subject: [PATCH v9 02/11] x86: Add macro to get symbol address for PIE support Date: Tue, 30 Jul 2019 12:12:46 -0700 Message-Id: <20190730191303.206365-3-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Add a new _ASM_MOVABS macro to fetch a symbol address. It will be used to replace "_ASM_MOV $, %dst" code construct that are not compatible with PIE. 
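For illustration only (not part of the diff below), a minimal sketch of the difference on x86_64, using a hypothetical symbol my_sym: the plain mov form encodes a sign-extended 32-bit immediate, so the symbol address must fit in the -2 GB kernel mapping, while movabs carries a full 64-bit immediate that boot-time relocation can patch to any address.

    movq    $my_sym, %rax       # _ASM_MOV form: imm32, assumes the -2 GB kernel mapping
    movabsq $my_sym, %rax       # _ASM_MOVABS form: imm64, usable with the extended KASLR range

On 32-bit builds __ASM_SEL selects the movl variant, so the macro can be used unconditionally in common code.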
Signed-off-by: Thomas Garnier --- arch/x86/include/asm/asm.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h index 3ff577c0b102..3a686057e882 100644 --- a/arch/x86/include/asm/asm.h +++ b/arch/x86/include/asm/asm.h @@ -30,6 +30,7 @@ #define _ASM_ALIGN __ASM_SEL(.balign 4, .balign 8) #define _ASM_MOV __ASM_SIZE(mov) +#define _ASM_MOVABS __ASM_SEL(movl, movabsq) #define _ASM_INC __ASM_SIZE(inc) #define _ASM_DEC __ASM_SIZE(dec) #define _ASM_ADD __ASM_SIZE(add) From patchwork Tue Jul 30 19:12:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066625 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 25A291399 for ; Tue, 30 Jul 2019 19:14:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1C1D828862 for ; Tue, 30 Jul 2019 19:14:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1005C288A9; Tue, 30 Jul 2019 19:14:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id 4FAA0288C3 for ; Tue, 30 Jul 2019 19:14:01 +0000 (UTC) Received: (qmail 28283 invoked by uid 550); 30 Jul 2019 19:13:27 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28099 invoked from network); 30 Jul 2019 19:13:26 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=W0NzhAJDW6f6zv48MhA58X21t6ZEAVN81pjWGMonLUQ=; b=OjB3x2X1IjXnZyRT0y4ci0Z+ZmocMVw3jXB+231CIxuQ+Vi+vnoHGPII6xu43tIQCC izbGp42+Tr1lJM8CDyrUkPzRGeM0gleYKorJYw2ygKdUkeHux+isppWd00bdLc2rwBsV 7BjDP8/gm1rSJVMA09kWAibgoOJb+bvzAzmtA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=W0NzhAJDW6f6zv48MhA58X21t6ZEAVN81pjWGMonLUQ=; b=BiUDFtQE3Pp3HkY9IlFlXUh6ybWKfe0bsWwG/u4o0ZsOjdCGm/TukdTBzPl0psbRjm GJXlF4ulstsJGpirpaWPKKCeIpnW3h1PwhzfmtrKSrZYmwLf0lZ/YBxM8x/+gntlL1EP BLYfVuXxvP390Ylz+Xd84/zVfz+ggRebTL5M4JPm7m/OEqRdFRG94UaV9KLqA6qDydCB 9nUUPnCYynfxemIQSRNuOtq91ZpEo7Xm8Xpj1t4AhVfZMT8hrOxNQsdaoy1IBi3XQ2/1 rY0ZWF3azbt55XRhjoQiChKWk2IX5MhkGB5Y4tfZADbrA1ldFF+1xHFMxmG9rHR8vJFf WIsA== X-Gm-Message-State: APjAAAVJ5Wb5gCCoqOIEprvMk7tgVPn89blVl0Gf7LB6DymXUv+I6FSH gx11O9eB42vZ6kSb5W4+v4CSFquORMo= X-Google-Smtp-Source: APXvYqwsIu5EjDisGZPc1G7mJNiIJ7XnbgYepkvqZhs11fZLulhf+xJkltnPFGWfc108tM55QvLPhQ== X-Received: by 2002:a17:902:2ae7:: with SMTP id j94mr116539349plb.270.1564513994552; Tue, 30 Jul 2019 12:13:14 -0700 (PDT) From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Thomas Gleixner , 
Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, Allison Randal , Alexios Zavras , linux-kernel@vger.kernel.org Subject: [PATCH v9 03/11] x86: relocate_kernel - Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:47 -0700 Message-Id: <20190730191303.206365-4-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change the assembly code to use only absolute references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Reviewed-by: Kees Cook --- arch/x86/kernel/relocate_kernel_64.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S index c51ccff5cd01..c72889b09840 100644 --- a/arch/x86/kernel/relocate_kernel_64.S +++ b/arch/x86/kernel/relocate_kernel_64.S @@ -206,7 +206,7 @@ identity_mapped: movq %rax, %cr3 lea PAGE_SIZE(%r8), %rsp call swap_pages - movq $virtual_mapped, %rax + movabsq $virtual_mapped, %rax pushq %rax ret From patchwork Tue Jul 30 19:12:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066621 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8A42113A0 for ; Tue, 30 Jul 2019 19:13:54 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7FBA22857E for ; Tue, 30 Jul 2019 19:13:54 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 73FC128867; Tue, 30 Jul 2019 19:13:54 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id A27EF28872 for ; Tue, 30 Jul 2019 19:13:53 +0000 (UTC) Received: (qmail 28323 invoked by uid 550); 30 Jul 2019 19:13:28 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28255 invoked from network); 30 Jul 2019 19:13:27 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=g8F81aPRcXP2Hl/h5880SBWLCYrdCb9mGhwAq1nau08=; b=dUh629CxLDWe2ICkvFeC4qXjIQGvvsjijYV4+IzjvPROpIi+iYdIzNRfF6AXW9yD7j 6av49IFog/hErgPxDo8LO6KszfGUsQuRVtAukrJN89JRjJOTeTiYRi4ZcCu0cfmC3eUo 7R5M5POv61QFetY8gle4w1aDZ1RV9FSIBJtlE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=g8F81aPRcXP2Hl/h5880SBWLCYrdCb9mGhwAq1nau08=; 
b=TG0zFdQ06xzexvX9l5pEUhtHbJ2f49xsLsm2f18lqqWCGWDjTMyvBsRWCgx3TxN8t+ p432syLc4yWNGlRckGtpYe2wmxpS2mHyvICMbnShFWEfEbzLjxKripnkCMAf7WdwPHjA eS20LWrnNKMH+ZARk8LewCeEwkxupI+iJU/zZTSE1LZAE1Xx+Wr2szxZoYt04GzN0P08 pIkD9uPFBQ++BjivHl0DuvFQjvjByrvce7RwZeuwpt/S73D1MnMF+LeLmXap0LH/eKCQ +mMh+2j/MGD22Y7Lj/63wuM35RiuW79W6bQFHz4SSYLQr7eIX3UmOWlDDKTDVPgiXCWu wGNQ== X-Gm-Message-State: APjAAAUPmyrkwSekH8yNzLE4q10z5eeWGhvrC/o/ly6WCzO5wtbae6Gi Du6CysaUjMlvTH7v/Ik9I/NCx5P+3yw= X-Google-Smtp-Source: APXvYqwGmbprmSHopN+BDz1XZeLfq9alpJ09UTr1PphowM1pkkwfMHHOEjFK6E/ulqm8Y3g/IHWAbQ== X-Received: by 2002:a65:6108:: with SMTP id z8mr78911177pgu.289.1564513995418; Tue, 30 Jul 2019 12:13:15 -0700 (PDT) From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v9 04/11] x86/entry/64: Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:48 -0700 Message-Id: <20190730191303.206365-5-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change the assembly code to use only relative references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Reviewed-by: Kees Cook --- arch/x86/entry/entry_64.S | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 3f5a978a02a7..4b588a902009 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -1317,7 +1317,8 @@ ENTRY(error_entry) movl %ecx, %eax /* zero extend */ cmpq %rax, RIP+8(%rsp) je .Lbstep_iret - cmpq $.Lgs_change, RIP+8(%rsp) + leaq .Lgs_change(%rip), %rcx + cmpq %rcx, RIP+8(%rsp) jne .Lerror_entry_done /* @@ -1514,10 +1515,10 @@ ENTRY(nmi) * resume the outer NMI. 
*/ - movq $repeat_nmi, %rdx + leaq repeat_nmi(%rip), %rdx cmpq 8(%rsp), %rdx ja 1f - movq $end_repeat_nmi, %rdx + leaq end_repeat_nmi(%rip), %rdx cmpq 8(%rsp), %rdx ja nested_nmi_out 1: @@ -1571,7 +1572,8 @@ nested_nmi: pushq %rdx pushfq pushq $__KERNEL_CS - pushq $repeat_nmi + leaq repeat_nmi(%rip), %rdx + pushq %rdx /* Put stack back */ addq $(6*8), %rsp @@ -1610,7 +1612,11 @@ first_nmi: addq $8, (%rsp) /* Fix up RSP */ pushfq /* RFLAGS */ pushq $__KERNEL_CS /* CS */ - pushq $1f /* RIP */ + pushq $0 /* Future return address */ + pushq %rax /* Save RAX */ + leaq 1f(%rip), %rax /* RIP */ + movq %rax, 8(%rsp) /* Put 1f on return address */ + popq %rax /* Restore RAX */ iretq /* continues at repeat_nmi below */ UNWIND_HINT_IRET_REGS 1: From patchwork Tue Jul 30 19:12:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066629 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 59B5813A0 for ; Tue, 30 Jul 2019 19:14:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4F66928896 for ; Tue, 30 Jul 2019 19:14:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 43D66288B4; Tue, 30 Jul 2019 19:14:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id B802B28896 for ; Tue, 30 Jul 2019 19:14:09 +0000 (UTC) Received: (qmail 28409 invoked by uid 550); 30 Jul 2019 19:13:31 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28334 invoked from network); 30 Jul 2019 19:13:29 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=okwXTawHyuFa4icd55twJCHtmhUjDA75Nfe9+fhiSRI=; b=WTaj/tCWBI6QVlq4nKTsAxYz1lBPqS7a9eK0QDXPMmbAPy3XkclyuEpGhNiM9+Qc+a GfZPY1tJuv8B58Sdyy1D7YxyiHEVqzCtOyRukRfXTl4S9nHMyfrDJWQu9PKz6P5BMI8I gB3+kX3pX//wpXa3Ez7mNRLg83QLHpddaT690= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=okwXTawHyuFa4icd55twJCHtmhUjDA75Nfe9+fhiSRI=; b=U6RJlUgYb8sAph6ZbUyWCNbL2FaFokBuxzvH7/QL70h4p09mcGCYuZQBtsOtAaAwzm ngbZelKa57o2nCDm1nb6/rIOs6Sr1kC/jYxDL3rV9HfdH4Y0EeHYMm8b5Er+7fL5Q2qr SWqmbdDP38wzQVl+el2sst7dQU4MF6kCqzk4OkS7uouDBO+cxbary3jLWxtj36FyoKqv Buq1UyLK+oL0iWJhsJNN+GJ2o6qM6qHKouTsNhhrldzmoAfy6hQljk8iRmLpmYDQZIih b90zoDiC2iOwtC9+C/J7lYTROHpvhNhIc9wNY+Jrt7WPoBJl4mEAxjHJ+vII9P0G+8Yx /WHA== X-Gm-Message-State: APjAAAUTmMHe2jwRrBqL2Joxgt0xWpklvfdeH6o+cmDTrbOGC2ZehxLb /mmIErCyz5E7uRHYyKD+439jR0yE9k4= X-Google-Smtp-Source: APXvYqyboxn+AlEAdie6u7CC57V8sWfALD8FpbLQHIYoEr0y0DEAHDVqK2YOnXcRGhf2UDGdqqHftQ== X-Received: by 2002:a63:184b:: with SMTP id 
11mr49666372pgy.112.1564513997493; Tue, 30 Jul 2019 12:13:17 -0700 (PDT) From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v9 05/11] x86: pm-trace - Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:49 -0700 Message-Id: <20190730191303.206365-6-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change assembly to use the new _ASM_MOVABS macro instead of _ASM_MOV for the assembly to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Reviewed-by: Kees Cook --- arch/x86/include/asm/pm-trace.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/include/asm/pm-trace.h b/arch/x86/include/asm/pm-trace.h index bfa32aa428e5..972070806ce9 100644 --- a/arch/x86/include/asm/pm-trace.h +++ b/arch/x86/include/asm/pm-trace.h @@ -8,7 +8,7 @@ do { \ if (pm_trace_enabled) { \ const void *tracedata; \ - asm volatile(_ASM_MOV " $1f,%0\n" \ + asm volatile(_ASM_MOVABS " $1f,%0\n" \ ".section .tracedata,\"a\"\n" \ "1:\t.word %c1\n\t" \ _ASM_PTR " %c2\n" \ From patchwork Tue Jul 30 19:12:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066631 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 279F41399 for ; Tue, 30 Jul 2019 19:14:20 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1DE1A2887A for ; Tue, 30 Jul 2019 19:14:20 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 12152288A0; Tue, 30 Jul 2019 19:14:20 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id 4E2F82887A for ; Tue, 30 Jul 2019 19:14:19 +0000 (UTC) Received: (qmail 28444 invoked by uid 550); 30 Jul 2019 19:13:32 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28380 invoked from network); 30 Jul 2019 19:13:31 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Qa0vdqU/e4QPHFVnoNOrEozPMumAGtkoZQ9IP0jYnkg=; b=m0iQTXUNy9Ivxk8S8rx6UuqUrCNk/lmb9ABT6tX720KuMWsD28POJqIHBW6/ecVDNQ 1wbWq2OFtsCV/gCMaCWhi/N9ivwFCLXmf8NoCB80kLVtvh5oGNjC8TdkekyoWfngXYpv g4yOAr38UwABJuJh4u7l9MRi4cavnGZbRi9Oc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Qa0vdqU/e4QPHFVnoNOrEozPMumAGtkoZQ9IP0jYnkg=; b=FXbcu3BZNRl6gYWYsaxwgddr2VwA6Yho2bFXQzmzyi8i84ygTU/+vyKkZENYV9BtVi ejjN+xRHm7bCo8zgZbnmaqN7c2nJJkRRlQirBFSpx2NiYiy7Wwpj8KgeHh+z5U/wB/vf GJU54ohurtxUadJaoaD2dW5TXpJE41iXhUYEIDSzFQ81Tj5/pw0b8rsFgYwhXPN94Kgi ECI+Ytc1wetNDkIJU34hNpYWlQa1YFmmSJIzvwGAGx8WcZoXrZ8pVwxb9kzOjMpLPSei zNGOy0KtzFuAauaK5s+9QCjep0fsXV5AHV7AW/Pyk/SpJ8Cx3CSzmVwK2JAWm14ZO6CO 543w== X-Gm-Message-State: APjAAAW8EPsWd72xKmgNieDlZFdUv6GQNrkN1M39uwvrYt84yzNH+Nkv q9pOw8gowwBo6eHw0RjVLAVM9bozQwE= X-Google-Smtp-Source: APXvYqw4FhQAyLlzTXXeXnWi4AdlBXSr5qODFDIUZbZ8fdBVPq/Zyh+2sg3WQmjXFcNGWYaH8U26QA== X-Received: by 2002:a17:90a:b115:: with SMTP id z21mr56057575pjq.64.1564513999646; Tue, 30 Jul 2019 12:13:19 -0700 (PDT) From: Thomas Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, "Peter Zijlstra (Intel)" , Andrew Morton , Len Brown , Andy Lutomirski , linux-kernel@vger.kernel.org Subject: [PATCH v9 06/11] x86/CPU: Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:50 -0700 Message-Id: <20190730191303.206365-7-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change the assembly code to use only relative references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. 
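The transformation here follows the pattern used across the series: rather than pushing a label address as a sign-extended 32-bit immediate, the address is first materialized with a RIP-relative lea and the register is pushed. A minimal sketch (illustration only, with a hypothetical label target and %rax as the scratch register):

    # absolute form: imm32 requires the label to sit in the -2 GB kernel mapping
    pushq   $target
    # RIP-relative form: valid at any load address, at the cost of a scratch register
    leaq    target(%rip), %rax
    pushq   %rax

In sync_core() the existing tmp output operand is reused as that scratch register, so no additional register is clobbered.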
Signed-off-by: Thomas Garnier --- arch/x86/include/asm/processor.h | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 6e0a3b43d027..bf333d62889e 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -713,11 +713,13 @@ static inline void sync_core(void) "pushfq\n\t" "mov %%cs, %0\n\t" "pushq %q0\n\t" - "pushq $1f\n\t" + "leaq 1f(%%rip), %q0\n\t" + "pushq %q0\n\t" "iretq\n\t" UNWIND_HINT_RESTORE "1:" - : "=&r" (tmp), ASM_CALL_CONSTRAINT : : "cc", "memory"); + : "=&r" (tmp), ASM_CALL_CONSTRAINT + : : "cc", "memory"); #endif } From patchwork Tue Jul 30 19:12:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066633 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 463E713A0 for ; Tue, 30 Jul 2019 19:14:32 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3E569285DA for ; Tue, 30 Jul 2019 19:14:32 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 32AA828872; Tue, 30 Jul 2019 19:14:32 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.3 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.wl.linuxfoundation.org (Postfix) with SMTP id 579FE28867 for ; Tue, 30 Jul 2019 19:14:31 +0000 (UTC) Received: (qmail 28491 invoked by uid 550); 30 Jul 2019 19:13:33 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28415 invoked from network); 30 Jul 2019 19:13:32 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=9kWf22Rclzq8G5FoRJbzmMz7xyL94toIUGR9urAcRyA=; b=F2smkGoK8KcptIdvV2SjQfrL6PlXipy7JjEVMHgzprpHrwcKNVKa93zl9Fymh8BUlh fpA3d71j1qBWHNgMomuekNiCQ8MlumiE/acWNOlnavR9gnndo7YWMyqkKvctTUhMJcWm 5nm1cizH8+WJfF2ZVgUHYpKcxa3wRzq+Z8spk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=9kWf22Rclzq8G5FoRJbzmMz7xyL94toIUGR9urAcRyA=; b=Fl60l8Xq8TEECKWon504kA95TlpO45RUeGCTWJ1qCZc/G3+Rahded8gY+2nmw00LST k7oq95rqDmULEXNawB2K/gTUyqP7sftWPOyBUsQb+OIXco9Pl9bWVpCkqxt1UV4gfe1Q ppadr8YZimXKjuwj3mRsm0QQTMZSA4ecnrQrKIgKKWpP17sKvF561kT9QKQbylaOPZCq ZxC9J1/Qer4N9Rp7+GgIZ+6LBnRrUmfVlXf13YQLtvVIhC7ALONKbKBCv2342duS+ZTf B7seJS2+RLgcP0UkjYnE2Q8cJXbsI2ohOVNxCCoL+8c/NYqVdKP5Fzii7hdyVoQiusN3 eByQ== X-Gm-Message-State: APjAAAVxyfpXNyxzlrVe/A7fphr+VPvAsWRX7BhSiyH+Y2eAvg4GWCmu CSCIFkVmtEuDXahFO0IY/l+g7vFBTF8= X-Google-Smtp-Source: APXvYqzr6BK0Xn2ozWdzr0KbSdWi9wDSK/YZN9WyQ63sDSm1hYENlMW/XPd2uWlP9oB/IStB2Eg/AA== X-Received: by 2002:a63:784c:: with SMTP id t73mr113500712pgc.268.1564514000500; Tue, 30 Jul 2019 12:13:20 -0700 (PDT) From: Thomas 
Garnier To: kernel-hardening@lists.openwall.com Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier , Pavel Machek , "Rafael J . Wysocki" , "Rafael J. Wysocki" , Len Brown , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v9 07/11] x86/acpi: Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:51 -0700 Message-Id: <20190730191303.206365-8-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change the assembly code to use only relative references of symbols for the kernel to be PIE compatible. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier Acked-by: Pavel Machek Acked-by: Rafael J. Wysocki Reviewed-by: Kees Cook --- arch/x86/kernel/acpi/wakeup_64.S | 31 ++++++++++++++++--------------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S index b0715c3ac18d..3ec6c1b74ad4 100644 --- a/arch/x86/kernel/acpi/wakeup_64.S +++ b/arch/x86/kernel/acpi/wakeup_64.S @@ -15,7 +15,7 @@ * Hooray, we are in Long 64-bit mode (but still running in low memory) */ ENTRY(wakeup_long64) - movq saved_magic, %rax + movq saved_magic(%rip), %rax movq $0x123456789abcdef0, %rdx cmpq %rdx, %rax jne bogus_64_magic @@ -26,14 +26,14 @@ ENTRY(wakeup_long64) movw %ax, %es movw %ax, %fs movw %ax, %gs - movq saved_rsp, %rsp + movq saved_rsp(%rip), %rsp - movq saved_rbx, %rbx - movq saved_rdi, %rdi - movq saved_rsi, %rsi - movq saved_rbp, %rbp + movq saved_rbx(%rip), %rbx + movq saved_rdi(%rip), %rdi + movq saved_rsi(%rip), %rsi + movq saved_rbp(%rip), %rbp - movq saved_rip, %rax + movq saved_rip(%rip), %rax jmp *%rax ENDPROC(wakeup_long64) @@ -46,7 +46,7 @@ ENTRY(do_suspend_lowlevel) xorl %eax, %eax call save_processor_state - movq $saved_context, %rax + leaq saved_context(%rip), %rax movq %rsp, pt_regs_sp(%rax) movq %rbp, pt_regs_bp(%rax) movq %rsi, pt_regs_si(%rax) @@ -65,13 +65,14 @@ ENTRY(do_suspend_lowlevel) pushfq popq pt_regs_flags(%rax) - movq $.Lresume_point, saved_rip(%rip) + leaq .Lresume_point(%rip), %rax + movq %rax, saved_rip(%rip) - movq %rsp, saved_rsp - movq %rbp, saved_rbp - movq %rbx, saved_rbx - movq %rdi, saved_rdi - movq %rsi, saved_rsi + movq %rsp, saved_rsp(%rip) + movq %rbp, saved_rbp(%rip) + movq %rbx, saved_rbx(%rip) + movq %rdi, saved_rdi(%rip) + movq %rsi, saved_rsi(%rip) addq $8, %rsp movl $3, %edi @@ -83,7 +84,7 @@ ENTRY(do_suspend_lowlevel) .align 4 .Lresume_point: /* We don't restore %rax, it must be 0 anyway */ - movq $saved_context, %rax + leaq saved_context(%rip), %rax movq saved_context_cr4(%rax), %rbx movq %rbx, %cr4 movq saved_context_cr3(%rax), %rbx From patchwork Tue Jul 30 19:12:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Garnier X-Patchwork-Id: 11066635 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 701631399 for ; Tue, 30 Jul 2019 19:14:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 
From patchwork Tue Jul 30 19:12:52 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11066635
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier ,
 Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" ,
 x86@kernel.org, Juergen Gross , Peter Zijlstra , Boris Ostrovsky ,
 Josh Poimboeuf , Maran Wilson , Feng Tang , linux-kernel@vger.kernel.org
Subject: [PATCH v9 08/11] x86/boot/64: Adapt assembly for PIE support
Date: Tue, 30 Jul 2019 12:12:52 -0700
Message-Id: <20190730191303.206365-9-thgarnie@chromium.org>
In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org>
References: <20190730191303.206365-1-thgarnie@chromium.org>

Change the assembly code to use only relative references to symbols so
that the kernel can be PIE compatible.

Early at boot, the kernel is mapped at a temporary address while preparing
the page table. To know the changes needed for the page table with KASLR,
the boot code calculates the difference between the expected address of the
kernel and the one chosen by KASLR. This does not work with PIE because all
symbol references in the code are relative.
Instead of the future relocated virtual address, a relative reference
yields the current temporary mapping. Those instructions were therefore
changed to use absolute 64-bit references.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Reviewed-by: Kees Cook
---
 arch/x86/kernel/head_64.S | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index f3d3e9646a99..9a3f96566eb2 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -88,8 +88,10 @@ startup_64:
 	popq	%rsi

 	/* Form the CR3 value being sure to include the CR3 modifier */
-	addq	$(early_top_pgt - __START_KERNEL_map), %rax
+	movabs	$(early_top_pgt - __START_KERNEL_map), %rcx
+	addq	%rcx, %rax
 	jmp 1f
+
 ENTRY(secondary_startup_64)
 	UNWIND_HINT_EMPTY
 	/*
@@ -118,7 +120,8 @@ ENTRY(secondary_startup_64)
 	popq	%rsi

 	/* Form the CR3 value being sure to include the CR3 modifier */
-	addq	$(init_top_pgt - __START_KERNEL_map), %rax
+	movabs	$(init_top_pgt - __START_KERNEL_map), %rcx
+	addq	%rcx, %rax
 1:

 	/* Enable PAE mode, PGE and LA57 */
@@ -136,7 +139,7 @@ ENTRY(secondary_startup_64)
 	movq	%rax, %cr3

 	/* Ensure I am executing from virtual addresses */
-	movq	$1f, %rax
+	movabs	$1f, %rax
 	ANNOTATE_RETPOLINE_SAFE
 	jmp	*%rax
 1:
@@ -233,11 +236,12 @@ ENTRY(secondary_startup_64)
 	 * REX.W + FF /5 JMP m16:64 Jump far, absolute indirect,
 	 * address given in m16:64.
 	 */
-	pushq	$.Lafter_lret	# put return address on stack for unwinder
+	movabs	$.Lafter_lret, %rax
+	pushq	%rax		# put return address on stack for unwinder
 	xorl	%ebp, %ebp	# clear frame pointer
-	movq	initial_code(%rip), %rax
+	leaq	initial_code(%rip), %rax
 	pushq	$__KERNEL_CS	# set correct cs
-	pushq	%rax		# target address in negative space
+	pushq	(%rax)		# target address in negative space
 	lretq
 .Lafter_lret:
 END(secondary_startup_64)
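The head_64.S changes go in the opposite direction from the rest of the
series: here the early boot code wants the link-time (relocated) address,
not the address it happens to run at. The stand-alone sketch below is only
an illustration of that distinction under an assumed PIE build with
relocations applied later; example_delta is an invented symbol, not part of
the patch.

	.text
	.globl	example_delta
example_delta:
	/* RIP-relative: the address of example_delta wherever this code
	 * is mapped *right now* (e.g. an early temporary mapping).
	 */
	leaq	example_delta(%rip), %rax

	/* 64-bit absolute immediate (R_X86_64_64): the link-time address,
	 * later fixed up by relocation, independent of where the code is
	 * currently executing.
	 */
	movabs	$example_delta, %rcx

	/* "where we will live" minus "where we are": the kind of delta
	 * the boot code feeds into the page-table setup.
	 */
	subq	%rax, %rcx
	ret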
From patchwork Tue Jul 30 19:12:53 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11066637
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier ,
 Pavel Machek , "Rafael J . Wysocki" , "Rafael J. Wysocki" ,
 Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" ,
 x86@kernel.org, linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v9 09/11] x86/power/64: Adapt assembly for PIE support
Date: Tue, 30 Jul 2019 12:12:53 -0700
Message-Id: <20190730191303.206365-10-thgarnie@chromium.org>
In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org>
References: <20190730191303.206365-1-thgarnie@chromium.org>

Change the assembly code to use only relative references to symbols so
that the kernel can be PIE compatible.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Acked-by: Pavel Machek
Acked-by: Rafael J. Wysocki
Reviewed-by: Kees Cook
---
 arch/x86/power/hibernate_asm_64.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S
index a4d5eb0a7ece..796cd19d575b 100644
--- a/arch/x86/power/hibernate_asm_64.S
+++ b/arch/x86/power/hibernate_asm_64.S
@@ -23,7 +23,7 @@
 #include

 ENTRY(swsusp_arch_suspend)
-	movq	$saved_context, %rax
+	leaq	saved_context(%rip), %rax
 	movq	%rsp, pt_regs_sp(%rax)
 	movq	%rbp, pt_regs_bp(%rax)
 	movq	%rsi, pt_regs_si(%rax)
@@ -114,7 +114,7 @@ ENTRY(restore_registers)
 	movq	%rax, %cr4;  # turn PGE back on

 	/* We don't restore %rax, it must be 0 anyway */
-	movq	$saved_context, %rax
+	leaq	saved_context(%rip), %rax
 	movq	pt_regs_sp(%rax), %rsp
 	movq	pt_regs_bp(%rax), %rbp
 	movq	pt_regs_si(%rax), %rsi
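The saved_context change here (and the same one in do_suspend_lowlevel in
patch 07) only swaps how the address of a symbol is materialized. The sketch
below is illustrative and not from the patch; saved_ctx is a local stand-in
for the kernel's saved_context. Both forms are 7 bytes, so the substitution
is size-neutral.

	.data
saved_ctx:			# stand-in for the kernel's saved_context
	.skip	64

	.text
	.globl	take_address
take_address:
	/* Absolute: a sign-extended 32-bit immediate holds the address
	 * (48 c7 c0 imm32, 7 bytes). Breaks once the kernel may live
	 * outside the top 2GB.
	 */
	movq	$saved_ctx, %rax

	/* RIP-relative: also 7 bytes (48 8d 05 disp32), but the address
	 * is computed from the instruction pointer, so it is PIE
	 * compatible; this is the form the patch switches to.
	 */
	leaq	saved_ctx(%rip), %rax
	ret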
From patchwork Tue Jul 30 19:12:54 2019
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 11066639
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: kristen@linux.intel.com, keescook@chromium.org, Thomas Garnier ,
 Juergen Gross , Thomas Hellstrom , "VMware, Inc." , Thomas Gleixner ,
 Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , x86@kernel.org,
 virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH v9 10/11] x86/paravirt: Adapt assembly for PIE support
Date: Tue, 30 Jul 2019 12:12:54 -0700
Message-Id: <20190730191303.206365-11-thgarnie@chromium.org>
In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org>
References: <20190730191303.206365-1-thgarnie@chromium.org>

If PIE is enabled, switch the paravirt assembly constraints to be
compatible. The %c/i constraints generate smaller code, so they are kept
by default.

Position Independent Executable (PIE) support will allow extending the
KASLR randomization range below 0xffffffff80000000.

Signed-off-by: Thomas Garnier
Acked-by: Juergen Gross
---
 arch/x86/include/asm/paravirt_types.h | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 70b654f3ffe5..fd7dc37d0010 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -338,9 +338,25 @@ extern struct paravirt_patch_template pv_ops;
 #define PARAVIRT_PATCH(x) \
 	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))

+#ifdef CONFIG_X86_PIE
+#define paravirt_opptr_call "a"
+#define paravirt_opptr_type "p"
+
+/*
+ * Alternative patching requires a maximum of 7 bytes but the relative call is
+ * only 6 bytes. If PIE is enabled, add an additional nop to the call
+ * instruction to ensure patching is possible.
+ */
+#define PARAVIRT_CALL_POST "nop;"
+#else
+#define paravirt_opptr_call "c"
+#define paravirt_opptr_type "i"
+#define PARAVIRT_CALL_POST ""
+#endif
+
 #define paravirt_type(op) \
 	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)), \
-	[paravirt_opptr] "i" (&(pv_ops.op))
+	[paravirt_opptr] paravirt_opptr_type (&(pv_ops.op))

 #define paravirt_clobber(clobber) \
 	[paravirt_clobber] "i" (clobber)
@@ -379,9 +395,10 @@ int paravirt_disable_iospace(void);
 * offset into the paravirt_patch_template structure, and can therefore be
 * freely converted back into a structure offset.
 */
-#define PARAVIRT_CALL \
-	ANNOTATE_RETPOLINE_SAFE \
-	"call *%c[paravirt_opptr];"
+#define PARAVIRT_CALL \
+	ANNOTATE_RETPOLINE_SAFE \
+	"call *%" paravirt_opptr_call "[paravirt_opptr];" \
+	PARAVIRT_CALL_POST

 /*
  * These macros are intended to wrap calls through one of the paravirt
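The constraint switch is easiest to see in the code the two variants expand
to. The sketch below is illustrative only; pv_op is a stand-in for a pv_ops
slot, and the encodings shown are the generic ones. The default "%c"/"i"
form folds the slot's absolute address into the instruction, while the PIE
"%a"/"p" form emits a RIP-relative memory operand, one byte shorter, which
is why the extra nop is appended so alternatives patching still has up to 7
bytes to work with.

	.data
pv_op:				# stand-in for a pv_ops function pointer slot
	.quad	0

	.text
call_site_sketch:
	/* Default (!CONFIG_X86_PIE) "%c"/"i" expansion: absolute indirect
	 * call, ff 14 25 + disp32 = 7 bytes, not PIE compatible.
	 */
	call	*pv_op

	/* CONFIG_X86_PIE "%a"/"p" expansion: RIP-relative indirect call,
	 * ff 15 + disp32 = 6 bytes, plus the PARAVIRT_CALL_POST nop so the
	 * call site stays 7 bytes for alternatives patching.
	 */
	call	*pv_op(%rip)
	nop
	ret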
Peter Anvin" , x86@kernel.org, Peter Zijlstra , Nadav Amit , linux-kernel@vger.kernel.org Subject: [PATCH v9 11/11] x86/alternatives: Adapt assembly for PIE support Date: Tue, 30 Jul 2019 12:12:55 -0700 Message-Id: <20190730191303.206365-12-thgarnie@chromium.org> X-Mailer: git-send-email 2.22.0.770.g0f2c4a37fd-goog In-Reply-To: <20190730191303.206365-1-thgarnie@chromium.org> References: <20190730191303.206365-1-thgarnie@chromium.org> MIME-Version: 1.0 X-Virus-Scanned: ClamAV using ClamSMTP Change the assembly options to work with pointers instead of integers. Position Independent Executable (PIE) support will allow to extend the KASLR randomization range below 0xffffffff80000000. Signed-off-by: Thomas Garnier --- arch/x86/include/asm/alternative.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index 094fbc9c0b1c..28a838106e5f 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -243,7 +243,7 @@ static inline int alternatives_text_reserved(void *start, void *end) /* Like alternative_io, but for replacing a direct call with another one. */ #define alternative_call(oldfunc, newfunc, feature, output, input...) \ asm volatile (ALTERNATIVE("call %P[old]", "call %P[new]", feature) \ - : output : [old] "i" (oldfunc), [new] "i" (newfunc), ## input) + : output : [old] "X" (oldfunc), [new] "X" (newfunc), ## input) /* * Like alternative_call, but there are two features and respective functions. @@ -256,8 +256,8 @@ static inline int alternatives_text_reserved(void *start, void *end) asm volatile (ALTERNATIVE_2("call %P[old]", "call %P[new1]", feature1,\ "call %P[new2]", feature2) \ : output, ASM_CALL_CONSTRAINT \ - : [old] "i" (oldfunc), [new1] "i" (newfunc1), \ - [new2] "i" (newfunc2), ## input) + : [old] "X" (oldfunc), [new1] "X" (newfunc1), \ + [new2] "X" (newfunc2), ## input) /* * use this macro(s) if you need more than one output parameter