From patchwork Tue Feb 18 19:58:25 2020
X-Patchwork-Id: 11389439
From: Mark Brown
Subject: [PATCH 01/18] arm64: crypto: Modernize some extra assembly annotations
Date: Tue, 18 Feb 2020 19:58:25 +0000
Message-Id: <20200218195842.34156-2-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

A couple of functions were missed in the modernisation of assembly
macros; update them too.
Signed-off-by: Mark Brown
Reviewed-by: Ard Biesheuvel
---
 arch/arm64/crypto/ghash-ce-core.S | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index 084c6a30b03a..6b958dcdf136 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -587,20 +587,20 @@ CPU_LE(	rev	w8, w8	)
  *			  struct ghash_key const *k, u64 dg[], u8 ctr[],
  *			  int rounds, u8 tag)
  */
-ENTRY(pmull_gcm_encrypt)
+SYM_FUNC_START(pmull_gcm_encrypt)
 	pmull_gcm_do_crypt	1
-ENDPROC(pmull_gcm_encrypt)
+SYM_FUNC_END(pmull_gcm_encrypt)

 /*
  * void pmull_gcm_decrypt(int blocks, u8 dst[], const u8 src[],
  *			  struct ghash_key const *k, u64 dg[], u8 ctr[],
  *			  int rounds, u8 tag)
  */
-ENTRY(pmull_gcm_decrypt)
+SYM_FUNC_START(pmull_gcm_decrypt)
 	pmull_gcm_do_crypt	0
-ENDPROC(pmull_gcm_decrypt)
+SYM_FUNC_END(pmull_gcm_decrypt)

-pmull_gcm_ghash_4x:
+SYM_FUNC_START_LOCAL(pmull_gcm_ghash_4x)
 	movi		MASK.16b, #0xe1
 	shl		MASK.2d, MASK.2d, #57

@@ -681,9 +681,9 @@ pmull_gcm_ghash_4x:
 	eor		XL.16b, XL.16b, T2.16b

 	ret
-ENDPROC(pmull_gcm_ghash_4x)
+SYM_FUNC_END(pmull_gcm_ghash_4x)

-pmull_gcm_enc_4x:
+SYM_FUNC_START_LOCAL(pmull_gcm_enc_4x)
 	ld1		{KS0.16b}, [x5]			// load upper counter
 	sub		w10, w8, #4
 	sub		w11, w8, #3
@@ -746,7 +746,7 @@ pmull_gcm_enc_4x:
 	eor		INP3.16b, INP3.16b, KS3.16b

 	ret
-ENDPROC(pmull_gcm_enc_4x)
+SYM_FUNC_END(pmull_gcm_enc_4x)

 	.section	".rodata", "a"
 	.align		6
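[Editorial note: for readers who have not met the newer linkage macros, the
conversion pattern used throughout this series looks roughly like the sketch
below. This is an illustrative fragment with made-up symbol names (my_func,
my_helper), not code taken from any of the patches. SYM_FUNC_START/SYM_FUNC_END
replace ENTRY/ENDPROC for ordinary C-callable functions, and
SYM_FUNC_START_LOCAL covers file-local helpers that previously had only a bare
label:

    /* old style */
    ENTRY(my_func)			// global, C callable
    	ret
    ENDPROC(my_func)

    my_helper:				// local helper, no annotation at all
    	ret

    /* new style */
    SYM_FUNC_START(my_func)
    	ret
    SYM_FUNC_END(my_func)

    SYM_FUNC_START_LOCAL(my_helper)	// still file-local
    	ret
    SYM_FUNC_END(my_helper)

Pairing every START with an END gives the symbol proper ELF type and size
information, and the _LOCAL variants keep helpers out of the global symbol
table.]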
From patchwork Tue Feb 18 19:58:26 2020
X-Patchwork-Id: 11389513
From: Mark Brown
Subject: [PATCH 02/18] arm64: crypto: Modernize names for AES function macros
Date: Tue, 18 Feb 2020 19:58:26 +0000
Message-Id: <20200218195842.34156-3-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

Now that the rest of the code has been converted to the modern START/END
macros the AES_ENTRY() and AES_ENDPROC() macros look out of place and like
they need updating. Rename them to AES_FUNC_START() and AES_FUNC_END() to
line up with the modern style assembly macros.

Signed-off-by: Mark Brown
Reviewed-by: Ard Biesheuvel
---
 arch/arm64/crypto/aes-ce.S    |  4 +--
 arch/arm64/crypto/aes-modes.S | 48 +++++++++++++++++------------------
 arch/arm64/crypto/aes-neon.S  |  4 +--
 3 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/crypto/aes-ce.S b/arch/arm64/crypto/aes-ce.S
index 45062553467f..1dc5bbbfeed2 100644
--- a/arch/arm64/crypto/aes-ce.S
+++ b/arch/arm64/crypto/aes-ce.S
@@ -9,8 +9,8 @@
 #include
 #include

-#define AES_ENTRY(func)		SYM_FUNC_START(ce_ ## func)
-#define AES_ENDPROC(func)	SYM_FUNC_END(ce_ ## func)
+#define AES_FUNC_START(func)	SYM_FUNC_START(ce_ ## func)
+#define AES_FUNC_END(func)	SYM_FUNC_END(ce_ ## func)

 	.arch		armv8-a+crypto

diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
index 8a2faa42b57e..cf618d8f6cec 100644
--- a/arch/arm64/crypto/aes-modes.S
+++ b/arch/arm64/crypto/aes-modes.S
@@ -51,7 +51,7 @@ SYM_FUNC_END(aes_decrypt_block5x)
  *		   int blocks)
  */

-AES_ENTRY(aes_ecb_encrypt)
+AES_FUNC_START(aes_ecb_encrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -79,10 +79,10 @@ ST5(	st1	{v4.16b}, [x0], #16	)
 .Lecbencout:
 	ldp	x29, x30, [sp], #16
 	ret
-AES_ENDPROC(aes_ecb_encrypt)
+AES_FUNC_END(aes_ecb_encrypt)

-AES_ENTRY(aes_ecb_decrypt)
+AES_FUNC_START(aes_ecb_decrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -110,7 +110,7 @@ ST5(	st1	{v4.16b}, [x0], #16	)
 .Lecbdecout:
 	ldp	x29, x30, [sp], #16
 	ret
-AES_ENDPROC(aes_ecb_decrypt)
+AES_FUNC_END(aes_ecb_decrypt)

 /*
@@ -126,7 +126,7 @@ AES_ENDPROC(aes_ecb_decrypt)
  *			 u32 const rk2[]);
  */

-AES_ENTRY(aes_essiv_cbc_encrypt)
+AES_FUNC_START(aes_essiv_cbc_encrypt)
 	ld1	{v4.16b}, [x5]			/* get iv */

 	mov	w8, #14				/* AES-256: 14 rounds */
@@ -135,7 +135,7 @@ AES_ENTRY(aes_essiv_cbc_encrypt)
 	enc_switch_key	w3, x2, x6
 	b	.Lcbcencloop4x

-AES_ENTRY(aes_cbc_encrypt)
+AES_FUNC_START(aes_cbc_encrypt)
 	ld1	{v4.16b}, [x5]			/* get iv */
 	enc_prepare	w3, x2, x6

@@ -167,10 +167,10 @@ AES_ENTRY(aes_cbc_encrypt)
 .Lcbcencout:
 	st1	{v4.16b}, [x5]			/* return iv */
 	ret
-AES_ENDPROC(aes_cbc_encrypt)
-AES_ENDPROC(aes_essiv_cbc_encrypt)
+AES_FUNC_END(aes_cbc_encrypt)
+AES_FUNC_END(aes_essiv_cbc_encrypt)

-AES_ENTRY(aes_essiv_cbc_decrypt)
+AES_FUNC_START(aes_essiv_cbc_decrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -181,7 +181,7 @@ AES_ENTRY(aes_essiv_cbc_decrypt)
 	encrypt_block	cbciv, w8, x6, x7, w9
 	b	.Lessivcbcdecstart

-AES_ENTRY(aes_cbc_decrypt)
+AES_FUNC_START(aes_cbc_decrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -238,8 +238,8 @@ ST5(	st1	{v4.16b}, [x0], #16	)
 	st1	{cbciv.16b}, [x5]		/* return iv */
 	ldp	x29, x30, [sp], #16
 	ret
-AES_ENDPROC(aes_cbc_decrypt)
-AES_ENDPROC(aes_essiv_cbc_decrypt)
+AES_FUNC_END(aes_cbc_decrypt)
+AES_FUNC_END(aes_essiv_cbc_decrypt)

 /*
@@ -249,7 +249,7 @@ AES_ENDPROC(aes_essiv_cbc_decrypt)
  *		       int rounds, int bytes, u8 const iv[])
  */

-AES_ENTRY(aes_cbc_cts_encrypt)
+AES_FUNC_START(aes_cbc_cts_encrypt)
 	adr_l	x8, .Lcts_permute_table
 	sub	x4, x4, #16
 	add	x9, x8, #32
@@ -276,9 +276,9 @@ AES_ENTRY(aes_cbc_cts_encrypt)
 	st1	{v0.16b}, [x4]			/* overlapping stores */
 	st1	{v1.16b}, [x0]
 	ret
-AES_ENDPROC(aes_cbc_cts_encrypt)
+AES_FUNC_END(aes_cbc_cts_encrypt)

-AES_ENTRY(aes_cbc_cts_decrypt)
+AES_FUNC_START(aes_cbc_cts_decrypt)
 	adr_l	x8, .Lcts_permute_table
 	sub	x4, x4, #16
 	add	x9, x8, #32
@@ -305,7 +305,7 @@ AES_ENTRY(aes_cbc_cts_decrypt)
 	st1	{v2.16b}, [x4]			/* overlapping stores */
 	st1	{v0.16b}, [x0]
 	ret
-AES_ENDPROC(aes_cbc_cts_decrypt)
+AES_FUNC_END(aes_cbc_cts_decrypt)

 	.section	".rodata", "a"
 	.align		6
@@ -324,7 +324,7 @@ AES_ENDPROC(aes_cbc_cts_decrypt)
  *		       int blocks, u8 ctr[])
  */

-AES_ENTRY(aes_ctr_encrypt)
+AES_FUNC_START(aes_ctr_encrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -409,7 +409,7 @@ ST5(	st1	{v4.16b}, [x0], #16	)
 	rev	x7, x7
 	ins	vctr.d[0], x7
 	b	.Lctrcarrydone
-AES_ENDPROC(aes_ctr_encrypt)
+AES_FUNC_END(aes_ctr_encrypt)

 /*
@@ -433,7 +433,7 @@ AES_ENDPROC(aes_ctr_encrypt)
 	uzp1	xtsmask.4s, xtsmask.4s, \tmp\().4s
 	.endm

-AES_ENTRY(aes_xts_encrypt)
+AES_FUNC_START(aes_xts_encrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -518,9 +518,9 @@ AES_ENTRY(aes_xts_encrypt)
 	st1	{v2.16b}, [x4]			/* overlapping stores */
 	mov	w4, wzr
 	b	.Lxtsencctsout
-AES_ENDPROC(aes_xts_encrypt)
+AES_FUNC_END(aes_xts_encrypt)

-AES_ENTRY(aes_xts_decrypt)
+AES_FUNC_START(aes_xts_decrypt)
 	stp	x29, x30, [sp, #-16]!
 	mov	x29, sp

@@ -612,13 +612,13 @@ AES_ENTRY(aes_xts_decrypt)
 	st1	{v2.16b}, [x4]			/* overlapping stores */
 	mov	w4, wzr
 	b	.Lxtsdecctsout
-AES_ENDPROC(aes_xts_decrypt)
+AES_FUNC_END(aes_xts_decrypt)

 /*
  * aes_mac_update(u8 const in[], u32 const rk[], int rounds,
  *		  int blocks, u8 dg[], int enc_before, int enc_after)
  */

-AES_ENTRY(aes_mac_update)
+AES_FUNC_START(aes_mac_update)
 	frame_push	6

 	mov	x19, x0
@@ -676,4 +676,4 @@ AES_ENTRY(aes_mac_update)
 	ld1	{v0.16b}, [x23]			/* get dg */
 	enc_prepare	w21, x20, x0
 	b	.Lmacloop4x
-AES_ENDPROC(aes_mac_update)
+AES_FUNC_END(aes_mac_update)

diff --git a/arch/arm64/crypto/aes-neon.S b/arch/arm64/crypto/aes-neon.S
index 247d34ddaab0..e47d3ec2cfb4 100644
--- a/arch/arm64/crypto/aes-neon.S
+++ b/arch/arm64/crypto/aes-neon.S
@@ -8,8 +8,8 @@
 #include
 #include

-#define AES_ENTRY(func)		SYM_FUNC_START(neon_ ## func)
-#define AES_ENDPROC(func)	SYM_FUNC_END(neon_ ## func)
+#define AES_FUNC_START(func)	SYM_FUNC_START(neon_ ## func)
+#define AES_FUNC_END(func)	SYM_FUNC_END(neon_ ## func)

 	xtsmask		.req	v7
 	cbciv		.req	v7
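[Editorial note: the AES_FUNC_START()/AES_FUNC_END() wrappers are only renamed
by this patch; their behaviour is unchanged. As a rough illustration of why the
wrapper exists (an example expansion, not part of the patch itself): the shared
aes-modes.S source is assembled twice, once per implementation, and the wrapper
prefixes each symbol accordingly:

    /* aes-ce.S */
    #define AES_FUNC_START(func)	SYM_FUNC_START(ce_ ## func)
    #define AES_FUNC_END(func)		SYM_FUNC_END(ce_ ## func)

    /* aes-neon.S */
    #define AES_FUNC_START(func)	SYM_FUNC_START(neon_ ## func)
    #define AES_FUNC_END(func)		SYM_FUNC_END(neon_ ## func)

    /* so in aes-modes.S, AES_FUNC_START(aes_ecb_encrypt) becomes either
     * SYM_FUNC_START(ce_aes_ecb_encrypt) or SYM_FUNC_START(neon_aes_ecb_encrypt),
     * depending on which file pulled it in. */

Renaming the wrappers to match the SYM_FUNC_* naming makes it obvious at a
glance that they expand to the modern annotations.]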
From patchwork Tue Feb 18 19:58:27 2020
X-Patchwork-Id: 11389511
From: Mark Brown
Subject: [PATCH 03/18] arm64: entry: Annotate vector table and handlers as code
Date: Tue, 18 Feb 2020 19:58:27 +0000
Message-Id: <20200218195842.34156-4-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. The vector table and handlers aren't normal C style code so
should be annotated as CODE.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry.S | 76 +++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 9461d812ae27..1454f3ea2e2e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -465,7 +465,7 @@ alternative_endif
 	.pushsection ".entry.text", "ax"

 	.align	11
-ENTRY(vectors)
+SYM_CODE_START(vectors)
 	kernel_ventry	1, sync_invalid			// Synchronous EL1t
 	kernel_ventry	1, irq_invalid			// IRQ EL1t
 	kernel_ventry	1, fiq_invalid			// FIQ EL1t
@@ -492,7 +492,7 @@ ENTRY(vectors)
 	kernel_ventry	0, fiq_invalid, 32		// FIQ 32-bit EL0
 	kernel_ventry	0, error_invalid, 32		// Error 32-bit EL0
 #endif
-END(vectors)
+SYM_CODE_END(vectors)

 #ifdef CONFIG_VMAP_STACK
 /*
@@ -534,57 +534,57 @@ __bad_stack:
 	ASM_BUG()
 	.endm

-el0_sync_invalid:
+SYM_CODE_START_LOCAL(el0_sync_invalid)
 	inv_entry 0, BAD_SYNC
-ENDPROC(el0_sync_invalid)
+SYM_CODE_END(el0_sync_invalid)

-el0_irq_invalid:
+SYM_CODE_START_LOCAL(el0_irq_invalid)
 	inv_entry 0, BAD_IRQ
-ENDPROC(el0_irq_invalid)
+SYM_CODE_END(el0_irq_invalid)

-el0_fiq_invalid:
+SYM_CODE_START_LOCAL(el0_fiq_invalid)
 	inv_entry 0, BAD_FIQ
-ENDPROC(el0_fiq_invalid)
+SYM_CODE_END(el0_fiq_invalid)

-el0_error_invalid:
+SYM_CODE_START_LOCAL(el0_error_invalid)
 	inv_entry 0, BAD_ERROR
-ENDPROC(el0_error_invalid)
+SYM_CODE_END(el0_error_invalid)

 #ifdef CONFIG_COMPAT
-el0_fiq_invalid_compat:
+SYM_CODE_START_LOCAL(el0_fiq_invalid_compat)
 	inv_entry 0, BAD_FIQ, 32
-ENDPROC(el0_fiq_invalid_compat)
+SYM_CODE_END(el0_fiq_invalid_compat)
 #endif

-el1_sync_invalid:
+SYM_CODE_START_LOCAL(el1_sync_invalid)
 	inv_entry 1, BAD_SYNC
-ENDPROC(el1_sync_invalid)
+SYM_CODE_END(el1_sync_invalid)

-el1_irq_invalid:
+SYM_CODE_START_LOCAL(el1_irq_invalid)
 	inv_entry 1, BAD_IRQ
-ENDPROC(el1_irq_invalid)
+SYM_CODE_END(el1_irq_invalid)

-el1_fiq_invalid:
+SYM_CODE_START_LOCAL(el1_fiq_invalid)
 	inv_entry 1, BAD_FIQ
-ENDPROC(el1_fiq_invalid)
+SYM_CODE_END(el1_fiq_invalid)

-el1_error_invalid:
+SYM_CODE_START_LOCAL(el1_error_invalid)
 	inv_entry 1, BAD_ERROR
-ENDPROC(el1_error_invalid)
+SYM_CODE_END(el1_error_invalid)

 /*
  * EL1 mode handlers.
  */
 	.align	6
-el1_sync:
+SYM_CODE_START_LOCAL_NOALIGN(el1_sync)
 	kernel_entry 1
 	mov	x0, sp
 	bl	el1_sync_handler
 	kernel_exit 1
-ENDPROC(el1_sync)
+SYM_CODE_END(el1_sync)

 	.align	6
-el1_irq:
+SYM_CODE_START_LOCAL_NOALIGN(el1_irq)
 	kernel_entry 1
 	gic_prio_irq_setup pmr=x20, tmp=x1
 	enable_da_f
@@ -639,42 +639,42 @@ alternative_else_nop_endif
 #endif

 	kernel_exit 1
-ENDPROC(el1_irq)
+SYM_CODE_END(el1_irq)

 /*
  * EL0 mode handlers.
  */
 	.align	6
-el0_sync:
+SYM_CODE_START_LOCAL_NOALIGN(el0_sync)
 	kernel_entry 0
 	mov	x0, sp
 	bl	el0_sync_handler
 	b	ret_to_user
-ENDPROC(el0_sync)
+SYM_CODE_END(el0_sync)

 #ifdef CONFIG_COMPAT
 	.align	6
-el0_sync_compat:
+SYM_CODE_START_LOCAL_NOALIGN(el0_sync_compat)
 	kernel_entry 0, 32
 	mov	x0, sp
 	bl	el0_sync_compat_handler
 	b	ret_to_user
-ENDPROC(el0_sync_compat)
+SYM_CODE_END(el0_sync_compat)

 	.align	6
-el0_irq_compat:
+SYM_CODE_START_LOCAL_NOALIGN(el0_irq_compat)
 	kernel_entry 0, 32
 	b	el0_irq_naked
-ENDPROC(el0_irq_compat)
+SYM_CODE_END(el0_irq_compat)

-el0_error_compat:
+SYM_CODE_START_LOCAL_NOALIGN(el0_error_compat)
 	kernel_entry 0, 32
 	b	el0_error_naked
-ENDPROC(el0_error_compat)
+SYM_CODE_END(el0_error_compat)
 #endif

 	.align	6
-el0_irq:
+SYM_CODE_START_LOCAL_NOALIGN(el0_irq)
 	kernel_entry 0
 el0_irq_naked:
 	gic_prio_irq_setup pmr=x20, tmp=x0
@@ -696,9 +696,9 @@ el0_irq_naked:
 	bl	trace_hardirqs_on
 #endif
 	b	ret_to_user
-ENDPROC(el0_irq)
+SYM_CODE_END(el0_irq)

-el1_error:
+SYM_CODE_START_LOCAL(el1_error)
 	kernel_entry 1
 	mrs	x1, esr_el1
 	gic_prio_kentry_setup tmp=x2
@@ -706,9 +706,9 @@ el1_error:
 	mov	x0, sp
 	bl	do_serror
 	kernel_exit 1
-ENDPROC(el1_error)
+SYM_CODE_END(el1_error)

-el0_error:
+SYM_CODE_START_LOCAL(el0_error)
 	kernel_entry 0
 el0_error_naked:
 	mrs	x25, esr_el1
@@ -720,7 +720,7 @@ el0_error_naked:
 	bl	do_serror
 	enable_da_f
 	b	ret_to_user
-ENDPROC(el0_error)
+SYM_CODE_END(el0_error)

 /*
  * Ok, we need to do extra processing, enter the slow path.
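[Editorial note: the distinction drawn in this and the following patches is
between symbols that are called like C functions and symbols that are not.
A minimal sketch, with hypothetical names (calc, el1_irq_example, some_handler)
rather than code from the patch:

    SYM_FUNC_START(calc)		// obeys the AAPCS: callable from C,
    	add	x0, x0, x1		// returns via lr
    	ret
    SYM_FUNC_END(calc)

    SYM_CODE_START_LOCAL(el1_irq_example)	// entered by the hardware via the
    	kernel_entry 1				// vector table, never called from C;
    	b	some_handler			// no AAPCS guarantees at all
    SYM_CODE_END(el1_irq_example)

Annotating the vector table and handlers as CODE rather than FUNC keeps tooling
that assumes AAPCS behaviour from drawing wrong conclusions about them.]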
From patchwork Tue Feb 18 19:58:28 2020
X-Patchwork-Id: 11389509
From: Mark Brown
Subject: [PATCH 04/18] arm64: entry: Annotate ret_from_fork as code
Date: Tue, 18 Feb 2020 19:58:28 +0000
Message-Id: <20200218195842.34156-5-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. ret_from_fork is not a normal C function and should therefore
be annotated as code.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 1454f3ea2e2e..d535cb8a7413 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -902,14 +902,14 @@ NOKPROBE(cpu_switch_to)
 /*
  * This is how we return from a fork.
  */
-ENTRY(ret_from_fork)
+SYM_CODE_START(ret_from_fork)
 	bl	schedule_tail
 	cbz	x19, 1f				// not a kernel thread
 	mov	x0, x20
 	blr	x19
 1:	get_current_task tsk
 	b	ret_to_user
-ENDPROC(ret_from_fork)
+SYM_CODE_END(ret_from_fork)
 NOKPROBE(ret_from_fork)

 #ifdef CONFIG_ARM_SDE_INTERFACE
From patchwork Tue Feb 18 19:58:29 2020
X-Patchwork-Id: 11389507
From: Mark Brown
Subject: [PATCH 05/18] arm64: entry: Additional annotation conversions for entry.S
Date: Tue, 18 Feb 2020 19:58:29 +0000
Message-Id: <20200218195842.34156-6-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel new macros have been introduced. These replace ENTRY and
ENDPROC with separate annotations for standard C callable functions, data
and code with different calling conventions. Update the remaining
annotations in the entry.S code to the new macros.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry.S | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d535cb8a7413..fbf69fe94412 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -832,7 +832,7 @@ alternative_else_nop_endif
 	.endm

 	.align	11
-ENTRY(tramp_vectors)
+SYM_CODE_START_NOALIGN(tramp_vectors)
 	.space	0x400

 	tramp_ventry
@@ -844,15 +844,15 @@ ENTRY(tramp_vectors)
 	tramp_ventry	32
 	tramp_ventry	32
 	tramp_ventry	32
-END(tramp_vectors)
+SYM_CODE_END(tramp_vectors)

-ENTRY(tramp_exit_native)
+SYM_CODE_START(tramp_exit_native)
 	tramp_exit
-END(tramp_exit_native)
+SYM_CODE_END(tramp_exit_native)

-ENTRY(tramp_exit_compat)
+SYM_CODE_START(tramp_exit_compat)
 	tramp_exit	32
-END(tramp_exit_compat)
+SYM_CODE_END(tramp_exit_compat)

 	.ltorg
 	.popsection				// .entry.tramp.text
@@ -874,7 +874,7 @@ __entry_tramp_data_start:
  * Previous and next are guaranteed not to be the same.
  *
  */
-ENTRY(cpu_switch_to)
+SYM_FUNC_START(cpu_switch_to)
 	mov	x10, #THREAD_CPU_CONTEXT
 	add	x8, x0, x10
 	mov	x9, sp
@@ -896,7 +896,7 @@ ENTRY(cpu_switch_to)
 	mov	sp, x9
 	msr	sp_el0, x1
 	ret
-ENDPROC(cpu_switch_to)
+SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)

 /*

From patchwork Tue Feb 18 19:58:30 2020
X-Patchwork-Id: 11389505
From: Mark Brown
Subject: [PATCH 06/18] arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
Date: Tue, 18 Feb 2020 19:58:30 +0000
Message-Id: <20200218195842.34156-7-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel new macros have been introduced. These replace ENTRY and
ENDPROC and also add a new annotation for static functions which previously
had no ENTRY equivalent. Update the annotations in the core kernel code to
the new macros.
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry-ftrace.S | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index 7d02f9966d34..3d32b6d325d7 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -91,11 +91,11 @@ ENTRY(ftrace_common)
 	ldr_l	x2, function_trace_op		// op
 	mov	x3, sp				// regs

-GLOBAL(ftrace_call)
+SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
 	bl	ftrace_stub

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-GLOBAL(ftrace_graph_call)		// ftrace_graph_caller();
+SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
 	nop				// If enabled, this will be replaced
 					// "b ftrace_graph_caller"
 #endif
@@ -218,7 +218,7 @@ ENDPROC(ftrace_graph_caller)
  *     - tracer function to probe instrumented function's entry,
  *     - ftrace_graph_caller to set up an exit hook
  */
-ENTRY(_mcount)
+SYM_FUNC_START(_mcount)
 	mcount_enter

 	ldr_l	x2, ftrace_trace_function
@@ -242,7 +242,7 @@ skip_ftrace_call:			// }
 	b.ne	ftrace_graph_caller		// ftrace_graph_caller();
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 	mcount_exit
-ENDPROC(_mcount)
+SYM_FUNC_END(_mcount)
 EXPORT_SYMBOL(_mcount)
 NOKPROBE(_mcount)

@@ -253,9 +253,9 @@ NOKPROBE(_mcount)
  * and later on, NOP to branch to ftrace_caller() when enabled or branch to
  * NOP when disabled per-function base.
  */
-ENTRY(_mcount)
+SYM_FUNC_START(_mcount)
 	ret
-ENDPROC(_mcount)
+SYM_FUNC_END(_mcount)
 EXPORT_SYMBOL(_mcount)
 NOKPROBE(_mcount)

@@ -268,24 +268,24 @@ NOKPROBE(_mcount)
  *     - tracer function to probe instrumented function's entry,
  *     - ftrace_graph_caller to set up an exit hook
  */
-ENTRY(ftrace_caller)
+SYM_FUNC_START(ftrace_caller)
 	mcount_enter

 	mcount_get_pc0	x0		// function's pc
 	mcount_get_lr	x1		// function's lr

-GLOBAL(ftrace_call)			// tracer(pc, lr);
+SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)	// tracer(pc, lr);
 	nop				// This will be replaced with "bl xxx"
 					// where xxx can be any kind of tracer.

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-GLOBAL(ftrace_graph_call)		// ftrace_graph_caller();
+SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
 	nop				// If enabled, this will be replaced
 					// "b ftrace_graph_caller"
 #endif

 	mcount_exit
-ENDPROC(ftrace_caller)
+SYM_FUNC_END(ftrace_caller)
 #endif /* CONFIG_DYNAMIC_FTRACE */

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -298,20 +298,20 @@ ENDPROC(ftrace_caller)
  * the call stack in order to intercept instrumented function's return path
  * and run return_to_handler() later on its exit.
  */
-ENTRY(ftrace_graph_caller)
+SYM_FUNC_START(ftrace_graph_caller)
 	mcount_get_pc		  x0	//     function's pc
 	mcount_get_lr_addr	  x1	//     pointer to function's saved lr
 	mcount_get_parent_fp	  x2	//     parent's fp
 	bl	prepare_ftrace_return	// prepare_ftrace_return(pc, &lr, fp)

 	mcount_exit
-ENDPROC(ftrace_graph_caller)
+SYM_FUNC_END(ftrace_graph_caller)
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */

-ENTRY(ftrace_stub)
+SYM_FUNC_START(ftrace_stub)
 	ret
-ENDPROC(ftrace_stub)
+SYM_FUNC_END(ftrace_stub)

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
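[Editorial note: SYM_INNER_LABEL() is the replacement for the old GLOBAL()
helper: it marks a position inside an already-annotated block of code, which is
exactly what the ftrace patch sites need. A minimal sketch with hypothetical
names (ftrace_caller_sketch, ftrace_call_site), not code from the patch:

    SYM_CODE_START(ftrace_caller_sketch)
    	nop
    SYM_INNER_LABEL(ftrace_call_site, SYM_L_GLOBAL)	// label inside the block,
    	nop						// patched at runtime to a bl
    	ret
    SYM_CODE_END(ftrace_caller_sketch)

The second argument selects the linkage (SYM_L_GLOBAL or SYM_L_LOCAL), so the
label stays visible to the code that rewrites the nop without being treated as
a separate function.]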
From patchwork Tue Feb 18 19:58:31 2020
X-Patchwork-Id: 11389443
From: Mark Brown
Subject: [PATCH 07/18] arm64: ftrace: Correct annotation of ftrace_caller assembly
Date: Tue, 18 Feb 2020 19:58:31 +0000
Message-Id: <20200218195842.34156-8-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. The patchable function entry versions of ftrace_*_caller don't
follow the usual AAPCS rules, pushing things onto the stack which they don't
clean up, and therefore should be annotated as code rather than functions.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry-ftrace.S | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index 3d32b6d325d7..baf5a20a5566 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -75,17 +75,17 @@
 	add	x29, sp, #S_STACKFRAME
 	.endm

-ENTRY(ftrace_regs_caller)
+SYM_CODE_START(ftrace_regs_caller)
 	ftrace_regs_entry	1
 	b	ftrace_common
-ENDPROC(ftrace_regs_caller)
+SYM_CODE_END(ftrace_regs_caller)

-ENTRY(ftrace_caller)
+SYM_CODE_START(ftrace_caller)
 	ftrace_regs_entry	0
 	b	ftrace_common
-ENDPROC(ftrace_caller)
+SYM_CODE_END(ftrace_caller)

-ENTRY(ftrace_common)
+SYM_CODE_START(ftrace_common)
 	sub	x0, x30, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
 	mov	x1, x9				// parent_ip (callsite's LR)
 	ldr_l	x2, function_trace_op		// op
@@ -122,17 +122,17 @@ ftrace_common_return:
 	add	sp, sp, #S_FRAME_SIZE + 16

 	ret	x9
-ENDPROC(ftrace_common)
+SYM_CODE_END(ftrace_common)

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-ENTRY(ftrace_graph_caller)
+SYM_CODE_START(ftrace_graph_caller)
 	ldr	x0, [sp, #S_PC]
 	sub	x0, x0, #AARCH64_INSN_SIZE	// ip (callsite's BL insn)
 	add	x1, sp, #S_LR			// parent_ip (callsite's LR)
 	ldr	x2, [sp, #S_FRAME_SIZE]		// parent fp (callsite's FP)
 	bl	prepare_ftrace_return
 	b	ftrace_common_return
-ENDPROC(ftrace_graph_caller)
+SYM_CODE_END(ftrace_graph_caller)
 #endif

 #else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
From patchwork Tue Feb 18 19:58:32 2020
X-Patchwork-Id: 11389501
From: Mark Brown
Subject: [PATCH 08/18] arm64: ftrace: Modernise annotation of return_to_handler
Date: Tue, 18 Feb 2020 19:58:32 +0000
Message-Id: <20200218195842.34156-9-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. return_to_handler does entertaining things with LR so doesn't
follow the usual C conventions and should therefore be annotated as code
rather than a function.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry-ftrace.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index baf5a20a5566..820101821ac4 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -320,7 +320,7 @@ SYM_FUNC_END(ftrace_stub)
  * Run ftrace_return_to_handler() before going back to parent.
  * @fp is checked against the value passed by ftrace_graph_caller().
  */
-ENTRY(return_to_handler)
+SYM_CODE_START(return_to_handler)
 	/* save return value regs */
 	sub	sp, sp, #64
 	stp	x0, x1, [sp]
@@ -340,5 +340,5 @@ ENTRY(return_to_handler)
 	add	sp, sp, #64

 	ret
-END(return_to_handler)
+SYM_CODE_END(return_to_handler)
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
From patchwork Tue Feb 18 19:58:33 2020
X-Patchwork-Id: 11389445
From: Mark Brown
Subject: [PATCH 09/18] arm64: head.S: Convert to modern annotations for assembly functions
Date: Tue, 18 Feb 2020 19:58:33 +0000
Message-Id: <20200218195842.34156-10-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel new macros have been introduced. These replace ENTRY and
ENDPROC and also add a new annotation for static functions which previously
had no ENTRY equivalent. Update the annotations in the core kernel code to
the new macros.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/head.S | 56 ++++++++++++++++++++--------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 989b1944cb71..716c946c98e9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -275,7 +275,7 @@ ENDPROC(preserve_boot_args)
  *   - first few MB of the kernel linear mapping to jump to once the MMU has
  *     been enabled
  */
-__create_page_tables:
+SYM_FUNC_START_LOCAL(__create_page_tables)
 	mov	x28, lr

 	/*
@@ -403,7 +403,7 @@ __create_page_tables:
 	bl	__inval_dcache_area

 	ret	x28
-ENDPROC(__create_page_tables)
+SYM_FUNC_END(__create_page_tables)
 	.ltorg

 /*
@@ -411,7 +411,7 @@ ENDPROC(__create_page_tables)
  *
  *   x0 = __PHYS_OFFSET
  */
-__primary_switched:
+SYM_FUNC_START_LOCAL(__primary_switched)
 	adrp	x4, init_thread_union
 	add	sp, x4, #THREAD_SIZE
 	adr_l	x5, init_task
@@ -456,7 +456,7 @@ __primary_switched:
 	mov	x29, #0
 	mov	x30, #0
 	b	start_kernel
-ENDPROC(__primary_switched)
+SYM_FUNC_END(__primary_switched)

 /*
  * end early head section, begin head code that is also used for
@@ -475,7 +475,7 @@ EXPORT_SYMBOL(kimage_vaddr)
  * Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in w0 if
  * booted in EL1 or EL2 respectively.
  */
-ENTRY(el2_setup)
+SYM_FUNC_START(el2_setup)
 	msr	SPsel, #1			// We want to use SP_EL{1,2}
 	mrs	x0, CurrentEL
 	cmp	x0, #CurrentEL_EL2
@@ -636,13 +636,13 @@ install_el2_stub:
 	msr	elr_el2, lr
 	mov	w0, #BOOT_CPU_MODE_EL2		// This CPU booted in EL2
 	eret
-ENDPROC(el2_setup)
+SYM_FUNC_END(el2_setup)

 /*
  * Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
  * in w0. See arch/arm64/include/asm/virt.h for more info.
  */
-set_cpu_boot_mode_flag:
+SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
 	adr_l	x1, __boot_cpu_mode
 	cmp	w0, #BOOT_CPU_MODE_EL2
 	b.ne	1f
@@ -651,7 +651,7 @@ set_cpu_boot_mode_flag:
 	dmb	sy
 	dc	ivac, x1			// Invalidate potentially stale cache line
 	ret
-ENDPROC(set_cpu_boot_mode_flag)
+SYM_FUNC_END(set_cpu_boot_mode_flag)

 /*
  * These values are written with the MMU off, but read with the MMU on.
@@ -683,7 +683,7 @@ ENTRY(__early_cpu_boot_status)
  * This provides a "holding pen" for platforms to hold all secondary
  * cores are held until we're ready for them to initialise.
  */
-ENTRY(secondary_holding_pen)
+SYM_FUNC_START(secondary_holding_pen)
 	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
 	bl	set_cpu_boot_mode_flag
 	mrs	x0, mpidr_el1
@@ -695,19 +695,19 @@ pen:	ldr	x4, [x3]
 	b.eq	secondary_startup
 	wfe
 	b	pen
-ENDPROC(secondary_holding_pen)
+SYM_FUNC_END(secondary_holding_pen)

 /*
  * Secondary entry point that jumps straight into the kernel. Only to
  * be used where CPUs are brought online dynamically by the kernel.
  */
-ENTRY(secondary_entry)
+SYM_FUNC_START(secondary_entry)
 	bl	el2_setup			// Drop to EL1
 	bl	set_cpu_boot_mode_flag
 	b	secondary_startup
-ENDPROC(secondary_entry)
+SYM_FUNC_END(secondary_entry)

-secondary_startup:
+SYM_FUNC_START_LOCAL(secondary_startup)
 	/*
 	 * Common entry point for secondary CPUs.
 	 */
@@ -717,9 +717,9 @@ secondary_startup:
 	bl	__enable_mmu
 	ldr	x8, =__secondary_switched
 	br	x8
-ENDPROC(secondary_startup)
+SYM_FUNC_END(secondary_startup)

-__secondary_switched:
+SYM_FUNC_START_LOCAL(__secondary_switched)
 	adr_l	x5, vectors
 	msr	vbar_el1, x5
 	isb
@@ -734,13 +734,13 @@ __secondary_switched:
 	mov	x29, #0
 	mov	x30, #0
 	b	secondary_start_kernel
-ENDPROC(__secondary_switched)
+SYM_FUNC_END(__secondary_switched)

-__secondary_too_slow:
+SYM_FUNC_START_LOCAL(__secondary_too_slow)
 	wfe
 	wfi
 	b	__secondary_too_slow
-ENDPROC(__secondary_too_slow)
+SYM_FUNC_END(__secondary_too_slow)

 /*
  * The booting CPU updates the failed status @__early_cpu_boot_status,
@@ -772,7 +772,7 @@ ENDPROC(__secondary_too_slow)
  * Checks if the selected granule size is supported by the CPU.
  * If it isn't, park the CPU
  */
-ENTRY(__enable_mmu)
+SYM_FUNC_START(__enable_mmu)
 	mrs	x2, ID_AA64MMFR0_EL1
 	ubfx	x2, x2, #ID_AA64MMFR0_TGRAN_SHIFT, 4
 	cmp	x2, #ID_AA64MMFR0_TGRAN_SUPPORTED
@@ -796,9 +796,9 @@ ENTRY(__enable_mmu)
 	dsb	nsh
 	isb
 	ret
-ENDPROC(__enable_mmu)
+SYM_FUNC_END(__enable_mmu)

-ENTRY(__cpu_secondary_check52bitva)
+SYM_FUNC_START(__cpu_secondary_check52bitva)
 #ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l	x0, vabits_actual
 	cmp	x0, #52
@@ -816,9 +816,9 @@ ENTRY(__cpu_secondary_check52bitva)
 #endif
 2:	ret
-ENDPROC(__cpu_secondary_check52bitva)
+SYM_FUNC_END(__cpu_secondary_check52bitva)

-__no_granule_support:
+SYM_FUNC_START_LOCAL(__no_granule_support)
 	/* Indicate that this CPU can't boot and is stuck in the kernel */
 	update_early_cpu_boot_status \
 		CPU_STUCK_IN_KERNEL | CPU_STUCK_REASON_NO_GRAN, x1, x2
@@ -826,10 +826,10 @@ __no_granule_support:
 	wfe
 	wfi
 	b	1b
-ENDPROC(__no_granule_support)
+SYM_FUNC_END(__no_granule_support)

 #ifdef CONFIG_RELOCATABLE
-__relocate_kernel:
+SYM_FUNC_START_LOCAL(__relocate_kernel)
 	/*
 	 * Iterate over each entry in the relocation table, and apply the
 	 * relocations in place.
@@ -931,10 +931,10 @@ __relocate_kernel:
 #endif
 	ret
-ENDPROC(__relocate_kernel)
+SYM_FUNC_END(__relocate_kernel)
 #endif

-__primary_switch:
+SYM_FUNC_START_LOCAL(__primary_switch)
 #ifdef CONFIG_RANDOMIZE_BASE
 	mov	x19, x0				// preserve new SCTLR_EL1 value
 	mrs	x20, sctlr_el1			// preserve old SCTLR_EL1 value
@@ -977,4 +977,4 @@ __primary_switch:
 	ldr	x8, =__primary_switched
 	adrp	x0, __PHYS_OFFSET
 	br	x8
-ENDPROC(__primary_switch)
+SYM_FUNC_END(__primary_switch)

From patchwork Tue Feb 18 19:58:34 2020
X-Patchwork-Id: 11389503
From: Mark Brown
Subject: [PATCH 10/18] arm64: head: Annotate stext and preserve_boot_args as code
Date: Tue, 18 Feb 2020 19:58:34 +0000
Message-Id: <20200218195842.34156-11-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
new macros have been introduced. These replace ENTRY and ENDPROC with two
different annotations for normal functions and those with unusual calling
conventions. Neither stext nor preserve_boot_args is called with the usual
AAPCS calling conventions and they should therefore be annotated as code.

Signed-off-by: Mark Brown
---
 arch/arm64/kernel/head.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 716c946c98e9..c334863991e7 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -105,7 +105,7 @@ pe_header:
  *  x24        __primary_switch() .. relocate_kernel()
  *                                    current RELR displacement
  */
-ENTRY(stext)
+SYM_CODE_START(stext)
 	bl	preserve_boot_args
 	bl	el2_setup			// Drop to EL1, w0=cpu_boot_mode
 	adrp	x23, __PHYS_OFFSET
@@ -120,12 +120,12 @@ ENTRY(stext)
 	 */
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
-ENDPROC(stext)
+SYM_CODE_END(stext)

 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
  */
-preserve_boot_args:
+SYM_CODE_START_LOCAL(preserve_boot_args)
 	mov	x21, x0				// x21=FDT

 	adr_l	x0, boot_args			// record the contents of
@@ -137,7 +137,7 @@ preserve_boot_args:
 	mov	x1, #0x20			// 4 x 8 bytes
 	b	__inval_dcache_area		// tail call
-ENDPROC(preserve_boot_args)
+SYM_CODE_END(preserve_boot_args)

 /*
  * Macro to create a table entry to the next page.

From patchwork Tue Feb 18 19:58:35 2020
X-Patchwork-Id: 11389447
From: Mark Brown
Subject: [PATCH 11/18] arm64: kernel: Convert to modern annotations for assembly data
Date: Tue, 18 Feb 2020 19:58:35 +0000
Message-Id: <20200218195842.34156-12-broonie@kernel.org>
In-Reply-To: <20200218195842.34156-1-broonie@kernel.org>

In an effort to clarify and simplify the annotation of assembly functions
in the kernel new macros have been introduced. These include specific
annotations for the start and end of data; update symbols for data to use
these.
Signed-off-by: Mark Brown
---
 arch/arm64/kernel/entry.S | 7 ++++---
 arch/arm64/kernel/head.S  | 9 ++++++---
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index fbf69fe94412..7439f29946fb 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -859,9 +859,9 @@ SYM_CODE_END(tramp_exit_compat)
 #ifdef CONFIG_RANDOMIZE_BASE
 	.pushsection ".rodata", "a"
 	.align PAGE_SHIFT
-	.globl	__entry_tramp_data_start
-__entry_tramp_data_start:
+SYM_DATA_START(__entry_tramp_data_start)
 	.quad	vectors
+SYM_DATA_END(__entry_tramp_data_start)
 	.popsection				// .rodata
 #endif /* CONFIG_RANDOMIZE_BASE */
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
@@ -983,8 +983,9 @@ NOKPROBE(__sdei_asm_exit_trampoline)
 .popsection		// .entry.tramp.text
 #ifdef CONFIG_RANDOMIZE_BASE
 .pushsection ".rodata", "a"
-__sdei_asm_trampoline_next_handler:
+SYM_DATA_START(__sdei_asm_trampoline_next_handler)
 	.quad	__sdei_asm_handler
+SYM_DATA_END(__sdei_asm_trampoline_next_handler)
 .popsection		// .rodata
 #endif /* CONFIG_RANDOMIZE_BASE */
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c334863991e7..a06727354fad 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -464,8 +464,9 @@ SYM_FUNC_END(__primary_switched)
  */
 	.section ".idmap.text","awx"

-ENTRY(kimage_vaddr)
+SYM_DATA_START(kimage_vaddr)
 	.quad		_text - TEXT_OFFSET
+SYM_DATA_END(kimage_vaddr)
 EXPORT_SYMBOL(kimage_vaddr)

 /*
@@ -667,15 +668,17 @@ SYM_FUNC_END(set_cpu_boot_mode_flag)
  * This is not in .bss, because we set it sufficiently early that the boot-time
  * zeroing of .bss would clobber it.
  */
-ENTRY(__boot_cpu_mode)
+SYM_DATA_START(__boot_cpu_mode)
 	.long	BOOT_CPU_MODE_EL2
 	.long	BOOT_CPU_MODE_EL1
+SYM_DATA_END(__boot_cpu_mode)

 /*
  * The booting CPU updates the failed status @__early_cpu_boot_status,
  * with MMU turned off.
  */
-ENTRY(__early_cpu_boot_status)
+SYM_DATA_START(__early_cpu_boot_status)
 	.quad 	0
+SYM_DATA_END(__early_cpu_boot_status)

 	.popsection
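[Editorial note: SYM_DATA_START()/SYM_DATA_END() are the data counterparts of
the function annotations: they mark the symbol as an object and, via the END
marker, record its size, rather than reusing ENTRY(), which was written for
code. A minimal sketch with a hypothetical symbol name, not code from the
patch:

    SYM_DATA_START(boot_flags_example)
    	.quad	0
    	.quad	0
    SYM_DATA_END(boot_flags_example)

    SYM_DATA(zero_word_example, .quad 0)	// one-line shorthand for a single object

Either form helps tools tell data apart from code when they look at the symbol
table.]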
Signed-off-by: Mark Brown --- arch/arm64/kernel/cpu-reset.S | 4 +- arch/arm64/kernel/efi-entry.S | 4 +- arch/arm64/kernel/efi-rt-wrapper.S | 4 +- arch/arm64/kernel/entry-fpsimd.S | 20 ++++----- arch/arm64/kernel/hibernate-asm.S | 16 +++---- arch/arm64/kernel/hyp-stub.S | 20 ++++----- arch/arm64/kernel/probes/kprobes_trampoline.S | 4 +- arch/arm64/kernel/reloc_test_syms.S | 44 +++++++++---------- arch/arm64/kernel/relocate_kernel.S | 4 +- arch/arm64/kernel/sleep.S | 12 ++--- arch/arm64/kernel/smccc-call.S | 8 ++-- 11 files changed, 70 insertions(+), 70 deletions(-) diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S index 32c7bf858dd9..4033e2efa8cf 100644 --- a/arch/arm64/kernel/cpu-reset.S +++ b/arch/arm64/kernel/cpu-reset.S @@ -29,7 +29,7 @@ * branch to what would be the reset vector. It must be executed with the * flat identity mapping. */ -ENTRY(__cpu_soft_restart) +SYM_FUNC_START(__cpu_soft_restart) /* Clear sctlr_el1 flags. */ mrs x12, sctlr_el1 ldr x13, =SCTLR_ELx_FLAGS @@ -47,6 +47,6 @@ ENTRY(__cpu_soft_restart) mov x1, x3 // arg1 mov x2, x4 // arg2 br x8 -ENDPROC(__cpu_soft_restart) +SYM_FUNC_END(__cpu_soft_restart) .popsection diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S index 304d5b02ca67..de6ced92950e 100644 --- a/arch/arm64/kernel/efi-entry.S +++ b/arch/arm64/kernel/efi-entry.S @@ -25,7 +25,7 @@ * we want to be. The kernel image wants to be placed at TEXT_OFFSET * from start of RAM. */ -ENTRY(entry) +SYM_CODE_START(entry) /* * Create a stack frame to save FP/LR with extra space * for image_addr variable passed to efi_entry(). @@ -117,4 +117,4 @@ efi_load_fail: ret entry_end: -ENDPROC(entry) +SYM_CODE_END(entry) diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S index 3fc71106cb2b..1192c4bb48df 100644 --- a/arch/arm64/kernel/efi-rt-wrapper.S +++ b/arch/arm64/kernel/efi-rt-wrapper.S @@ -5,7 +5,7 @@ #include -ENTRY(__efi_rt_asm_wrapper) +SYM_FUNC_START(__efi_rt_asm_wrapper) stp x29, x30, [sp, #-32]! mov x29, sp @@ -35,4 +35,4 @@ ENTRY(__efi_rt_asm_wrapper) b.ne 0f ret 0: b efi_handle_corrupted_x18 // tail call -ENDPROC(__efi_rt_asm_wrapper) +SYM_FUNC_END(__efi_rt_asm_wrapper) diff --git a/arch/arm64/kernel/entry-fpsimd.S b/arch/arm64/kernel/entry-fpsimd.S index 0f24eae8f3cc..f880dd63ddc3 100644 --- a/arch/arm64/kernel/entry-fpsimd.S +++ b/arch/arm64/kernel/entry-fpsimd.S @@ -16,34 +16,34 @@ * * x0 - pointer to struct fpsimd_state */ -ENTRY(fpsimd_save_state) +SYM_FUNC_START(fpsimd_save_state) fpsimd_save x0, 8 ret -ENDPROC(fpsimd_save_state) +SYM_FUNC_END(fpsimd_save_state) /* * Load the FP registers. 
* * x0 - pointer to struct fpsimd_state */ -ENTRY(fpsimd_load_state) +SYM_FUNC_START(fpsimd_load_state) fpsimd_restore x0, 8 ret -ENDPROC(fpsimd_load_state) +SYM_FUNC_END(fpsimd_load_state) #ifdef CONFIG_ARM64_SVE -ENTRY(sve_save_state) +SYM_FUNC_START(sve_save_state) sve_save 0, x1, 2 ret -ENDPROC(sve_save_state) +SYM_FUNC_END(sve_save_state) -ENTRY(sve_load_state) +SYM_FUNC_START(sve_load_state) sve_load 0, x1, x2, 3, x4 ret -ENDPROC(sve_load_state) +SYM_FUNC_END(sve_load_state) -ENTRY(sve_get_vl) +SYM_FUNC_START(sve_get_vl) _sve_rdvl 0, 1 ret -ENDPROC(sve_get_vl) +SYM_FUNC_END(sve_get_vl) #endif /* CONFIG_ARM64_SVE */ diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S index 38bcd4d4e43b..2fad0ec846e3 100644 --- a/arch/arm64/kernel/hibernate-asm.S +++ b/arch/arm64/kernel/hibernate-asm.S @@ -65,7 +65,7 @@ * x5: physical address of a zero page that remains zero after resume */ .pushsection ".hibernate_exit.text", "ax" -ENTRY(swsusp_arch_suspend_exit) +SYM_CODE_START(swsusp_arch_suspend_exit) /* * We execute from ttbr0, change ttbr1 to our copied linear map tables * with a break-before-make via the zero page @@ -112,7 +112,7 @@ ENTRY(swsusp_arch_suspend_exit) 3: ret .ltorg -ENDPROC(swsusp_arch_suspend_exit) +SYM_CODE_END(swsusp_arch_suspend_exit) /* * Restore the hyp stub. @@ -121,15 +121,15 @@ ENDPROC(swsusp_arch_suspend_exit) * * x24: The physical address of __hyp_stub_vectors */ -el1_sync: +SYM_CODE_START_LOCAL(el1_sync) msr vbar_el2, x24 eret -ENDPROC(el1_sync) +SYM_CODE_END(el1_sync) .macro invalid_vector label -\label: +SYM_CODE_START_LOCAL(\label) b \label -ENDPROC(\label) +SYM_CODE_END(\label) .endm invalid_vector el2_sync_invalid @@ -143,7 +143,7 @@ ENDPROC(\label) /* el2 vectors - switch el2 here while we restore the memory image. */ .align 11 -ENTRY(hibernate_el2_vectors) +SYM_CODE_START(hibernate_el2_vectors) ventry el2_sync_invalid // Synchronous EL2t ventry el2_irq_invalid // IRQ EL2t ventry el2_fiq_invalid // FIQ EL2t @@ -163,6 +163,6 @@ ENTRY(hibernate_el2_vectors) ventry el1_irq_invalid // IRQ 32-bit EL1 ventry el1_fiq_invalid // FIQ 32-bit EL1 ventry el1_error_invalid // Error 32-bit EL1 -END(hibernate_el2_vectors) +SYM_CODE_END(hibernate_el2_vectors) .popsection diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S index 73d46070b315..b095a42daa77 100644 --- a/arch/arm64/kernel/hyp-stub.S +++ b/arch/arm64/kernel/hyp-stub.S @@ -21,7 +21,7 @@ .align 11 -ENTRY(__hyp_stub_vectors) +SYM_CODE_START(__hyp_stub_vectors) ventry el2_sync_invalid // Synchronous EL2t ventry el2_irq_invalid // IRQ EL2t ventry el2_fiq_invalid // FIQ EL2t @@ -41,11 +41,11 @@ ENTRY(__hyp_stub_vectors) ventry el1_irq_invalid // IRQ 32-bit EL1 ventry el1_fiq_invalid // FIQ 32-bit EL1 ventry el1_error_invalid // Error 32-bit EL1 -ENDPROC(__hyp_stub_vectors) +SYM_CODE_END(__hyp_stub_vectors) .align 11 -el1_sync: +SYM_CODE_START_LOCAL(el1_sync) cmp x0, #HVC_SET_VECTORS b.ne 2f msr vbar_el2, x1 @@ -68,12 +68,12 @@ el1_sync: 9: mov x0, xzr eret -ENDPROC(el1_sync) +SYM_CODE_END(el1_sync) .macro invalid_vector label -\label: +SYM_CODE_START_LOCAL(\label) b \label -ENDPROC(\label) +SYM_CODE_END(\label) .endm invalid_vector el2_sync_invalid @@ -106,15 +106,15 @@ ENDPROC(\label) * initialisation entry point. 
*/ -ENTRY(__hyp_set_vectors) +SYM_FUNC_START(__hyp_set_vectors) mov x1, x0 mov x0, #HVC_SET_VECTORS hvc #0 ret -ENDPROC(__hyp_set_vectors) +SYM_FUNC_END(__hyp_set_vectors) -ENTRY(__hyp_reset_vectors) +SYM_FUNC_START(__hyp_reset_vectors) mov x0, #HVC_RESET_VECTORS hvc #0 ret -ENDPROC(__hyp_reset_vectors) +SYM_FUNC_END(__hyp_reset_vectors) diff --git a/arch/arm64/kernel/probes/kprobes_trampoline.S b/arch/arm64/kernel/probes/kprobes_trampoline.S index 45dce03aaeaf..890ca72c5a51 100644 --- a/arch/arm64/kernel/probes/kprobes_trampoline.S +++ b/arch/arm64/kernel/probes/kprobes_trampoline.S @@ -61,7 +61,7 @@ ldp x28, x29, [sp, #S_X28] .endm -ENTRY(kretprobe_trampoline) +SYM_CODE_START(kretprobe_trampoline) sub sp, sp, #S_FRAME_SIZE save_all_base_regs @@ -79,4 +79,4 @@ ENTRY(kretprobe_trampoline) add sp, sp, #S_FRAME_SIZE ret -ENDPROC(kretprobe_trampoline) +SYM_CODE_END(kretprobe_trampoline) diff --git a/arch/arm64/kernel/reloc_test_syms.S b/arch/arm64/kernel/reloc_test_syms.S index 16a34f188f26..53e8cdfe80e1 100644 --- a/arch/arm64/kernel/reloc_test_syms.S +++ b/arch/arm64/kernel/reloc_test_syms.S @@ -5,81 +5,81 @@ #include -ENTRY(absolute_data64) +SYM_CODE_START(absolute_data64) ldr x0, 0f ret 0: .quad sym64_abs -ENDPROC(absolute_data64) +SYM_CODE_END(absolute_data64) -ENTRY(absolute_data32) +SYM_CODE_START(absolute_data32) ldr w0, 0f ret 0: .long sym32_abs -ENDPROC(absolute_data32) +SYM_CODE_END(absolute_data32) -ENTRY(absolute_data16) +SYM_CODE_START(absolute_data16) adr x0, 0f ldrh w0, [x0] ret 0: .short sym16_abs, 0 -ENDPROC(absolute_data16) +SYM_CODE_END(absolute_data16) -ENTRY(signed_movw) +SYM_CODE_START(signed_movw) movz x0, #:abs_g2_s:sym64_abs movk x0, #:abs_g1_nc:sym64_abs movk x0, #:abs_g0_nc:sym64_abs ret -ENDPROC(signed_movw) +SYM_CODE_END(signed_movw) -ENTRY(unsigned_movw) +SYM_CODE_START(unsigned_movw) movz x0, #:abs_g3:sym64_abs movk x0, #:abs_g2_nc:sym64_abs movk x0, #:abs_g1_nc:sym64_abs movk x0, #:abs_g0_nc:sym64_abs ret -ENDPROC(unsigned_movw) +SYM_CODE_END(unsigned_movw) .align 12 .space 0xff8 -ENTRY(relative_adrp) +SYM_CODE_START(relative_adrp) adrp x0, sym64_rel add x0, x0, #:lo12:sym64_rel ret -ENDPROC(relative_adrp) +SYM_CODE_END(relative_adrp) .align 12 .space 0xffc -ENTRY(relative_adrp_far) +SYM_CODE_START(relative_adrp_far) adrp x0, memstart_addr add x0, x0, #:lo12:memstart_addr ret -ENDPROC(relative_adrp_far) +SYM_CODE_END(relative_adrp_far) -ENTRY(relative_adr) +SYM_CODE_START(relative_adr) adr x0, sym64_rel ret -ENDPROC(relative_adr) +SYM_CODE_END(relative_adr) -ENTRY(relative_data64) +SYM_CODE_START(relative_data64) adr x1, 0f ldr x0, [x1] add x0, x0, x1 ret 0: .quad sym64_rel - . -ENDPROC(relative_data64) +SYM_CODE_END(relative_data64) -ENTRY(relative_data32) +SYM_CODE_START(relative_data32) adr x1, 0f ldr w0, [x1] add x0, x0, x1 ret 0: .long sym64_rel - . -ENDPROC(relative_data32) +SYM_CODE_END(relative_data32) -ENTRY(relative_data16) +SYM_CODE_START(relative_data16) adr x1, 0f ldrsh w0, [x1] add x0, x0, x1 ret 0: .short sym64_rel - ., 0 -ENDPROC(relative_data16) +SYM_CODE_END(relative_data16) diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index c1d7db71a726..dc65443f139d 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -26,7 +26,7 @@ * control_code_page, a special page which has been set up to be preserved * during the copy operation. */ -ENTRY(arm64_relocate_new_kernel) +SYM_CODE_START(arm64_relocate_new_kernel) /* Setup the list loop variables. 
*/ mov x18, x2 /* x18 = dtb address */ @@ -111,7 +111,7 @@ ENTRY(arm64_relocate_new_kernel) mov x3, xzr br x17 -ENDPROC(arm64_relocate_new_kernel) +SYM_CODE_END(arm64_relocate_new_kernel) .ltorg diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S index f5b04dd8a710..a079551aa7b0 100644 --- a/arch/arm64/kernel/sleep.S +++ b/arch/arm64/kernel/sleep.S @@ -61,7 +61,7 @@ * * x0 = struct sleep_stack_data area */ -ENTRY(__cpu_suspend_enter) +SYM_FUNC_START(__cpu_suspend_enter) stp x29, lr, [x0, #SLEEP_STACK_DATA_CALLEE_REGS] stp x19, x20, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+16] stp x21, x22, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+32] @@ -94,10 +94,10 @@ ENTRY(__cpu_suspend_enter) ldp x29, lr, [sp], #16 mov x0, #1 ret -ENDPROC(__cpu_suspend_enter) +SYM_FUNC_END(__cpu_suspend_enter) .pushsection ".idmap.text", "awx" -ENTRY(cpu_resume) +SYM_FUNC_START(cpu_resume) bl el2_setup // if in EL2 drop to EL1 cleanly bl __cpu_setup /* enable the MMU early - so we can access sleep_save_stash by va */ @@ -105,11 +105,11 @@ ENTRY(cpu_resume) bl __enable_mmu ldr x8, =_cpu_resume br x8 -ENDPROC(cpu_resume) +SYM_FUNC_END(cpu_resume) .ltorg .popsection -ENTRY(_cpu_resume) +SYM_FUNC_START(_cpu_resume) mrs x1, mpidr_el1 adr_l x8, mpidr_hash // x8 = struct mpidr_hash virt address @@ -145,4 +145,4 @@ ENTRY(_cpu_resume) ldp x29, lr, [x29] mov x0, #0 ret -ENDPROC(_cpu_resume) +SYM_FUNC_END(_cpu_resume) diff --git a/arch/arm64/kernel/smccc-call.S b/arch/arm64/kernel/smccc-call.S index 54655273d1e0..1f93809528a4 100644 --- a/arch/arm64/kernel/smccc-call.S +++ b/arch/arm64/kernel/smccc-call.S @@ -30,9 +30,9 @@ * unsigned long a6, unsigned long a7, struct arm_smccc_res *res, * struct arm_smccc_quirk *quirk) */ -ENTRY(__arm_smccc_smc) +SYM_FUNC_START(__arm_smccc_smc) SMCCC smc -ENDPROC(__arm_smccc_smc) +SYM_FUNC_END(__arm_smccc_smc) EXPORT_SYMBOL(__arm_smccc_smc) /* @@ -41,7 +41,7 @@ EXPORT_SYMBOL(__arm_smccc_smc) * unsigned long a6, unsigned long a7, struct arm_smccc_res *res, * struct arm_smccc_quirk *quirk) */ -ENTRY(__arm_smccc_hvc) +SYM_FUNC_START(__arm_smccc_hvc) SMCCC hvc -ENDPROC(__arm_smccc_hvc) +SYM_FUNC_END(__arm_smccc_hvc) EXPORT_SYMBOL(__arm_smccc_hvc) From patchwork Tue Feb 18 19:58:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 11389497 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 96AF913A4 for ; Tue, 18 Feb 2020 20:07:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 77B0A22B48 for ; Tue, 18 Feb 2020 20:07:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582056429; bh=6/TbO83axwk3tVqqoaeKsoLA0VXz0VRNJNSKn3RG/So=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=Lst94tWJdA3sZ65nAegXpT4jKaEf9mIP08qria1qk5eYy1uWuFealzaQETxEHGOzu G/Hf8yYxYkd0IrWWAR/UkVjkQWwltb8uNPMskt+1couEcasNfL/MbwsiP7XPUzjwGz so8HAKskl0hgjy0qsLbuRPXh7YdCQ2Ysf8nRhzuo= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727984AbgBRUHI (ORCPT ); Tue, 18 Feb 2020 15:07:08 -0500 Received: from foss.arm.com ([217.140.110.172]:60660 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726383AbgBRT7X (ORCPT ); Tue, 18 Feb 2020 14:59:23 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com 
(unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EB2D2101E; Tue, 18 Feb 2020 11:59:22 -0800 (PST) Received: from localhost (unknown [10.37.6.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6D1873F68F; Tue, 18 Feb 2020 11:59:22 -0800 (PST) From: Mark Brown To: Herbert Xu , "David S. Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 13/18] arm64: kvm: Annotate assembly using modern annotations Date: Tue, 18 Feb 2020 19:58:37 +0000 Message-Id: <20200218195842.34156-14-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC with separate annotations for standard C callable functions, data and code with different calling conventions. Update the more straightforward annotations in the kvm code to the new macros. Signed-off-by: Mark Brown Acked-by: Marc Zyngier --- arch/arm64/kvm/hyp-init.S | 8 ++++---- arch/arm64/kvm/hyp.S | 4 ++-- arch/arm64/kvm/hyp/fpsimd.S | 8 ++++---- arch/arm64/kvm/hyp/hyp-entry.S | 15 ++++++++------- 4 files changed, 18 insertions(+), 17 deletions(-) diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S index 160be2b4696d..84f32cf5abc7 100644 --- a/arch/arm64/kvm/hyp-init.S +++ b/arch/arm64/kvm/hyp-init.S @@ -18,7 +18,7 @@ .align 11 -ENTRY(__kvm_hyp_init) +SYM_CODE_START(__kvm_hyp_init) ventry __invalid // Synchronous EL2t ventry __invalid // IRQ EL2t ventry __invalid // FIQ EL2t @@ -117,9 +117,9 @@ CPU_BE( orr x4, x4, #SCTLR_ELx_EE) /* Hello, World! */ eret -ENDPROC(__kvm_hyp_init) +SYM_CODE_END(__kvm_hyp_init) -ENTRY(__kvm_handle_stub_hvc) +SYM_CODE_START(__kvm_handle_stub_hvc) cmp x0, #HVC_SOFT_RESTART b.ne 1f @@ -158,7 +158,7 @@ reset: ldr x0, =HVC_STUB_ERR eret -ENDPROC(__kvm_handle_stub_hvc) +SYM_CODE_END(__kvm_handle_stub_hvc) .ltorg diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S index c0094d520dff..3c79a1124af2 100644 --- a/arch/arm64/kvm/hyp.S +++ b/arch/arm64/kvm/hyp.S @@ -28,7 +28,7 @@ * and is used to implement hyp stubs in the same way as in * arch/arm64/kernel/hyp_stub.S. 
*/ -ENTRY(__kvm_call_hyp) +SYM_FUNC_START(__kvm_call_hyp) hvc #0 ret -ENDPROC(__kvm_call_hyp) +SYM_FUNC_END(__kvm_call_hyp) diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 78ff53225691..5b8ff517ff10 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -11,12 +11,12 @@ .text .pushsection .hyp.text, "ax" -ENTRY(__fpsimd_save_state) +SYM_FUNC_START(__fpsimd_save_state) fpsimd_save x0, 1 ret -ENDPROC(__fpsimd_save_state) +SYM_FUNC_END(__fpsimd_save_state) -ENTRY(__fpsimd_restore_state) +SYM_FUNC_START(__fpsimd_restore_state) fpsimd_restore x0, 1 ret -ENDPROC(__fpsimd_restore_state) +SYM_FUNC_END(__fpsimd_restore_state) diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index ffa68d5713f1..0aea8f9ab23d 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -180,7 +180,7 @@ el2_error: eret sb -ENTRY(__hyp_do_panic) +SYM_FUNC_START(__hyp_do_panic) mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ PSR_MODE_EL1h) msr spsr_el2, lr @@ -188,18 +188,19 @@ ENTRY(__hyp_do_panic) msr elr_el2, lr eret sb -ENDPROC(__hyp_do_panic) +SYM_FUNC_END(__hyp_do_panic) -ENTRY(__hyp_panic) +SYM_CODE_START(__hyp_panic) get_host_ctxt x0, x1 b hyp_panic -ENDPROC(__hyp_panic) +SYM_CODE_END(__hyp_panic) .macro invalid_vector label, target = __hyp_panic .align 2 +SYM_CODE_START(\label) \label: b \target -ENDPROC(\label) +SYM_CODE_END(\label) .endm /* None of these should ever happen */ @@ -246,7 +247,7 @@ check_preamble_length 661b, 662b check_preamble_length 661b, 662b .endm -ENTRY(__kvm_hyp_vector) +SYM_CODE_START(__kvm_hyp_vector) invalid_vect el2t_sync_invalid // Synchronous EL2t invalid_vect el2t_irq_invalid // IRQ EL2t invalid_vect el2t_fiq_invalid // FIQ EL2t @@ -266,7 +267,7 @@ ENTRY(__kvm_hyp_vector) valid_vect el1_irq // IRQ 32-bit EL1 invalid_vect el1_fiq_invalid // FIQ 32-bit EL1 valid_vect el1_error // Error 32-bit EL1 -ENDPROC(__kvm_hyp_vector) +SYM_CODE_END(__kvm_hyp_vector) #ifdef CONFIG_KVM_INDIRECT_VECTORS .macro hyp_ventry From patchwork Tue Feb 18 19:58:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 11389449 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 86DA514E3 for ; Tue, 18 Feb 2020 19:59:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 67F242464E for ; Tue, 18 Feb 2020 19:59:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582055967; bh=j3UIKZlwavp3+EwC11Ly/4J9ZTYUGCphJQVnMqeeRzI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=o14h7uPoRynRJsxhtLZdu+Dv+bAG3PB5lWAc7eol33lppmb+9AMJFXzi7wXVvJcl4 67GwBIXD9i+bsESpN0hbjDPtUu25MmfnUmAKLlhWgYAGrjeX0buW2VQ66gG2Q84Uoz 0F1Cyp94mrcdTigdUVUYA7rczwTZdDM2b4zo4AE0= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728412AbgBRT70 (ORCPT ); Tue, 18 Feb 2020 14:59:26 -0500 Received: from foss.arm.com ([217.140.110.172]:60674 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727872AbgBRT7Z (ORCPT ); Tue, 18 Feb 2020 14:59:25 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2E7DE31B; Tue, 18 Feb 2020 11:59:25 
-0800 (PST) Received: from localhost (unknown [10.37.6.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A57833F68F; Tue, 18 Feb 2020 11:59:24 -0800 (PST) From: Mark Brown To: Herbert Xu , "David S. Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 14/18] arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs Date: Tue, 18 Feb 2020 19:58:38 +0000 Message-Id: <20200218195842.34156-15-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org We have recently introduced new macros for annotating assembly symbols for things that aren't C functions, SYM_CODE_START() and SYM_CODE_END(), in an effort to clarify and simplify our annotations of assembly files. Using these for __bp_harden_hyp_vecs is more involved than for most symbols as this symbol is annotated quite unusually as rather than just have the explicit symbol we define _start and _end symbols which we then use to compute the length. This does not play at all nicely with the new style macros. Since the size of the vectors is a known constant which won't vary the simplest thing to do is simply to drop the separate _start and _end symbols and just use a #define for the size. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_mmu.h | 9 ++++----- arch/arm64/include/asm/mmu.h | 4 +++- arch/arm64/kernel/cpu_errata.c | 2 +- arch/arm64/kvm/hyp/hyp-entry.S | 6 ++++-- 4 files changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index 53d846f1bfe7..b5f723cf9599 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -480,7 +480,7 @@ static inline void *kvm_get_hyp_vector(void) int slot = -1; if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) { - vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs_start)); + vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs)); slot = data->hyp_vectors_slot; } @@ -509,14 +509,13 @@ static inline int kvm_map_vectors(void) * HBP + HEL2 -> use hardened vertors and use exec mapping */ if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) { - __kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs_start); + __kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs); __kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base); } if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) { - phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs_start); - unsigned long size = (__bp_harden_hyp_vecs_end - - __bp_harden_hyp_vecs_start); + phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs); + unsigned long size = __BP_HARDEN_HYP_VECS_SZ; /* * Always allocate a spare vector slot, as we don't diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index e4d862420bb4..a3324d6ccbfe 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -13,6 +13,7 @@ #define TTBR_ASID_MASK (UL(0xffff) << 48) #define BP_HARDEN_EL2_SLOTS 4 +#define __BP_HARDEN_HYP_VECS_SZ (BP_HARDEN_EL2_SLOTS * SZ_2K) #ifndef __ASSEMBLY__ @@ -45,7 +46,8 @@ struct bp_hardening_data { #if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \ defined(CONFIG_HARDEN_EL2_VECTORS)) -extern 
char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[]; + +extern char __bp_harden_hyp_vecs[]; extern atomic_t arm64_el2_vector_last_slot; #endif /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */ diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 703ad0a84f99..0af2201cefda 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -119,7 +119,7 @@ extern char __smccc_workaround_1_smc_end[]; static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) { - void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K); + void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K); int i; for (i = 0; i < SZ_2K; i += 0x80) diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 0aea8f9ab23d..1e2ab928a92f 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -312,11 +312,13 @@ alternative_cb_end .endm .align 11 -ENTRY(__bp_harden_hyp_vecs_start) +SYM_CODE_START(__bp_harden_hyp_vecs) .rept BP_HARDEN_EL2_SLOTS generate_vectors .endr -ENTRY(__bp_harden_hyp_vecs_end) +1: .org __bp_harden_hyp_vecs + __BP_HARDEN_HYP_VECS_SZ + .org 1b +SYM_CODE_END(__bp_harden_hyp_vecs) .popsection From patchwork Tue Feb 18 19:58:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 11389495 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9772F14E3 for ; Tue, 18 Feb 2020 20:07:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 78B8522B48 for ; Tue, 18 Feb 2020 20:07:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582056425; bh=uzY3qNW/TVXxIDrEpfAtWbrR6Pu6D9xU+6+meqsenkE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=TM4RBUjCRyjry4/DBoZEaaeKJHU5Z2UbC7UPm7s+QnUozq3eJxE1miwseNSmViuMv ITrkYFYQwHRN0kzqusUDub5IzIymSQJ+B8eX3UXeOl3Ehf+4jVBKqY6gQwteQgwNR+ +nFNi7Ss1+KI4dWy6r+He6tyPaNjZpa1XXDWcBF4= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726801AbgBRUHE (ORCPT ); Tue, 18 Feb 2020 15:07:04 -0500 Received: from foss.arm.com ([217.140.110.172]:60694 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728423AbgBRT71 (ORCPT ); Tue, 18 Feb 2020 14:59:27 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 673131045; Tue, 18 Feb 2020 11:59:27 -0800 (PST) Received: from localhost (unknown [10.37.6.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DE5963F68F; Tue, 18 Feb 2020 11:59:26 -0800 (PST) From: Mark Brown To: Herbert Xu , "David S. 
Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 15/18] arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations Date: Tue, 18 Feb 2020 19:58:39 +0000 Message-Id: <20200218195842.34156-16-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions in the kernel new macros have been introduced. These replace ENTRY and ENDPROC with separate annotations for standard C callable functions, data and code with different calling conventions. Using these for __smccc_workaround_1_smc is more involved than for most symbols as this symbol is annotated quite unusually, rather than just have the explicit symbol we define _start and _end symbols which we then use to compute the length. This does not play at all nicely with the new style macros. Instead define a constant for the size of the function and use that in both the C code and for .org based size checks in the assembly code. Signed-off-by: Mark Brown --- arch/arm64/include/asm/kvm_asm.h | 4 ++++ arch/arm64/kernel/cpu_errata.c | 14 ++++++-------- arch/arm64/kvm/hyp/hyp-entry.S | 6 ++++-- 3 files changed, 14 insertions(+), 10 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 44a243754c1b..7c7eeeaab9fa 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -36,6 +36,8 @@ */ #define KVM_VECTOR_PREAMBLE (2 * AARCH64_INSN_SIZE) +#define __SMCCC_WORKAROUND_1_SMC_SZ 36 + #ifndef __ASSEMBLY__ #include @@ -75,6 +77,8 @@ extern void __vgic_v3_init_lrs(void); extern u32 __kvm_get_mdcr_el2(void); +extern char __smccc_workaround_1_smc[__SMCCC_WORKAROUND_1_SMC_SZ]; + /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */ #define __hyp_this_cpu_ptr(sym) \ ({ \ diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 0af2201cefda..6a2ca339741c 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -11,6 +11,7 @@ #include #include #include +#include #include static bool __maybe_unused @@ -113,9 +114,6 @@ atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1); DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data); #ifdef CONFIG_KVM_INDIRECT_VECTORS -extern char __smccc_workaround_1_smc_start[]; -extern char __smccc_workaround_1_smc_end[]; - static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) { @@ -163,9 +161,6 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn, raw_spin_unlock(&bp_lock); } #else -#define __smccc_workaround_1_smc_start NULL -#define __smccc_workaround_1_smc_end NULL - static void install_bp_hardening_cb(bp_hardening_cb_t fn, const char *hyp_vecs_start, const char *hyp_vecs_end) @@ -239,11 +234,14 @@ static int detect_harden_bp_fw(void) smccc_end = NULL; break; +#if IS_ENABLED(CONFIG_KVM_ARM_HOST) case SMCCC_CONDUIT_SMC: cb = call_smc_arch_workaround_1; - smccc_start = __smccc_workaround_1_smc_start; - smccc_end = __smccc_workaround_1_smc_end; + smccc_start = __smccc_workaround_1_smc; + smccc_end = __smccc_workaround_1_smc + + 
__SMCCC_WORKAROUND_1_SMC_SZ; break; +#endif default: return -1; diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 1e2ab928a92f..c2a13ab3c471 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -322,7 +322,7 @@ SYM_CODE_END(__bp_harden_hyp_vecs) .popsection -ENTRY(__smccc_workaround_1_smc_start) +SYM_CODE_START(__smccc_workaround_1_smc) esb sub sp, sp, #(8 * 4) stp x2, x3, [sp, #(8 * 0)] @@ -332,5 +332,7 @@ ENTRY(__smccc_workaround_1_smc_start) ldp x2, x3, [sp, #(8 * 0)] ldp x0, x1, [sp, #(8 * 2)] add sp, sp, #(8 * 4) -ENTRY(__smccc_workaround_1_smc_end) +1: .org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ + .org 1b +SYM_CODE_END(__smccc_workaround_1_smc) #endif From patchwork Tue Feb 18 19:58:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 11389493 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B782B14E3 for ; Tue, 18 Feb 2020 20:07:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 99C9122B48 for ; Tue, 18 Feb 2020 20:07:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582056422; bh=+EpoHTgJOzUBG+b206m+PV/9fUxVFmmP+AM25H/RWzc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=h4/k+VC4VDcnmGiRcXb+3jkshg/2jRdgcHfhszOx1+xijv4td3KXRtOYbkMpqOiw5 PgbzPOsvbKY/2DRAAWVejX58OHrMCX4xhZ/Lo+ppXTDmb11ow65alSFwMr0jy8cWmh E7rxzFSfXI57lrTtGL+VlBpvFJi9oS594V4Zphk0= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728433AbgBRT7b (ORCPT ); Tue, 18 Feb 2020 14:59:31 -0500 Received: from foss.arm.com ([217.140.110.172]:60712 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727158AbgBRT73 (ORCPT ); Tue, 18 Feb 2020 14:59:29 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9EC6931B; Tue, 18 Feb 2020 11:59:29 -0800 (PST) Received: from localhost (unknown [10.37.6.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 22BE63F68F; Tue, 18 Feb 2020 11:59:28 -0800 (PST) From: Mark Brown To: Herbert Xu , "David S. Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown , James Morse Subject: [PATCH 16/18] arm64: sdei: Annotate SDEI entry points using new style annotations Date: Tue, 18 Feb 2020 19:58:40 +0000 Message-Id: <20200218195842.34156-17-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. 
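To make the distinction concrete, here is a minimal sketch with invented symbol names (nothing in it is taken from this patch): SYM_FUNC_* is for code reached by an ordinary C call and following the AAPCS, while SYM_CODE_* is for code entered by some other route, such as a vector, trampoline or firmware handover, where no calling-convention assumptions hold:

#include <linux/linkage.h>

SYM_FUNC_START(example_c_callable)	// called from C, obeys the AAPCS
	mov	x0, #1
	ret
SYM_FUNC_END(example_c_callable)

SYM_CODE_START(example_entry_point)	// entered directly, no AAPCS contract
	b	example_c_callable
SYM_CODE_END(example_entry_point)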
The SDEI entry points are currently annotated as normal functions but are called from non-kernel contexts with non-standard calling convention and should therefore be annotated as such so do so. Signed-off-by: Mark Brown Acked-by: James Morse --- arch/arm64/kernel/entry.S | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 7439f29946fb..e5d4e30ee242 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -938,7 +938,7 @@ NOKPROBE(ret_from_fork) */ .ltorg .pushsection ".entry.tramp.text", "ax" -ENTRY(__sdei_asm_entry_trampoline) +SYM_CODE_START(__sdei_asm_entry_trampoline) mrs x4, ttbr1_el1 tbz x4, #USER_ASID_BIT, 1f @@ -960,7 +960,7 @@ ENTRY(__sdei_asm_entry_trampoline) ldr x4, =__sdei_asm_handler #endif br x4 -ENDPROC(__sdei_asm_entry_trampoline) +SYM_CODE_END(__sdei_asm_entry_trampoline) NOKPROBE(__sdei_asm_entry_trampoline) /* @@ -970,14 +970,14 @@ NOKPROBE(__sdei_asm_entry_trampoline) * x2: exit_mode * x4: struct sdei_registered_event argument from registration time. */ -ENTRY(__sdei_asm_exit_trampoline) +SYM_CODE_START(__sdei_asm_exit_trampoline) ldr x4, [x4, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)] cbnz x4, 1f tramp_unmap_kernel tmp=x4 1: sdei_handler_exit exit_mode=x2 -ENDPROC(__sdei_asm_exit_trampoline) +SYM_CODE_END(__sdei_asm_exit_trampoline) NOKPROBE(__sdei_asm_exit_trampoline) .ltorg .popsection // .entry.tramp.text @@ -1003,7 +1003,7 @@ SYM_DATA_END(__sdei_asm_trampoline_next_handler) * follow SMC-CC. We save (or retrieve) all the registers as the handler may * want them. */ -ENTRY(__sdei_asm_handler) +SYM_CODE_START(__sdei_asm_handler) stp x2, x3, [x1, #SDEI_EVENT_INTREGS + S_PC] stp x4, x5, [x1, #SDEI_EVENT_INTREGS + 16 * 2] stp x6, x7, [x1, #SDEI_EVENT_INTREGS + 16 * 3] @@ -1086,6 +1086,6 @@ alternative_else_nop_endif tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline br x5 #endif -ENDPROC(__sdei_asm_handler) +SYM_CODE_END(__sdei_asm_handler) NOKPROBE(__sdei_asm_handler) #endif /* CONFIG_ARM_SDE_INTERFACE */ From patchwork Tue Feb 18 19:58:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 11389453 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AB361930 for ; Tue, 18 Feb 2020 19:59:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 8B13C2465A for ; Tue, 18 Feb 2020 19:59:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582055976; bh=3aekFAxZbI4K+U3+bCK6JpmHqD46Z/yonCcxr9hPnz8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=u9z0MokLgd42Fh3SNQ8ARroMw2rPJDPUpzItAGGU1jcZGKMawGBNoE2t+VOr0g4oC a8LDoLaurU65q6ESIsvyrZQAxT3ABsMmbmjxIrMrp2OttyZ0nVNmnSEeYFcea+bGda 69fgER/ZWbVdXGCDB5nsJWPpHUIN0KQ8N6Lbd3JA= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727857AbgBRT7d (ORCPT ); Tue, 18 Feb 2020 14:59:33 -0500 Received: from foss.arm.com ([217.140.110.172]:60732 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726482AbgBRT7c (ORCPT ); Tue, 18 Feb 2020 14:59:32 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D88DEFEC; Tue, 18 Feb 2020 
11:59:31 -0800 (PST) Received: from localhost (unknown [10.37.6.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5BF623F68F; Tue, 18 Feb 2020 11:59:31 -0800 (PST) From: Mark Brown To: Herbert Xu , "David S. Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 17/18] arm64: vdso: Convert to modern assembler annotations Date: Tue, 18 Feb 2020 19:58:41 +0000 Message-Id: <20200218195842.34156-18-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. Convert the assembly function in the arm64 VDSO to the new macros. Signed-off-by: Mark Brown --- arch/arm64/kernel/vdso/sigreturn.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/vdso/sigreturn.S b/arch/arm64/kernel/vdso/sigreturn.S index 0723aa398d6e..12324863d5c2 100644 --- a/arch/arm64/kernel/vdso/sigreturn.S +++ b/arch/arm64/kernel/vdso/sigreturn.S @@ -14,7 +14,7 @@ .text nop -ENTRY(__kernel_rt_sigreturn) +SYM_FUNC_START(__kernel_rt_sigreturn) .cfi_startproc .cfi_signal_frame .cfi_def_cfa x29, 0 @@ -23,4 +23,4 @@ ENTRY(__kernel_rt_sigreturn) mov x8, #__NR_rt_sigreturn svc #0 .cfi_endproc -ENDPROC(__kernel_rt_sigreturn) +SYM_FUNC_END(__kernel_rt_sigreturn) From patchwork Tue Feb 18 19:58:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mark Brown X-Patchwork-Id: 11389491 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 39C8613A4 for ; Tue, 18 Feb 2020 20:07:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1AC2722B48 for ; Tue, 18 Feb 2020 20:07:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582056420; bh=ysatse66WsEP+APXGdPuRUVO8vuni1I3B4/iyQFpvhg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=DIsugi8LlhqrkeRt3GTPSfX9ECYyEEpQegHWNibQgljPBiE0mq4rcepoR13d+NpHy Jz+BqCKQv8uHVKq9Bf3+zoQW+xpW8swT4SN548JdpfodiZ3brxN5Uxx3Rypxg/5q8x Kw/yMmgC7Usy6jl1OenxL970Q1jpPXBCUnL1bdkw= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728114AbgBRT7g (ORCPT ); Tue, 18 Feb 2020 14:59:36 -0500 Received: from foss.arm.com ([217.140.110.172]:60752 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728069AbgBRT7e (ORCPT ); Tue, 18 Feb 2020 14:59:34 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1E80C101E; Tue, 18 Feb 2020 11:59:34 -0800 (PST) Received: from localhost (unknown [10.37.6.21]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 94C873F68F; Tue, 18 Feb 2020 11:59:33 -0800 (PST) From: Mark Brown To: Herbert Xu , "David S. 
Miller" , Catalin Marinas , Will Deacon , Marc Zyngier , James Morse , Julien Thierry , Suzuki K Poulose Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-crypto@vger.kernel.org, Mark Brown Subject: [PATCH 18/18] arm64: vdso32: Convert to modern assembler annotations Date: Tue, 18 Feb 2020 19:58:42 +0000 Message-Id: <20200218195842.34156-19-broonie@kernel.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200218195842.34156-1-broonie@kernel.org> References: <20200218195842.34156-1-broonie@kernel.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In an effort to clarify and simplify the annotation of assembly functions new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. Use these for the compat VDSO, allowing us to drop the custom ARM_ENTRY() and ARM_ENDPROC() macros. Signed-off-by: Mark Brown --- arch/arm64/kernel/vdso32/sigreturn.S | 23 ++++++++--------------- 1 file changed, 8 insertions(+), 15 deletions(-) diff --git a/arch/arm64/kernel/vdso32/sigreturn.S b/arch/arm64/kernel/vdso32/sigreturn.S index 1a81277c2d09..620524969696 100644 --- a/arch/arm64/kernel/vdso32/sigreturn.S +++ b/arch/arm64/kernel/vdso32/sigreturn.S @@ -10,13 +10,6 @@ #include #include -#define ARM_ENTRY(name) \ - ENTRY(name) - -#define ARM_ENDPROC(name) \ - .type name, %function; \ - END(name) - .text .arm @@ -24,39 +17,39 @@ .save {r0-r15} .pad #COMPAT_SIGFRAME_REGS_OFFSET nop -ARM_ENTRY(__kernel_sigreturn_arm) +SYM_FUNC_START(__kernel_sigreturn_arm) mov r7, #__NR_compat_sigreturn svc #0 .fnend -ARM_ENDPROC(__kernel_sigreturn_arm) +SYM_FUNC_END(__kernel_sigreturn_arm) .fnstart .save {r0-r15} .pad #COMPAT_RT_SIGFRAME_REGS_OFFSET nop -ARM_ENTRY(__kernel_rt_sigreturn_arm) +SYM_FUNC_START(__kernel_rt_sigreturn_arm) mov r7, #__NR_compat_rt_sigreturn svc #0 .fnend -ARM_ENDPROC(__kernel_rt_sigreturn_arm) +SYM_FUNC_END(__kernel_rt_sigreturn_arm) .thumb .fnstart .save {r0-r15} .pad #COMPAT_SIGFRAME_REGS_OFFSET nop -ARM_ENTRY(__kernel_sigreturn_thumb) +SYM_FUNC_START(__kernel_sigreturn_thumb) mov r7, #__NR_compat_sigreturn svc #0 .fnend -ARM_ENDPROC(__kernel_sigreturn_thumb) +SYM_FUNC_END(__kernel_sigreturn_thumb) .fnstart .save {r0-r15} .pad #COMPAT_RT_SIGFRAME_REGS_OFFSET nop -ARM_ENTRY(__kernel_rt_sigreturn_thumb) +SYM_FUNC_START(__kernel_rt_sigreturn_thumb) mov r7, #__NR_compat_rt_sigreturn svc #0 .fnend -ARM_ENDPROC(__kernel_rt_sigreturn_thumb) +SYM_FUNC_END(__kernel_rt_sigreturn_thumb)