From patchwork Mon Aug 27 15:38:11 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10577395
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-scsi@vger.kernel.org, jeff.lien@wdc.com,
 Ard Biesheuvel, linux-kernel@vger.kernel.org, martin.petersen@oracle.com,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/2] crypto: arm64/crct10dif - preparatory refactor for 8x8 PMULL version
Date: Mon, 27 Aug 2018 17:38:11 +0200
Message-Id: <20180827153812.6763-2-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180827153812.6763-1-ard.biesheuvel@linaro.org>
References: <20180827153812.6763-1-ard.biesheuvel@linaro.org>

Reorganize the CRC-T10DIF asm routine so we can easily instantiate an
alternative version based on 8x8 polynomial multiplication in a
subsequent patch.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/crct10dif-ce-core.S | 160 +++++++++++---------
 arch/arm64/crypto/crct10dif-ce-glue.c |   6 +-
 2 files changed, 90 insertions(+), 76 deletions(-)

diff --git a/arch/arm64/crypto/crct10dif-ce-core.S b/arch/arm64/crypto/crct10dif-ce-core.S
index 663ea71cdb38..a39951015e86 100644
--- a/arch/arm64/crypto/crct10dif-ce-core.S
+++ b/arch/arm64/crypto/crct10dif-ce-core.S
@@ -80,7 +80,46 @@

 vzr             .req    v13

-ENTRY(crc_t10dif_pmull)
+        .macro          fold64, p, reg1, reg2
+        ldp             q11, q12, [arg2], #0x20
+
+        __pmull_\p      v8, \reg1, v10, 2
+        __pmull_\p      \reg1, \reg1, v10
+
+CPU_LE( rev64           v11.16b, v11.16b        )
+CPU_LE( rev64           v12.16b, v12.16b        )
+
+        __pmull_\p      v9, \reg2, v10, 2
+        __pmull_\p      \reg2, \reg2, v10
+
+CPU_LE( ext             v11.16b, v11.16b, v11.16b, #8   )
+CPU_LE( ext             v12.16b, v12.16b, v12.16b, #8   )
+
+        eor             \reg1\().16b, \reg1\().16b, v8.16b
+        eor             \reg2\().16b, \reg2\().16b, v9.16b
+        eor             \reg1\().16b, \reg1\().16b, v11.16b
+        eor             \reg2\().16b, \reg2\().16b, v12.16b
+        .endm
+
+        .macro          fold16, p, reg, rk
+        __pmull_\p      v8, \reg, v10
+        __pmull_\p      \reg, \reg, v10, 2
+        .ifnb           \rk
+        ldr_l           q10, \rk, x8
+        .endif
+        eor             v7.16b, v7.16b, v8.16b
+        eor             v7.16b, v7.16b, \reg\().16b
+        .endm
+
+        .macro          __pmull_p64, rd, rn, rm, n
+        .ifb            \n
+        pmull           \rd\().1q, \rn\().1d, \rm\().1d
+        .else
+        pmull2          \rd\().1q, \rn\().2d, \rm\().2d
+        .endif
+        .endm
+
+        .macro          crc_t10dif_pmull, p
         frame_push      3, 128

         mov             arg1_low32, w0
@@ -96,7 +135,7 @@ ENTRY(crc_t10dif_pmull)
         cmp             arg3, #256

         // for sizes less than 128, we can't fold 64B at a time...
-        b.lt            _less_than_128
+        b.lt            .L_less_than_128_\@

         // load the initial crc value
         // crc value does not need to be byte-reflected, but it needs
@@ -147,41 +186,19 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )
         // buffer. The _fold_64_B_loop will fold 64B at a time
         // until we have 64+y Bytes of buffer

-
         // fold 64B at a time. This section of the code folds 4 vector
         // registers in parallel
-_fold_64_B_loop:
-
-        .macro          fold64, reg1, reg2
-        ldp             q11, q12, [arg2], #0x20
-
-        pmull2          v8.1q, \reg1\().2d, v10.2d
-        pmull           \reg1\().1q, \reg1\().1d, v10.1d
-
-CPU_LE( rev64           v11.16b, v11.16b        )
-CPU_LE( rev64           v12.16b, v12.16b        )
-
-        pmull2          v9.1q, \reg2\().2d, v10.2d
-        pmull           \reg2\().1q, \reg2\().1d, v10.1d
-
-CPU_LE( ext             v11.16b, v11.16b, v11.16b, #8   )
-CPU_LE( ext             v12.16b, v12.16b, v12.16b, #8   )
-
-        eor             \reg1\().16b, \reg1\().16b, v8.16b
-        eor             \reg2\().16b, \reg2\().16b, v9.16b
-        eor             \reg1\().16b, \reg1\().16b, v11.16b
-        eor             \reg2\().16b, \reg2\().16b, v12.16b
-        .endm
+.L_fold_64_B_loop_\@:

-        fold64          v0, v1
-        fold64          v2, v3
-        fold64          v4, v5
-        fold64          v6, v7
+        fold64          \p, v0, v1
+        fold64          \p, v2, v3
+        fold64          \p, v4, v5
+        fold64          \p, v6, v7

         subs            arg3, arg3, #128

         // check if there is another 64B in the buffer to be able to fold
-        b.lt            _fold_64_B_end
+        b.lt            .L_fold_64_B_end_\@

         if_will_cond_yield_neon

         stp             q0, q1, [sp, #.Lframe_local_offset]
@@ -197,9 +214,9 @@ CPU_LE( ext             v12.16b, v12.16b, v12.16b, #8   )
         movi            vzr.16b, #0             // init zero register
         endif_yield_neon
-        b               _fold_64_B_loop
+        b               .L_fold_64_B_loop_\@

-_fold_64_B_end:
+.L_fold_64_B_end_\@:
         // at this point, the buffer pointer is pointing at the last y Bytes
         // of the buffer the 64B of folded data is in 4 of the vector
         // registers: v0, v1, v2, v3
@@ -209,37 +226,27 @@ _fold_64_B_end:

         ldr_l           q10, rk9, x8

-        .macro          fold16, reg, rk
-        pmull           v8.1q, \reg\().1d, v10.1d
-        pmull2          \reg\().1q, \reg\().2d, v10.2d
-        .ifnb           \rk
-        ldr_l           q10, \rk, x8
-        .endif
-        eor             v7.16b, v7.16b, v8.16b
-        eor             v7.16b, v7.16b, \reg\().16b
-        .endm
-
-        fold16          v0, rk11
-        fold16          v1, rk13
-        fold16          v2, rk15
-        fold16          v3, rk17
-        fold16          v4, rk19
-        fold16          v5, rk1
-        fold16          v6
+        fold16          \p, v0, rk11
+        fold16          \p, v1, rk13
+        fold16          \p, v2, rk15
+        fold16          \p, v3, rk17
+        fold16          \p, v4, rk19
+        fold16          \p, v5, rk1
+        fold16          \p, v6

         // instead of 64, we add 48 to the loop counter to save 1 instruction
         // from the loop instead of a cmp instruction, we use the negative
         // flag with the jl instruction
         adds            arg3, arg3, #(128-16)
-        b.lt            _final_reduction_for_128
+        b.lt            .L_final_reduction_for_128_\@

         // now we have 16+y bytes left to reduce. 16 Bytes is in register v7
         // and the rest is in memory. We can fold 16 bytes at a time if y>=16
         // continue folding 16B at a time
-_16B_reduction_loop:
-        pmull           v8.1q, v7.1d, v10.1d
-        pmull2          v7.1q, v7.2d, v10.2d
+.L_16B_reduction_loop_\@:
+        __pmull_\p      v8, v7, v10
+        __pmull_\p      v7, v7, v10, 2
         eor             v7.16b, v7.16b, v8.16b
         ldr             q0, [arg2], #16
@@ -251,22 +258,22 @@ CPU_LE( ext             v0.16b, v0.16b, v0.16b, #8      )
         // instead of a cmp instruction, we utilize the flags with the
         // jge instruction equivalent of: cmp arg3, 16-16
         // check if there is any more 16B in the buffer to be able to fold
-        b.ge            _16B_reduction_loop
+        b.ge            .L_16B_reduction_loop_\@

         // now we have 16+z bytes left to reduce, where 0<= z < 16.
         // first, we reduce the data in the xmm7 register
-_final_reduction_for_128:
+.L_final_reduction_for_128_\@:
         // check if any more data to fold. If not, compute the CRC of
         // the final 128 bits
         adds            arg3, arg3, #16
-        b.eq            _128_done
+        b.eq            .L_128_done_\@

         // here we are getting data that is less than 16 bytes.
         // since we know that there was data before the pointer, we can
         // offset the input pointer before the actual point, to receive
         // exactly 16 bytes. after that the registers need to be adjusted.
-_get_last_two_regs:
+.L_get_last_two_regs_\@:
         add             arg2, arg2, arg3
         ldr             q1, [arg2, #-16]
CPU_LE( rev64           v1.16b, v1.16b  )
@@ -291,47 +298,46 @@ CPU_LE( ext             v1.16b, v1.16b, v1.16b, #8      )
         bsl             v0.16b, v2.16b, v1.16b

         // fold 16 Bytes
-        pmull           v8.1q, v7.1d, v10.1d
-        pmull2          v7.1q, v7.2d, v10.2d
+        __pmull_\p      v8, v7, v10
+        __pmull_\p      v7, v7, v10, 2
         eor             v7.16b, v7.16b, v8.16b
         eor             v7.16b, v7.16b, v0.16b

-_128_done:
+.L_128_done_\@:
         // compute crc of a 128-bit value
         ldr_l           q10, rk5, x8            // rk5 and rk6 in xmm10

         // 64b fold
         ext             v0.16b, vzr.16b, v7.16b, #8
         mov             v7.d[0], v7.d[1]
-        pmull           v7.1q, v7.1d, v10.1d
+        __pmull_\p      v7, v7, v10
         eor             v7.16b, v7.16b, v0.16b

         // 32b fold
         ext             v0.16b, v7.16b, vzr.16b, #4
         mov             v7.s[3], vzr.s[0]
-        pmull2          v0.1q, v0.2d, v10.2d
+        __pmull_\p      v0, v0, v10, 2
         eor             v7.16b, v7.16b, v0.16b

         // barrett reduction
-_barrett:
         ldr_l           q10, rk7, x8
         mov             v0.d[0], v7.d[1]

-        pmull           v0.1q, v0.1d, v10.1d
+        __pmull_\p      v0, v0, v10
         ext             v0.16b, vzr.16b, v0.16b, #12
-        pmull2          v0.1q, v0.2d, v10.2d
+        __pmull_\p      v0, v0, v10, 2
         ext             v0.16b, vzr.16b, v0.16b, #12
         eor             v7.16b, v7.16b, v0.16b
         mov             w0, v7.s[1]

-_cleanup:
+.L_cleanup_\@:
         // scale the result back to 16 bits
         lsr             x0, x0, #16
         frame_pop
         ret

-_less_than_128:
-        cbz             arg3, _cleanup
+.L_less_than_128_\@:
+        cbz             arg3, .L_cleanup_\@

         movi            v0.16b, #0
         mov             v0.s[3], arg1_low32     // get the initial crc value
@@ -342,20 +348,20 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )
         eor             v7.16b, v7.16b, v0.16b  // xor the initial crc value

         cmp             arg3, #16
-        b.eq            _128_done               // exactly 16 left
-        b.lt            _less_than_16_left
+        b.eq            .L_128_done_\@          // exactly 16 left
+        b.lt            .L_less_than_16_left_\@

         ldr_l           q10, rk1, x8            // rk1 and rk2 in xmm10

         // update the counter. subtract 32 instead of 16 to save one
         // instruction from the loop
         subs            arg3, arg3, #32
-        b.ge            _16B_reduction_loop
+        b.ge            .L_16B_reduction_loop_\@

         add             arg3, arg3, #16
-        b               _get_last_two_regs
+        b               .L_get_last_two_regs_\@

-_less_than_16_left:
+.L_less_than_16_left_\@:
         // shl r9, 4
         adr_l           x0, tbl_shf_table + 16
         sub             x0, x0, arg3
@@ -363,8 +369,12 @@ _less_than_16_left:
         movi            v9.16b, #0x80
         eor             v0.16b, v0.16b, v9.16b
         tbl             v7.16b, {v7.16b}, v0.16b
-        b               _128_done
-ENDPROC(crc_t10dif_pmull)
+        b               .L_128_done_\@
+        .endm
+
+ENTRY(crc_t10dif_pmull_p64)
+        crc_t10dif_pmull        p64
+ENDPROC(crc_t10dif_pmull_p64)

 // precomputed constants
 // these constants are precomputed from the poly:
diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index 96f0cae4a022..343a1e95b11a 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -22,7 +22,9 @@

 #define CRC_T10DIF_PMULL_CHUNK_SIZE     16U

-asmlinkage u16 crc_t10dif_pmull(u16 init_crc, const u8 buf[], u64 len);
+asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 buf[], u64 len);
+
+static u16 (*crc_t10dif_pmull)(u16 init_crc, const u8 buf[], u64 len);

 static int crct10dif_init(struct shash_desc *desc)
 {
@@ -85,6 +87,8 @@ static struct shash_alg crc_t10dif_alg = {

 static int __init crc_t10dif_mod_init(void)
 {
+        crc_t10dif_pmull = crc_t10dif_pmull_p64;
+
         return crypto_register_shash(&crc_t10dif_alg);
 }
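The glue change above replaces the direct call into the asm routine with a
function pointer that is bound once at module init, which is what lets the
next patch slot in a second entry point without touching the shash callbacks.
Below is a minimal user-space sketch of that indirection pattern; the stub
names and bodies are hypothetical, the real implementations are the asm
routines in crct10dif-ce-core.S.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the asm entry points generated from the
 * crc_t10dif_pmull assembler macro; they do not compute a real CRC. */
static uint16_t crc_p64_stub(uint16_t crc, const uint8_t *buf, size_t len)
{
        (void)buf; (void)len;
        return crc;
}

static uint16_t crc_p8_stub(uint16_t crc, const uint8_t *buf, size_t len)
{
        (void)buf; (void)len;
        return crc;
}

/* Single indirection point, mirroring the static crc_t10dif_pmull
 * function pointer introduced in crct10dif-ce-glue.c. */
static uint16_t (*crc_impl)(uint16_t crc, const uint8_t *buf, size_t len);

static void crc_mod_init(int have_64x64_pmull)
{
        /* Patch 1 always binds the p64 routine; patch 2 adds the choice. */
        crc_impl = have_64x64_pmull ? crc_p64_stub : crc_p8_stub;
}

int main(void)
{
        const uint8_t data[] = { 0xde, 0xad, 0xbe, 0xef };

        crc_mod_init(1);
        printf("crc = 0x%04x\n", crc_impl(0, data, sizeof(data)));
        return 0;
}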
From patchwork Mon Aug 27 15:38:12 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10577397

From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-scsi@vger.kernel.org, jeff.lien@wdc.com,
 Ard Biesheuvel, linux-kernel@vger.kernel.org, martin.petersen@oracle.com,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/2] crypto: arm64/crct10dif - implement non-Crypto Extensions alternative
Date: Mon, 27 Aug 2018 17:38:12 +0200
Message-Id: <20180827153812.6763-3-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180827153812.6763-1-ard.biesheuvel@linaro.org>
References: <20180827153812.6763-1-ard.biesheuvel@linaro.org>

The arm64 implementation of the CRC-T10DIF algorithm uses the 64x64 bit
polynomial multiplication instructions, which are optional in the
architecture, and if these instructions are not available, we fall back
to the C routine which is slow and inefficient.
So let's reuse the 64x64 bit PMULL alternative from the GHASH driver
that uses a sequence of ~40 instructions involving 8x8 bit PMULL and
some shifting and masking. This is a lot slower than the original, but
it is still twice as fast as the current [unoptimized] C code on
Cortex-A53, and it is time invariant and much easier on the D-cache.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/crct10dif-ce-core.S | 154 ++++++++++++++++++++
 arch/arm64/crypto/crct10dif-ce-glue.c |  10 +-
 2 files changed, 162 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/crypto/crct10dif-ce-core.S b/arch/arm64/crypto/crct10dif-ce-core.S
index a39951015e86..9e82e8e8ed05 100644
--- a/arch/arm64/crypto/crct10dif-ce-core.S
+++ b/arch/arm64/crypto/crct10dif-ce-core.S
@@ -80,6 +80,145 @@

 vzr             .req    v13

+        ad              .req    v14
+        bd              .req    v10
+
+        k00_16          .req    v15
+        k32_48          .req    v16
+
+        t3              .req    v17
+        t4              .req    v18
+        t5              .req    v19
+        t6              .req    v20
+        t7              .req    v21
+        t8              .req    v22
+        t9              .req    v23
+
+        perm1           .req    v24
+        perm2           .req    v25
+        perm3           .req    v26
+        perm4           .req    v27
+
+        bd1             .req    v28
+        bd2             .req    v29
+        bd3             .req    v30
+        bd4             .req    v31
+
+        .macro          __pmull_init_p64
+        .endm
+
+        .macro          __pmull_pre_p64, bd
+        .endm
+
+        .macro          __pmull_init_p8
+        // k00_16 := 0x0000000000000000_000000000000ffff
+        // k32_48 := 0x00000000ffffffff_0000ffffffffffff
+        movi            k32_48.2d, #0xffffffff
+        mov             k32_48.h[2], k32_48.h[0]
+        ushr            k00_16.2d, k32_48.2d, #32
+
+        // prepare the permutation vectors
+        mov_q           x5, 0x080f0e0d0c0b0a09
+        movi            perm4.8b, #8
+        dup             perm1.2d, x5
+        eor             perm1.16b, perm1.16b, perm4.16b
+        ushr            perm2.2d, perm1.2d, #8
+        ushr            perm3.2d, perm1.2d, #16
+        ushr            perm4.2d, perm1.2d, #24
+        sli             perm2.2d, perm1.2d, #56
+        sli             perm3.2d, perm1.2d, #48
+        sli             perm4.2d, perm1.2d, #40
+        .endm
+
+        .macro          __pmull_pre_p8, bd
+        tbl             bd1.16b, {\bd\().16b}, perm1.16b
+        tbl             bd2.16b, {\bd\().16b}, perm2.16b
+        tbl             bd3.16b, {\bd\().16b}, perm3.16b
+        tbl             bd4.16b, {\bd\().16b}, perm4.16b
+        .endm
+
+__pmull_p8_core:
+.L__pmull_p8_core:
+        ext             t4.8b, ad.8b, ad.8b, #1         // A1
+        ext             t5.8b, ad.8b, ad.8b, #2         // A2
+        ext             t6.8b, ad.8b, ad.8b, #3         // A3
+
+        pmull           t4.8h, t4.8b, bd.8b             // F = A1*B
+        pmull           t8.8h, ad.8b, bd1.8b            // E = A*B1
+        pmull           t5.8h, t5.8b, bd.8b             // H = A2*B
+        pmull           t7.8h, ad.8b, bd2.8b            // G = A*B2
+        pmull           t6.8h, t6.8b, bd.8b             // J = A3*B
+        pmull           t9.8h, ad.8b, bd3.8b            // I = A*B3
+        pmull           t3.8h, ad.8b, bd4.8b            // K = A*B4
+        b               0f
+
+.L__pmull_p8_core2:
+        tbl             t4.16b, {ad.16b}, perm1.16b     // A1
+        tbl             t5.16b, {ad.16b}, perm2.16b     // A2
+        tbl             t6.16b, {ad.16b}, perm3.16b     // A3
+
+        pmull2          t4.8h, t4.16b, bd.16b           // F = A1*B
+        pmull2          t8.8h, ad.16b, bd1.16b          // E = A*B1
+        pmull2          t5.8h, t5.16b, bd.16b           // H = A2*B
+        pmull2          t7.8h, ad.16b, bd2.16b          // G = A*B2
+        pmull2          t6.8h, t6.16b, bd.16b           // J = A3*B
+        pmull2          t9.8h, ad.16b, bd3.16b          // I = A*B3
+        pmull2          t3.8h, ad.16b, bd4.16b          // K = A*B4
+
+0:      eor             t4.16b, t4.16b, t8.16b          // L = E + F
+        eor             t5.16b, t5.16b, t7.16b          // M = G + H
+        eor             t6.16b, t6.16b, t9.16b          // N = I + J
+
+        uzp1            t8.2d, t4.2d, t5.2d
+        uzp2            t4.2d, t4.2d, t5.2d
+        uzp1            t7.2d, t6.2d, t3.2d
+        uzp2            t6.2d, t6.2d, t3.2d
+
+        // t4 = (L) (P0 + P1) << 8
+        // t5 = (M) (P2 + P3) << 16
+        eor             t8.16b, t8.16b, t4.16b
+        and             t4.16b, t4.16b, k32_48.16b
+
+        // t6 = (N) (P4 + P5) << 24
+        // t7 = (K) (P6 + P7) << 32
+        eor             t7.16b, t7.16b, t6.16b
+        and             t6.16b, t6.16b, k00_16.16b
+
+        eor             t8.16b, t8.16b, t4.16b
+        eor             t7.16b, t7.16b, t6.16b
+
+        zip2            t5.2d, t8.2d, t4.2d
+        zip1            t4.2d, t8.2d, t4.2d
+        zip2            t3.2d, t7.2d, t6.2d
+        zip1            t6.2d, t7.2d, t6.2d
+
+        ext             t4.16b, t4.16b, t4.16b, #15
+        ext             t5.16b, t5.16b, t5.16b, #14
+        ext             t6.16b, t6.16b, t6.16b, #13
+        ext             t3.16b, t3.16b, t3.16b, #12
+
+        eor             t4.16b, t4.16b, t5.16b
+        eor             t6.16b, t6.16b, t3.16b
+        ret
+ENDPROC(__pmull_p8_core)
+
+        .macro          __pmull_p8, rq, ad, bd, i
+        .ifnc           \bd, v10
+        .err
+        .endif
+        mov             ad.16b, \ad\().16b
+        .ifb            \i
+        pmull           \rq\().8h, \ad\().8b, bd.8b             // D = A*B
+        .else
+        pmull2          \rq\().8h, \ad\().16b, bd.16b           // D = A*B
+        .endif
+
+        bl              .L__pmull_p8_core\i
+
+        eor             \rq\().16b, \rq\().16b, t4.16b
+        eor             \rq\().16b, \rq\().16b, t6.16b
+        .endm
+
         .macro          fold64, p, reg1, reg2
         ldp             q11, q12, [arg2], #0x20

@@ -106,6 +245,7 @@ CPU_LE( ext             v12.16b, v12.16b, v12.16b, #8   )
         __pmull_\p      \reg, \reg, v10, 2
         .ifnb           \rk
         ldr_l           q10, \rk, x8
+        __pmull_pre_\p  v10
         .endif
         eor             v7.16b, v7.16b, v8.16b
         eor             v7.16b, v7.16b, \reg\().16b
@@ -128,6 +268,8 @@ CPU_LE( ext             v12.16b, v12.16b, v12.16b, #8   )

         movi            vzr.16b, #0             // init zero register

+        __pmull_init_\p
+
         // adjust the 16-bit initial_crc value, scale it to 32 bits
         lsl             arg1_low32, arg1_low32, #16
@@ -176,6 +318,7 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )
         ldr_l           q10, rk3, x8    // xmm10 has rk3 and rk4
                                         // type of pmull instruction
                                         // will determine which constant to use
+        __pmull_pre_\p  v10

         //
         // we subtract 256 instead of 128 to save one instruction from the loop
@@ -212,6 +355,8 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )
         ldp             q6, q7, [sp, #.Lframe_local_offset + 96]
         ldr_l           q10, rk3, x8
         movi            vzr.16b, #0             // init zero register
+        __pmull_init_\p
+        __pmull_pre_\p  v10
         endif_yield_neon
         b               .L_fold_64_B_loop_\@
@@ -225,6 +370,7 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )
         // constants
         ldr_l           q10, rk9, x8
+        __pmull_pre_\p  v10

         fold16          \p, v0, rk11
         fold16          \p, v1, rk13
@@ -306,6 +452,7 @@ CPU_LE( ext             v1.16b, v1.16b, v1.16b, #8      )
 .L_128_done_\@:
         // compute crc of a 128-bit value
         ldr_l           q10, rk5, x8            // rk5 and rk6 in xmm10
+        __pmull_pre_\p  v10

         // 64b fold
         ext             v0.16b, vzr.16b, v7.16b, #8
@@ -321,6 +468,7 @@ CPU_LE( ext             v1.16b, v1.16b, v1.16b, #8      )

         // barrett reduction
         ldr_l           q10, rk7, x8
+        __pmull_pre_\p  v10
         mov             v0.d[0], v7.d[1]

         __pmull_\p      v0, v0, v10
@@ -352,6 +500,7 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )
         b.lt            .L_less_than_16_left_\@

         ldr_l           q10, rk1, x8            // rk1 and rk2 in xmm10
+        __pmull_pre_\p  v10

         // update the counter. subtract 32 instead of 16 to save one
         // instruction from the loop
@@ -372,6 +521,11 @@ CPU_LE( ext             v7.16b, v7.16b, v7.16b, #8      )

         b               .L_128_done_\@
         .endm

+ENTRY(crc_t10dif_pmull_p8)
+        crc_t10dif_pmull        p8
+ENDPROC(crc_t10dif_pmull_p8)
+
+        .align          5
 ENTRY(crc_t10dif_pmull_p64)
         crc_t10dif_pmull        p64
 ENDPROC(crc_t10dif_pmull_p64)

diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index 343a1e95b11a..b461d62023f2 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -23,6 +23,7 @@
 #define CRC_T10DIF_PMULL_CHUNK_SIZE     16U

 asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 buf[], u64 len);
+asmlinkage u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 buf[], u64 len);

 static u16 (*crc_t10dif_pmull)(u16 init_crc, const u8 buf[], u64 len);

@@ -87,7 +88,10 @@ static struct shash_alg crc_t10dif_alg = {

 static int __init crc_t10dif_mod_init(void)
 {
-        crc_t10dif_pmull = crc_t10dif_pmull_p64;
+        if (elf_hwcap & HWCAP_PMULL)
+                crc_t10dif_pmull = crc_t10dif_pmull_p64;
+        else
+                crc_t10dif_pmull = crc_t10dif_pmull_p8;

         return crypto_register_shash(&crc_t10dif_alg);
 }
@@ -97,8 +101,10 @@ static void __exit crc_t10dif_mod_exit(void)
 {
         crypto_unregister_shash(&crc_t10dif_alg);
 }

-module_cpu_feature_match(PMULL, crc_t10dif_mod_init);
+module_cpu_feature_match(ASIMD, crc_t10dif_mod_init);
 module_exit(crc_t10dif_mod_exit);

 MODULE_AUTHOR("Ard Biesheuvel ");
 MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("crct10dif");
+MODULE_ALIAS_CRYPTO("crct10dif-arm64-ce");
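A side note on the commit message above: the p8 fallback works because a
64x64-bit carryless (polynomial) multiplication can be assembled from 8x8-bit
partial products, and the byte-sized PMULL variant is part of baseline AArch64
Advanced SIMD while the 64-bit variant requires the optional Crypto
Extensions. The following is a plain C reference model of that decomposition,
written as a scalar schoolbook multiply for clarity; it is not the
permute-and-fold NEON sequence borrowed from the GHASH driver, and all names
are illustrative.

#include <stdint.h>
#include <stdio.h>

/* 8x8 -> 16 bit carryless multiply: the primitive the baseline
 * byte-wise PMULL instruction provides. */
static uint16_t clmul_8x8(uint8_t a, uint8_t b)
{
        uint16_t r = 0;

        for (int i = 0; i < 8; i++)
                if (b & (1u << i))
                        r ^= (uint16_t)a << i;
        return r;
}

/* 64x64 -> 128 bit carryless multiply built from 8x8 partial products,
 * each XORed into the result at the byte offset i + j of its operands. */
static void clmul_64x64(uint64_t a, uint64_t b, uint64_t res[2])
{
        res[0] = 0;
        res[1] = 0;

        for (int i = 0; i < 8; i++) {
                for (int j = 0; j < 8; j++) {
                        uint16_t p = clmul_8x8((uint8_t)(a >> (8 * i)),
                                               (uint8_t)(b >> (8 * j)));
                        int shift = 8 * (i + j);

                        if (shift < 64) {
                                res[0] ^= (uint64_t)p << shift;
                                if (shift > 48) /* straddles the 64-bit split */
                                        res[1] ^= (uint64_t)p >> (64 - shift);
                        } else {
                                res[1] ^= (uint64_t)p << (shift - 64);
                        }
                }
        }
}

int main(void)
{
        uint64_t r[2];

        /* 0x18bb7 is the CRC-T10DIF generator polynomial, used here only
         * to have something recognizable to multiply by. */
        clmul_64x64(0x123456789abcdef0ULL, 0x18bb7ULL, r);
        printf("%016llx%016llx\n", (unsigned long long)r[1],
               (unsigned long long)r[0]);
        return 0;
}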