From patchwork Wed Dec 6 19:43:45 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10096933
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
	Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
	Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
	Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
	Thomas Gleixner
Subject: [PATCH v3 19/20] crypto: arm64/crct10dif-ce - yield NEON after every block of input
Date: Wed, 6 Dec 2017 19:43:45 +0000
Message-Id: <20171206194346.24393-20-ard.biesheuvel@linaro.org>
In-Reply-To: <20171206194346.24393-1-ard.biesheuvel@linaro.org>
References: <20171206194346.24393-1-ard.biesheuvel@linaro.org>
List-ID: <linux-crypto.vger.kernel.org>
Avoid excessive scheduling delays under a preemptible kernel by yielding
the NEON after every block of input.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/crct10dif-ce-core.S | 32 +++++++++++++++++---
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/crypto/crct10dif-ce-core.S b/arch/arm64/crypto/crct10dif-ce-core.S
index d5b5a8c038c8..111675f7bad5 100644
--- a/arch/arm64/crypto/crct10dif-ce-core.S
+++ b/arch/arm64/crypto/crct10dif-ce-core.S
@@ -74,13 +74,19 @@
 	.text
 	.cpu		generic+crypto
 
-	arg1_low32	.req	w0
-	arg2		.req	x1
-	arg3		.req	x2
+	arg1_low32	.req	w19
+	arg2		.req	x20
+	arg3		.req	x21
 
 	vzr		.req	v13
 
 ENTRY(crc_t10dif_pmull)
+	frame_push	3, 128
+
+	mov		arg1_low32, w0
+	mov		arg2, x1
+	mov		arg3, x2
+
 	movi		vzr.16b, #0		// init zero register
 
 	// adjust the 16-bit initial_crc value, scale it to 32 bits
@@ -175,8 +181,25 @@ CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
 	subs		arg3, arg3, #128
 
 	// check if there is another 64B in the buffer to be able to fold
-	b.ge		_fold_64_B_loop
+	b.lt		_fold_64_B_end
+
+	if_will_cond_yield_neon
+	stp		q0, q1, [sp, #48]
+	stp		q2, q3, [sp, #80]
+	stp		q4, q5, [sp, #112]
+	stp		q6, q7, [sp, #144]
+	do_cond_yield_neon
+	ldp		q0, q1, [sp, #48]
+	ldp		q2, q3, [sp, #80]
+	ldp		q4, q5, [sp, #112]
+	ldp		q6, q7, [sp, #144]
+	ldr		q10, rk3
+	movi		vzr.16b, #0		// init zero register
+	endif_yield_neon
+
+	b		_fold_64_B_loop
 
+_fold_64_B_end:
 	// at this point, the buffer pointer is pointing at the last y Bytes
 	// of the buffer the 64B of folded data is in 4 of the vector
 	// registers: v0, v1, v2, v3
@@ -304,6 +327,7 @@ _barrett:
 _cleanup:
 	// scale the result back to 16 bits
 	lsr		x0, x0, #16
+	frame_pop	3, 128
 	ret
 
 _less_than_128: