From patchwork Sat Dec 2 11:15:14 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10088585
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel
Date: Sat, 2 Dec 2017 11:15:14 +0000
Subject: Re: [PATCH 0/5] crypto: arm64 - disable NEON across scatterwalk API calls
To: Peter Zijlstra
Cc: "linux-crypto@vger.kernel.org", Herbert Xu,
    "linux-arm-kernel@lists.infradead.org", Dave Martin,
    Russell King - ARM Linux, Sebastian Andrzej Siewior, Mark Rutland,
    linux-rt-users@vger.kernel.org, Catalin Marinas, Will Deacon,
    Steven Rostedt, Thomas Gleixner
References: <20171201211927.24653-1-ard.biesheuvel@linaro.org>
 <20171202090107.GT3326@worktop>
X-Mailing-List: linux-crypto@vger.kernel.org

On 2 December 2017 at 09:11, Ard Biesheuvel wrote:
> On 2 December 2017 at 09:01, Peter Zijlstra wrote:
>> On Fri, Dec 01, 2017 at 09:19:22PM +0000, Ard Biesheuvel wrote:
>>> Note that the remaining crypto drivers simply
>>> operate on fixed buffers, so while the RT crowd may still feel the need
>>> to disable those (and the ones below as well, perhaps), they don't call
>>> back into the crypto layer like the ones updated by this series, and so
>>> there's no room for improvement there AFAICT.
>>
>> Do these other drivers process all the blocks fed to them in one go
>> under a single NEON section, or do they do a single fixed block per
>> NEON invocation?
>
> They consume the entire input in a single go, yes. But making it more
> granular than that is going to hurt performance, unless we introduce
> some kind of kernel_neon_yield(), which does an end+begin but only if
> the task is being scheduled out.
>
> For example, the SHA256 code keeps 256 bytes of round constants in NEON
> registers, and reloading those from memory for each 64-byte block of
> input is going to be noticeable. The same applies to the AES code
> (although the numbers are slightly different).

Something like below should do the trick, I think (apologies for the
patch soup). I.e., check TIF_NEED_RESCHED at a point where only very few
NEON registers are live, and preserve/restore the live registers across
calls to kernel_neon_end + kernel_neon_begin. Would that work for RT?

diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index 679c6c002f4f..4f12038574f3 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -77,6 +77,10 @@
  *			  int blocks)
  */
 ENTRY(sha2_ce_transform)
+	stp	x29, x30, [sp, #-48]!
+	mov	x29, sp
+
+restart:
 	/* load round constants */
 	adr	x8, .Lsha2_rcon
 	ld1	{ v0.4s- v3.4s}, [x8], #64
@@ -129,14 +133,17 @@ CPU_LE(	rev32	v19.16b, v19.16b	)
 	add	dgbv.4s, dgbv.4s, dg1v.4s

 	/* handled all input blocks? */
-	cbnz	w2, 0b
+	cbz	w2, 2f
+
+	tif_need_resched 4f, 5
+	b	0b

 	/*
 	 * Final block: add padding and total bit count.
 	 * Skip if the input size was not a round multiple of the block size,
 	 * the padding is handled by the C code in that case.
 	 */
-	cbz	x4, 3f
+2:	cbz	x4, 3f
 	ldr_l	w4, sha256_ce_offsetof_count, x4
 	ldr	x4, [x0, x4]
 	movi	v17.2d, #0
@@ -151,5 +158,15 @@ CPU_LE(	rev32	v19.16b, v19.16b	)

 	/* store new state */
 3:	st1	{dgav.4s, dgbv.4s}, [x0]
+	ldp	x29, x30, [sp], #48
 	ret
+
+4:	st1	{dgav.4s, dgbv.4s}, [x0]
+	stp	x0, x1, [sp, #16]
+	stp	x2, x4, [sp, #32]
+	bl	kernel_neon_end
+	bl	kernel_neon_begin
+	ldp	x0, x1, [sp, #16]
+	ldp	x2, x4, [sp, #32]
+	b	restart
 ENDPROC(sha2_ce_transform)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index aef72d886677..e3e7e15ebefd 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -512,4 +512,15 @@ alternative_else_nop_endif
 #endif
 .endm

+/*
+ * Check TIF_NEED_RESCHED flag from assembler (for kernel mode NEON)
+ */
+	.macro	tif_need_resched, lbl:req, regnum:req
+#ifdef CONFIG_PREEMPT
+	get_thread_info	x\regnum
+	ldr	w\regnum, [x\regnum, #TSK_TI_FLAGS]	// get flags
+	tbnz	w\regnum, #TIF_NEED_RESCHED, \lbl	// needs rescheduling?
+#endif
+	.endm
+
 #endif /* __ASM_ASSEMBLER_H */
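[Editor's note: the control flow the patch above adds to sha2_ce_transform can be sketched in plain C. This is a user-space model only, not kernel code: neon_begin(), neon_end(), and the need_resched flag below are hypothetical stand-ins for kernel_neon_begin(), kernel_neon_end(), and the TIF_NEED_RESCHED thread flag, and the 64-byte block processing is elided.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for kernel_neon_begin()/kernel_neon_end(). */
static int  neon_sections;  /* begin/end pairs used, i.e. yields + 1 */
static bool need_resched;   /* models the TIF_NEED_RESCHED thread flag */

static void neon_begin(void) { neon_sections++; }
static void neon_end(void)   { need_resched = false; /* "scheduled out" here */ }

/*
 * Model of the patched sha2_ce_transform loop: process the input block by
 * block, but between blocks check the resched flag (tif_need_resched); if
 * it is set, spill the live state, close and reopen the NEON section so
 * preemption can happen, and restart (reloading the round constants).
 */
static int sha256_blocks(int blocks)
{
    neon_sections = 0;
    neon_begin();
    while (blocks > 0) {
        /* ... process one 64-byte block in NEON registers ... */
        blocks--;
        if (blocks && need_resched) {
            /* st1/stp: store the partial digest and live GPRs */
            neon_end();     /* the scheduler may run here */
            neon_begin();   /* back at "restart:": reload round constants */
            /* ld1/ldp: restore the spilled state */
        }
    }
    /* final block: padding, bit count, store digest */
    neon_end();
    return neon_sections;
}
```

The point of the structure is visible in the model: the end+begin pair (and the reload of the 256 bytes of round constants it forces) is only paid when a reschedule is actually pending, not once per 64-byte block.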