From patchwork Tue Jul 18 09:19:11 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9847293
X-Patchwork-Delegate: kvalo@adurom.com
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au
Cc: ebiggers@google.com, davem@davemloft.net, dm-devel@redhat.com,
    johannes@sipsolutions.net, linux-wireless@vger.kernel.org,
    agk@redhat.com, snitzer@redhat.com, Ard Biesheuvel
Subject: [PATCH v2 1/2] crypto/algapi - use separate dst and src operands
 for __crypto_xor()
Date: Tue, 18 Jul 2017 10:19:11 +0100
Message-Id: <20170718091912.14104-2-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170718091912.14104-1-ard.biesheuvel@linaro.org>
References: <20170718091912.14104-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-wireless@vger.kernel.org

In preparation for introducing crypto_xor_cpy(), which will use separate
operands for input and output, modify the __crypto_xor() implementation,
which it will share with
the existing crypto_xor(), which provides the actual functionality when
not using the inline version.

Signed-off-by: Ard Biesheuvel
---
 crypto/algapi.c         | 25 ++++++++++++--------
 include/crypto/algapi.h |  4 ++--
 2 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index e4cc7615a139..aa699ff6c876 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -975,13 +975,15 @@ void crypto_inc(u8 *a, unsigned int size)
 }
 EXPORT_SYMBOL_GPL(crypto_inc);
 
-void __crypto_xor(u8 *dst, const u8 *src, unsigned int len)
+void __crypto_xor(u8 *dst, const u8 *src1, const u8 *src2, unsigned int len)
 {
 	int relalign = 0;
 
 	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
 		int size = sizeof(unsigned long);
-		int d = ((unsigned long)dst ^ (unsigned long)src) & (size - 1);
+		int d = (((unsigned long)dst ^ (unsigned long)src1) |
+			 ((unsigned long)dst ^ (unsigned long)src2)) &
+			(size - 1);
 
 		relalign = d ? 1 << __ffs(d) : size;
 
@@ -992,34 +994,37 @@ void __crypto_xor(u8 *dst, const u8 *src, unsigned int len)
 		 * process the remainder of the input using optimal strides.
 		 */
 		while (((unsigned long)dst & (relalign - 1)) && len > 0) {
-			*dst++ ^= *src++;
+			*dst++ = *src1++ ^ *src2++;
 			len--;
 		}
 	}
 
 	while (IS_ENABLED(CONFIG_64BIT) && len >= 8 && !(relalign & 7)) {
-		*(u64 *)dst ^= *(u64 *)src;
+		*(u64 *)dst = *(u64 *)src1 ^ *(u64 *)src2;
 		dst += 8;
-		src += 8;
+		src1 += 8;
+		src2 += 8;
 		len -= 8;
 	}
 
 	while (len >= 4 && !(relalign & 3)) {
-		*(u32 *)dst ^= *(u32 *)src;
+		*(u32 *)dst = *(u32 *)src1 ^ *(u32 *)src2;
 		dst += 4;
-		src += 4;
+		src1 += 4;
+		src2 += 4;
 		len -= 4;
 	}
 
 	while (len >= 2 && !(relalign & 1)) {
-		*(u16 *)dst ^= *(u16 *)src;
+		*(u16 *)dst = *(u16 *)src1 ^ *(u16 *)src2;
 		dst += 2;
-		src += 2;
+		src1 += 2;
+		src2 += 2;
 		len -= 2;
 	}
 
 	while (len--)
-		*dst++ ^= *src++;
+		*dst++ = *src1++ ^ *src2++;
 }
 EXPORT_SYMBOL_GPL(__crypto_xor);
 
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 436c4c2683c7..fd547f946bf8 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -192,7 +192,7 @@ static inline unsigned int crypto_queue_len(struct crypto_queue *queue)
 }
 
 void crypto_inc(u8 *a, unsigned int size);
-void __crypto_xor(u8 *dst, const u8 *src, unsigned int size);
+void __crypto_xor(u8 *dst, const u8 *src1, const u8 *src2, unsigned int size);
 
 static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
 {
@@ -207,7 +207,7 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
 			size -= sizeof(unsigned long);
 		}
 	} else {
-		__crypto_xor(dst, src, size);
+		__crypto_xor(dst, dst, src, size);
 	}
 }