From patchwork Mon Jul 10 13:45:47 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9833043
X-Patchwork-Delegate: kvalo@adurom.com
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
	ebiggers@google.com
Cc: davem@davemloft.net, dm-devel@redhat.com, johannes@sipsolutions.net,
	linux-wireless@vger.kernel.org, agk@redhat.com, snitzer@redhat.com,
	Ard Biesheuvel
Subject: [PATCH 1/2] crypto/algapi - use separate dst and src operands for
	__crypto_xor()
Date: Mon, 10 Jul 2017 14:45:47 +0100
Message-Id: <20170710134548.20234-2-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170710134548.20234-1-ard.biesheuvel@linaro.org>
References: <20170710134548.20234-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-wireless@vger.kernel.org

In preparation for updating crypto_xor() [which is what the crypto API
exposes to other subsystems] to use separate operands for input and
output, first modify the __crypto_xor()
implementation that provides the actual functionality when not using
the inline version.

Signed-off-by: Ard Biesheuvel
---
 crypto/algapi.c         | 25 ++++++++++++--------
 include/crypto/algapi.h |  4 ++--
 2 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index e4cc7615a139..aa699ff6c876 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -975,13 +975,15 @@ void crypto_inc(u8 *a, unsigned int size)
 }
 EXPORT_SYMBOL_GPL(crypto_inc);
 
-void __crypto_xor(u8 *dst, const u8 *src, unsigned int len)
+void __crypto_xor(u8 *dst, const u8 *src1, const u8 *src2, unsigned int len)
 {
 	int relalign = 0;
 
 	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
 		int size = sizeof(unsigned long);
-		int d = ((unsigned long)dst ^ (unsigned long)src) & (size - 1);
+		int d = (((unsigned long)dst ^ (unsigned long)src1) |
+			 ((unsigned long)dst ^ (unsigned long)src2)) &
+			(size - 1);
 
 		relalign = d ? 1 << __ffs(d) : size;
 
@@ -992,34 +994,37 @@ void __crypto_xor(u8 *dst, const u8 *src, unsigned int len)
 		 * process the remainder of the input using optimal strides.
		 */
		while (((unsigned long)dst & (relalign - 1)) && len > 0) {
-			*dst++ ^= *src++;
+			*dst++ = *src1++ ^ *src2++;
 			len--;
 		}
 	}
 
 	while (IS_ENABLED(CONFIG_64BIT) && len >= 8 && !(relalign & 7)) {
-		*(u64 *)dst ^= *(u64 *)src;
+		*(u64 *)dst = *(u64 *)src1 ^ *(u64 *)src2;
 		dst += 8;
-		src += 8;
+		src1 += 8;
+		src2 += 8;
 		len -= 8;
 	}
 
 	while (len >= 4 && !(relalign & 3)) {
-		*(u32 *)dst ^= *(u32 *)src;
+		*(u32 *)dst = *(u32 *)src1 ^ *(u32 *)src2;
 		dst += 4;
-		src += 4;
+		src1 += 4;
+		src2 += 4;
 		len -= 4;
 	}
 
 	while (len >= 2 && !(relalign & 1)) {
-		*(u16 *)dst ^= *(u16 *)src;
+		*(u16 *)dst = *(u16 *)src1 ^ *(u16 *)src2;
 		dst += 2;
-		src += 2;
+		src1 += 2;
+		src2 += 2;
 		len -= 2;
 	}
 
 	while (len--)
-		*dst++ ^= *src++;
+		*dst++ = *src1++ ^ *src2++;
 }
 EXPORT_SYMBOL_GPL(__crypto_xor);
 
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 436c4c2683c7..fd547f946bf8 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -192,7 +192,7 @@ static inline unsigned int crypto_queue_len(struct crypto_queue *queue)
 }
 
 void crypto_inc(u8 *a, unsigned int size);
-void __crypto_xor(u8 *dst, const u8 *src, unsigned int size);
+void __crypto_xor(u8 *dst, const u8 *src1, const u8 *src2, unsigned int size);
 
 static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
 {
@@ -207,7 +207,7 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
 			size -= sizeof(unsigned long);
 		}
 	} else {
-		__crypto_xor(dst, src, size);
+		__crypto_xor(dst, dst, src, size);
 	}
 }