From patchwork Mon Oct  8 21:15:53 2018
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 10631433
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, arnd@arndb.de, jason@zx2c4.com,
 ebiggers@google.com, linux-arm-kernel@lists.infradead.org,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path
Date: Mon,  8 Oct 2018 23:15:53 +0200
Message-Id: <20181008211554.5355-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20181008211554.5355-1-ard.biesheuvel@linaro.org>
References: <20181008211554.5355-1-ard.biesheuvel@linaro.org>

On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
because the ordinary load/store instructions (ldr, ldrh, ldrb) can
tolerate any misalignment of the memory address. However, load/store
double and load/store multiple instructions (ldrd, ldm) may still only
be used on memory addresses that are 32-bit aligned, and so we have to
use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we
may end up with a severe performance hit due to alignment traps that
require fixups by the kernel.

Fortunately, the get_unaligned() accessors do the right thing: when
building for ARMv6 or later, the compiler will emit unaligned accesses
using the ordinary load/store instructions (but avoid the ones that
require 32-bit alignment). When building for older ARM, those accessors
will emit the appropriate sequence of ldrb/mov/orr instructions. And on
architectures that can truly tolerate any kind of misalignment, the
get_unaligned() accessors resolve to the leXX_to_cpup accessors that
operate on aligned addresses.
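To illustrate (a sketch only, not part of this patch, and
sketch_get_unaligned_be32() is a made-up name): on CPUs without
hardware support for misaligned loads, a get_unaligned_be32() style
accessor amounts to assembling the word from single-byte loads,
roughly like this user-space equivalent:

#include <stdint.h>

/*
 * Hypothetical user-space equivalent of the byte-wise fallback:
 * only single-byte loads are issued, so any alignment is safe.
 * On ARMv6+ the real accessor can use a plain ldr instead.
 */
static inline uint32_t sketch_get_unaligned_be32(const void *p)
{
	const uint8_t *b = p;

	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] <<  8) |  (uint32_t)b[3];
}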
So switch to the unaligned accessors for the aligned fast path. This
will create the exact same code on architectures that can really
tolerate any kind of misalignment, and generate code for ARMv6+ that
avoids load/store instructions that trigger alignment faults.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 crypto/algapi.c         |  7 +++----
 include/crypto/algapi.h | 11 +++++++++--
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 2545c5f89c4c..52ce3c5a0499 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -988,11 +988,10 @@ void crypto_inc(u8 *a, unsigned int size)
 	__be32 *b = (__be32 *)(a + size);
 	u32 c;
 
-	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
-	    IS_ALIGNED((unsigned long)b, __alignof__(*b)))
+	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
 		for (; size >= 4; size -= 4) {
-			c = be32_to_cpu(*--b) + 1;
-			*b = cpu_to_be32(c);
+			c = get_unaligned_be32(--b) + 1;
+			put_unaligned_be32(c, b);
 			if (likely(c))
 				return;
 		}
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 4a5ad10e75f0..86267c232f34 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -17,6 +17,8 @@
 #include <linux/kernel.h>
 #include <linux/skbuff.h>
 
+#include <asm/unaligned.h>
+
 /*
  * Maximum values for blocksize and alignmask, used to allocate
  * static buffers that are big enough for any combination of
@@ -212,7 +214,9 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
 		unsigned long *s = (unsigned long *)src;
 
 		while (size > 0) {
-			*d++ ^= *s++;
+			put_unaligned(get_unaligned(d) ^ get_unaligned(s), d);
+			d++;
+			s++;
 			size -= sizeof(unsigned long);
 		}
 	} else {
@@ -231,7 +235,10 @@ static inline void crypto_xor_cpy(u8 *dst, const u8 *src1, const u8 *src2,
 		unsigned long *s2 = (unsigned long *)src2;
 
 		while (size > 0) {
-			*d++ = *s1++ ^ *s2++;
+			put_unaligned(get_unaligned(s1) ^ get_unaligned(s2), d);
+			d++;
+			s1++;
+			s2++;
 			size -= sizeof(unsigned long);
 		}
 	} else {
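For reference, an illustrative user-space sketch (not part of the
patch; xor_words(), load_word() and store_word() are made-up names):
the new crypto_xor() fast path follows the same load/xor/store word
pattern, which can be mimicked with memcpy-based accessors that the
compiler turns into plain loads and stores wherever the target
tolerates the alignment.

#include <stdio.h>
#include <string.h>

static unsigned long load_word(const void *p)
{
	unsigned long v;

	/* memcpy of a known small size compiles to a single load
	 * where the target allows it, byte accesses otherwise. */
	memcpy(&v, p, sizeof(v));
	return v;
}

static void store_word(void *p, unsigned long v)
{
	memcpy(p, &v, sizeof(v));
}

/* size must be a multiple of sizeof(unsigned long) */
static void xor_words(unsigned char *dst, const unsigned char *src,
		      size_t size)
{
	while (size > 0) {
		store_word(dst, load_word(dst) ^ load_word(src));
		dst += sizeof(unsigned long);
		src += sizeof(unsigned long);
		size -= sizeof(unsigned long);
	}
}

int main(void)
{
	unsigned char a[16] = "abcdefghijklmno";
	unsigned char b[16] = "ABCDEFGHIJKLMNO";

	xor_words(a, b, sizeof(a));
	printf("%02x %02x\n", a[0], a[1]);	/* 'a' ^ 'A' == 0x20 */
	return 0;
}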