From patchwork Mon Oct 8 21:15:52 2018
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 10631463
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: jason@zx2c4.com, herbert@gondor.apana.org.au, arnd@arndb.de,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>, ebiggers@google.com,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path
Date: Mon, 8 Oct 2018 23:15:52 +0200
Message-Id: <20181008211554.5355-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20181008211554.5355-1-ard.biesheuvel@linaro.org>
References: <20181008211554.5355-1-ard.biesheuvel@linaro.org>

On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
because the ordinary load/store instructions (ldr, ldrh, ldrb) can
tolerate any misalignment of the memory address. However, load/store
double and load/store multiple instructions (ldrd, ldm) may still only
be used on memory addresses that are 32-bit aligned, and so we have to
use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we
may end up with a severe performance hit due to alignment traps that
require fixups by the kernel.

Fortunately, the get_unaligned() accessors do the right thing: when
building for ARMv6 or later, the compiler will emit unaligned accesses
using the ordinary load/store instructions (but avoid the ones that
require 32-bit alignment). When building for older ARM, those accessors
will emit the appropriate sequence of ldrb/mov/orr instructions. And on
architectures that can truly tolerate any kind of misalignment, the
get_unaligned() accessors resolve to the leXX_to_cpup accessors that
operate on aligned addresses.

So switch to the unaligned accessors for the aligned fast path. This
will create the exact same code on architectures that can really
tolerate any kind of misalignment, and generate code for ARMv6+ that
avoids load/store instructions that trigger alignment faults.
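To illustrate the difference (this sketch is not part of the patch, and
get_unaligned_demo() is a hypothetical memcpy-based stand-in for the
kernel's real get_unaligned() from <asm/unaligned.h>), consider how the
two access patterns compile:

#include <string.h>

/* Hypothetical stand-in for get_unaligned(): a memcpy of
 * sizeof(*ptr) bytes, which the compiler is free to lower to whatever
 * load instructions are safe for the target's alignment rules
 * (plain ldr on ARMv6+, ldrb/orr sequences on older ARM). */
#define get_unaligned_demo(ptr)					\
	({							\
		__typeof__(*(ptr)) __val;			\
		memcpy(&__val, (ptr), sizeof(__val));		\
		__val;						\
	})

/* Direct dereference: the compiler may assume natural alignment and
 * emit ldrd/ldm on ARM, which trap on misaligned addresses. */
unsigned long load_direct(const void *p)
{
	return *(const unsigned long *)p;
}

/* Via the accessor: misalignment-safe on every configuration. */
unsigned long load_via_accessor(const void *p)
{
	const unsigned long *q = p;

	return get_unaligned_demo(q);
}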
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 crypto/memneq.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/crypto/memneq.c b/crypto/memneq.c
index afed1bd16aee..0f46a6150f22 100644
--- a/crypto/memneq.c
+++ b/crypto/memneq.c
@@ -60,6 +60,7 @@
  */
 
 #include <crypto/algapi.h>
+#include <asm/unaligned.h>
 
 #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
 
@@ -71,7 +72,10 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size)
 
 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
 	while (size >= sizeof(unsigned long)) {
-		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
+		unsigned long const *p = a;
+		unsigned long const *q = b;
+
+		neq |= get_unaligned(p) ^ get_unaligned(q);
 		OPTIMIZER_HIDE_VAR(neq);
 		a += sizeof(unsigned long);
 		b += sizeof(unsigned long);
@@ -95,18 +99,24 @@ static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
 
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	if (sizeof(unsigned long) == 8) {
-		neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b);
+		unsigned long const *p = a;
+		unsigned long const *q = b;
+
+		neq |= get_unaligned(p++) ^ get_unaligned(q++);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
+		neq |= get_unaligned(p) ^ get_unaligned(q);
 		OPTIMIZER_HIDE_VAR(neq);
 	} else if (sizeof(unsigned int) == 4) {
-		neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b);
+		unsigned int const *p = a;
+		unsigned int const *q = b;
+
+		neq |= get_unaligned(p++) ^ get_unaligned(q++);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4);
+		neq |= get_unaligned(p++) ^ get_unaligned(q++);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+8) ^ *(unsigned int *)(b+8);
+		neq |= get_unaligned(p++) ^ get_unaligned(q++);
 		OPTIMIZER_HIDE_VAR(neq);
-		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
+		neq |= get_unaligned(p) ^ get_unaligned(q);
 		OPTIMIZER_HIDE_VAR(neq);
 	} else
 #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
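
For context, a sketch of the kind of caller this helper serves.
crypto_memneq(), declared in <crypto/algapi.h>, wraps __crypto_memneq()
and returns zero iff the two buffers are equal while touching every
byte; the check_tag() function and its parameters below are made up for
illustration:

#include <crypto/algapi.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical authentication-tag check: rejects a forged MAC without
 * leaking, via early-exit timing, how many leading bytes matched. */
static int check_tag(const u8 *computed, const u8 *received,
		     unsigned int len)
{
	if (crypto_memneq(computed, received, len))
		return -EBADMSG;	/* mismatch */

	return 0;			/* tags are equal */
}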