From patchwork Tue Feb 14 10:04:00 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9571609
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au
Cc: Ard Biesheuvel, "Jason A . Donenfeld"
Subject: [PATCH 2/2] crypto: algapi - annotate expected branch behavior in crypto_inc()
Date: Tue, 14 Feb 2017 10:04:00 +0000
Message-Id: <1487066640-17886-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1487066640-17886-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1487066640-17886-1-git-send-email-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

To prevent unnecessary branching, mark the exit condition of the primary
loop as likely(), given that a carry out of a 32-bit counter word occurs
very rarely.

On arm64, GCC emits the resulting code as

  9a8:   cmp     w1, #0x3
  9ac:   add     x3, x0, w1, uxtw
  9b0:   b.ls    9e0
  9b4:   ldr     w2, [x3,#-4]!
  9b8:   rev     w2, w2
  9bc:   add     w2, w2, #0x1
  9c0:   rev     w4, w2
  9c4:   str     w4, [x3]
  9c8:   cbz     w2, 9d0
  9cc:   ret

where the two remaining branch conditions (one for size < 4 and one for
the carry) are statically predicted as not taken, resulting in optimal
execution in the vast majority of cases.

Also, replace the open-coded alignment test with IS_ALIGNED().

Cc: Jason A. Donenfeld
Signed-off-by: Ard Biesheuvel
---
 crypto/algapi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 6b52e8f0b95f..9eed4ef9c971 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -963,11 +963,11 @@ void crypto_inc(u8 *a, unsigned int size)
 	u32 c;
 
 	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
-	    !((unsigned long)b & (__alignof__(*b) - 1)))
+	    IS_ALIGNED((unsigned long)b, __alignof__(*b)))
 		for (; size >= 4; size -= 4) {
 			c = be32_to_cpu(*--b) + 1;
 			*b = cpu_to_be32(c);
-			if (c)
+			if (likely(c))
 				return;
 		}
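For readers outside the kernel tree, the loop being annotated can be modeled in plain userspace C. This is an illustrative sketch, not the kernel implementation: ctr_inc(), be32_load() and be32_store() are names made up here, and likely() is reimplemented directly on GCC's __builtin_expect(), which is the mechanism the kernel macro expands to.

```c
#include <stdint.h>

/* Sketch of the kernel's likely() hint: tell the compiler the condition
 * is expected to be true, so that path is laid out as the fall-through. */
#define likely(x) __builtin_expect(!!(x), 1)

/* Hypothetical helper: load a 32-bit big-endian word from memory. */
static uint32_t be32_load(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Hypothetical helper: store a 32-bit word back in big-endian order. */
static void be32_store(uint8_t *p, uint32_t v)
{
	p[0] = (uint8_t)(v >> 24);
	p[1] = (uint8_t)(v >> 16);
	p[2] = (uint8_t)(v >> 8);
	p[3] = (uint8_t)v;
}

/* Userspace model of the crypto_inc() word loop: increment a big-endian
 * counter of `size` bytes, 32 bits at a time, starting from the least
 * significant word. The likely(c) exit mirrors the patch: a carry out of
 * a 32-bit word (c == 0) is the rare case. */
void ctr_inc(uint8_t *a, unsigned int size)
{
	uint8_t *b = a + size;
	uint32_t c;

	for (; size >= 4; size -= 4) {
		b -= 4;
		c = be32_load(b) + 1;
		be32_store(b, c);
		if (likely(c))	/* no carry out of this word: done */
			return;
	}

	/* Byte-at-a-time tail for sizes not a multiple of 4. */
	while (size--)
		if (++a[size])
			break;
}
```

Incrementing an 8-byte counter whose low word is 0xffffffff carries into the high word, exercising both the common exit and the rare carry path.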
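The second change swaps the open-coded mask test for IS_ALIGNED(), which is purely cosmetic: the two are equivalent for power-of-two alignments. The sketch below uses a simplified rendition of the kernel macro (the real one also casts the alignment to typeof(x)); aligned_open_coded() is a name invented here for the old form.

```c
/* Simplified form of the kernel's IS_ALIGNED(): true when x is a
 * multiple of the power-of-two alignment a. */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* The open-coded test the patch replaces:
 *	!((unsigned long)b & (__alignof__(*b) - 1)) */
static int aligned_open_coded(unsigned long p, unsigned long align)
{
	return !(p & (align - 1));
}
```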