From patchwork Mon Aug 27 11:02:42 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10576945
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org
Cc: will.deacon@arm.com, catalin.marinas@arm.com, herbert@gondor.apana.org.au,
    ebiggers@google.com, suzuki.poulose@arm.com, linux-kernel@vger.kernel.org,
    Ard Biesheuvel
Subject: [PATCH 1/4] lib/crc32: make core crc32() routines weak so they can be overridden
Date: Mon, 27 Aug 2018 13:02:42 +0200
Message-Id: <20180827110245.14812-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20180827110245.14812-1-ard.biesheuvel@linaro.org>
References: <20180827110245.14812-1-ard.biesheuvel@linaro.org>
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Allow architectures to drop in accelerated CRC32
routines by making the crc32_le/__crc32c_le entry points weak, and
exposing non-weak aliases for them that may be used by the accelerated
versions as fallbacks in case the instructions they rely upon are not
available.

Signed-off-by: Ard Biesheuvel
Acked-by: Herbert Xu
---
 lib/crc32.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/lib/crc32.c b/lib/crc32.c
index a6c9afafc8c8..45b1d67a1767 100644
--- a/lib/crc32.c
+++ b/lib/crc32.c
@@ -183,21 +183,21 @@ static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p,
 }
 
 #if CRC_LE_BITS == 1
-u32 __pure crc32_le(u32 crc, unsigned char const *p, size_t len)
+u32 __pure __weak crc32_le(u32 crc, unsigned char const *p, size_t len)
 {
 	return crc32_le_generic(crc, p, len, NULL, CRC32_POLY_LE);
 }
-u32 __pure __crc32c_le(u32 crc, unsigned char const *p, size_t len)
+u32 __pure __weak __crc32c_le(u32 crc, unsigned char const *p, size_t len)
 {
 	return crc32_le_generic(crc, p, len, NULL, CRC32C_POLY_LE);
 }
 #else
-u32 __pure crc32_le(u32 crc, unsigned char const *p, size_t len)
+u32 __pure __weak crc32_le(u32 crc, unsigned char const *p, size_t len)
 {
 	return crc32_le_generic(crc, p, len,
 			(const u32 (*)[256])crc32table_le, CRC32_POLY_LE);
 }
-u32 __pure __crc32c_le(u32 crc, unsigned char const *p, size_t len)
+u32 __pure __weak __crc32c_le(u32 crc, unsigned char const *p, size_t len)
 {
 	return crc32_le_generic(crc, p, len,
 			(const u32 (*)[256])crc32ctable_le, CRC32C_POLY_LE);
@@ -206,6 +206,9 @@ u32 __pure __crc32c_le(u32 crc, unsigned char const *p, size_t len)
 EXPORT_SYMBOL(crc32_le);
 EXPORT_SYMBOL(__crc32c_le);
 
+u32 crc32_le_base(u32, unsigned char const *, size_t) __alias(crc32_le);
+u32 __crc32c_le_base(u32, unsigned char const *, size_t) __alias(__crc32c_le);
+
 /*
  * This multiplies the polynomials x and y modulo the given modulus.
  * This follows the "little-endian" CRC convention that the lsbit
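For readers unfamiliar with the weak-symbol/alias pattern the patch relies
on, here is a minimal user-space sketch using the raw GCC attributes that
the kernel's __weak and __alias macros expand to. The have_fast_crc32()
feature check and the arch_crc32_le() wrapper are hypothetical names for
illustration; in a real architecture port the strong override would itself
be named crc32_le and live in a separate arch-specific object file.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Generic bit-at-a-time CRC32 (little-endian convention), standing in
 * for crc32_le_generic(). Marked weak, so a strong definition in
 * another object file silently replaces it at link time. */
__attribute__((weak))
uint32_t crc32_le(uint32_t crc, unsigned char const *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320u : 0);
	}
	return crc;
}

/* Non-weak alias: always resolves to the generic routine above, even
 * when crc32_le itself has been overridden elsewhere. This is what the
 * patch's crc32_le_base/__crc32c_le_base aliases provide. */
uint32_t crc32_le_base(uint32_t, unsigned char const *, size_t)
	__attribute__((alias("crc32_le")));

/* Hypothetical CPU-feature check an accelerated version might use. */
static int have_fast_crc32(void)
{
	return 0; /* pretend the CRC instructions are absent */
}

/* Sketch of an architecture override: try the fast path, fall back to
 * the generic code via the non-weak alias when the instructions the
 * fast path relies upon are unavailable. */
uint32_t arch_crc32_le(uint32_t crc, unsigned char const *p, size_t len)
{
	if (!have_fast_crc32())
		return crc32_le_base(crc, p, len); /* generic fallback */
	/* ... hardware-accelerated path would go here ... */
	return crc32_le_base(crc, p, len);
}
```

The point of exposing the non-weak _base aliases is exactly this fallback:
an accelerated crc32_le cannot call the generic crc32_le by name (it would
recurse into itself), so it calls the alias instead.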