From patchwork Mon Jul 13 16:51:39 2015
X-Patchwork-Submitter: Leonidas Da Silva Barbosa
X-Patchwork-Id: 6780611
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Date: Mon, 13 Jul 2015 13:51:39 -0300
From: "Leonidas S. Barbosa"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org, Herbert Xu, pfsmorigo@linux.vnet.ibm.com, Michael Ellerman
Subject: [PATCH 2/2] Adding enable_kernel_vsx() to access VSX instructions
Message-ID: <20150713165136.GA6719@bluepex.com>

The vmx-crypto driver makes use of some VSX instructions, which are only
available when VSX is enabled. When VSX is not enabled, vmx-crypto fails
with a VSX exception.
In order to fix this, enable_kernel_vsx() is added to turn on VSX
instructions for vmx-crypto.

Signed-off-by: Leonidas S. Barbosa
---
 drivers/crypto/vmx/aes.c     | 3 +++
 drivers/crypto/vmx/aes_cbc.c | 3 +++
 drivers/crypto/vmx/aes_ctr.c | 3 +++
 drivers/crypto/vmx/ghash.c   | 4 ++++
 4 files changed, 13 insertions(+)

diff --git a/drivers/crypto/vmx/aes.c b/drivers/crypto/vmx/aes.c
index e79e567..263af70 100644
--- a/drivers/crypto/vmx/aes.c
+++ b/drivers/crypto/vmx/aes.c
@@ -84,6 +84,7 @@ static int p8_aes_setkey(struct crypto_tfm *tfm, const u8 *key,
 	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
 	ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
 	pagefault_enable();
@@ -103,6 +104,7 @@ static void p8_aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	aes_p8_encrypt(src, dst, &ctx->enc_key);
 	pagefault_enable();
 	preempt_enable();
@@ -119,6 +121,7 @@ static void p8_aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	aes_p8_decrypt(src, dst, &ctx->dec_key);
 	pagefault_enable();
 	preempt_enable();
diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
index 7299995..0b8fe2e 100644
--- a/drivers/crypto/vmx/aes_cbc.c
+++ b/drivers/crypto/vmx/aes_cbc.c
@@ -85,6 +85,7 @@ static int p8_aes_cbc_setkey(struct crypto_tfm *tfm, const u8 *key,
 	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
 	ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
 	pagefault_enable();
@@ -115,6 +116,7 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
 		preempt_disable();
 		pagefault_disable();
 		enable_kernel_altivec();
+		enable_kernel_vsx();
 
 		blkcipher_walk_init(&walk, dst, src, nbytes);
 		ret = blkcipher_walk_virt(desc, &walk);
@@ -155,6 +157,7 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
 		preempt_disable();
 		pagefault_disable();
 		enable_kernel_altivec();
+		enable_kernel_vsx();
 
 		blkcipher_walk_init(&walk, dst, src, nbytes);
 		ret = blkcipher_walk_virt(desc, &walk);
diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c
index 7adae42..1e754ae 100644
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -82,6 +82,7 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key,
 
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
 	pagefault_enable();
 
@@ -100,6 +101,7 @@ static void p8_aes_ctr_final(struct p8_aes_ctr_ctx *ctx,
 
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	aes_p8_encrypt(ctrblk, keystream, &ctx->enc_key);
 	pagefault_enable();
 
@@ -131,6 +133,7 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
 	while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
 		pagefault_disable();
 		enable_kernel_altivec();
+		enable_kernel_vsx();
 		aes_p8_ctr32_encrypt_blocks(walk.src.virt.addr,
 					    walk.dst.virt.addr,
 					    (nbytes &
diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
index b5e2900..2183a2e 100644
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -119,6 +119,7 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
 	preempt_disable();
 	pagefault_disable();
 	enable_kernel_altivec();
+	enable_kernel_vsx();
 	enable_kernel_fp();
 	gcm_init_p8(ctx->htable, (const u64 *) key);
 	pagefault_enable();
@@ -149,6 +150,7 @@ static int p8_ghash_update(struct shash_desc *desc,
 			preempt_disable();
 			pagefault_disable();
 			enable_kernel_altivec();
+			enable_kernel_vsx();
 			enable_kernel_fp();
 			gcm_ghash_p8(dctx->shash, ctx->htable,
 				     dctx->buffer, GHASH_DIGEST_SIZE);
@@ -163,6 +165,7 @@ static int p8_ghash_update(struct shash_desc *desc,
 			preempt_disable();
 			pagefault_disable();
 			enable_kernel_altivec();
+			enable_kernel_vsx();
 			enable_kernel_fp();
 			gcm_ghash_p8(dctx->shash, ctx->htable, src, len);
 			pagefault_enable();
@@ -193,6 +196,7 @@ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
 			preempt_disable();
 			pagefault_disable();
 			enable_kernel_altivec();
+			enable_kernel_vsx();
 			enable_kernel_fp();
 			gcm_ghash_p8(dctx->shash, ctx->htable,
 				     dctx->buffer, GHASH_DIGEST_SIZE);
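
For readers unfamiliar with the driver, below is a minimal sketch (not part
of the patch) of the call pattern this change establishes around the P8
assembly routines. The helper name p8_aes_encrypt_one_block is hypothetical
and only for illustration; the functions it calls and the headers are the
ones the driver relies on in this series, and header locations may differ
on other kernel versions.

/*
 * Minimal sketch, not part of the patch: wrap the VSX-using asm routine
 * so the vector state is usable and cannot be lost underneath us.
 */
#include <linux/preempt.h>	/* preempt_disable()/preempt_enable() */
#include <linux/uaccess.h>	/* pagefault_disable()/pagefault_enable() */
#include <asm/switch_to.h>	/* enable_kernel_altivec(), enable_kernel_vsx() */
#include "aesp8-ppc.h"		/* aes_p8_encrypt(), struct aes_key */

static void p8_aes_encrypt_one_block(const struct aes_key *enc_key,
				     u8 *dst, const u8 *src)
{
	preempt_disable();		/* keep the vector state on this CPU */
	pagefault_disable();		/* no faults while VMX/VSX regs are live */
	enable_kernel_altivec();	/* give the kernel the VMX (AltiVec) unit */
	enable_kernel_vsx();		/* ...and the VSX unit (what this patch adds) */

	aes_p8_encrypt(src, dst, enc_key);	/* asm routine that may use VSX */

	pagefault_enable();
	preempt_enable();
}

Without the enable_kernel_vsx() call, the MSR VSX bit may be clear when the
asm routine issues a VSX instruction, which is exactly the exception the
commit message describes.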