From patchwork Fri Jul 26 06:59:14 2013
X-Patchwork-Submitter: Lokesh Vutla <lokeshvutla@ti.com>
X-Patchwork-Id: 2833809
From: Lokesh Vutla <lokeshvutla@ti.com>
Subject: [PATCH 1/4] crypto: omap-sham: Add SHA384 and SHA512 Support
Date: Fri, 26 Jul 2013 12:29:14 +0530
Message-ID: <1374821957-30141-2-git-send-email-lokeshvutla@ti.com>
In-Reply-To: <1374821957-30141-1-git-send-email-lokeshvutla@ti.com>
References: <1374821957-30141-1-git-send-email-lokeshvutla@ti.com>
X-Mailer: git-send-email 1.7.9.5
Cc: joelf@ti.com, rnayak@ti.com, nsekhar@ti.com, herbert@gondor.hengli.com.au, lokeshvutla@ti.com, davem@davemloft.net, linux-arm-kernel@lists.infradead.org

Adding support for SHA384 and SHA512 in addition to the MD5, SHA1,
SHA224 and SHA256 algorithms that the omap sha module
already supports. In order to add the support:
- Removed the hard-coded register offsets; the offsets are now passed
  from pdata.
- Updated the flag offsets so that they can also be used for SHA384
  and SHA512.
- Added the algo info for the new hashes.

Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
---
 drivers/crypto/Kconfig     |  11 +-
 drivers/crypto/omap-sham.c | 245 ++++++++++++++++++++++++++++++++++++--------
 2 files changed, 209 insertions(+), 47 deletions(-)

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 8ff7c23..62fb673 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -242,13 +242,16 @@ config CRYPTO_DEV_PPC4XX
	  This option allows you to have support for AMCC crypto acceleration.
 
 config CRYPTO_DEV_OMAP_SHAM
-	tristate "Support for OMAP SHA1/MD5 hw accelerator"
-	depends on ARCH_OMAP2 || ARCH_OMAP3
+	tristate "Support for OMAP MD5/SHA1/SHA2 hw accelerator"
+	depends on ARCH_OMAP2PLUS
 	select CRYPTO_SHA1
 	select CRYPTO_MD5
+	select CRYPTO_SHA256
+	select CRYPTO_SHA512
+	select CRYPTO_HMAC
 	help
-	  OMAP processors have SHA1/MD5 hw accelerator. Select this if you
-	  want to use the OMAP module for SHA1/MD5 algorithms.
+	  OMAP processors have MD5/SHA1/SHA2 hw accelerator. Select this if you
+	  want to use the OMAP module for MD5/SHA1/SHA2 algorithms.
 
 config CRYPTO_DEV_OMAP_AES
 	tristate "Support for OMAP AES hw engine"
diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 4bb6765..f73b1e0 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -44,7 +44,6 @@
 #include
 #include
 
-#define SHA1_MD5_BLOCK_SIZE	SHA1_BLOCK_SIZE
 #define MD5_DIGEST_SIZE		16
 
 #define DST_MAXBURST		16
@@ -54,7 +53,7 @@
 #define SHA_REG_DIN(dd, x)	((dd)->pdata->din_ofs + ((x) * 0x04))
 
 #define SHA_REG_DIGCNT(dd)	((dd)->pdata->digcnt_ofs)
-#define SHA_REG_ODIGEST(x)	(0x00 + ((x) * 0x04))
+#define SHA_REG_ODIGEST(dd, x)	((dd)->pdata->odigest_ofs + (x * 0x04))
 
 #define SHA_REG_CTRL		0x18
 #define SHA_REG_CTRL_LENGTH	(0xFFFFFFFF << 5)
@@ -75,18 +74,21 @@
 #define SHA_REG_SYSSTATUS(dd)	((dd)->pdata->sysstatus_ofs)
 #define SHA_REG_SYSSTATUS_RESETDONE	(1 << 0)
 
-#define SHA_REG_MODE		0x44
+#define SHA_REG_MODE(dd)	((dd)->pdata->mode_ofs)
 #define SHA_REG_MODE_HMAC_OUTER_HASH	(1 << 7)
 #define SHA_REG_MODE_HMAC_KEY_PROC	(1 << 5)
 #define SHA_REG_MODE_CLOSE_HASH		(1 << 4)
 #define SHA_REG_MODE_ALGO_CONSTANT	(1 << 3)
-#define SHA_REG_MODE_ALGO_MASK		(3 << 1)
-#define SHA_REG_MODE_ALGO_MD5_128	(0 << 1)
-#define SHA_REG_MODE_ALGO_SHA1_160	(1 << 1)
-#define SHA_REG_MODE_ALGO_SHA2_224	(2 << 1)
-#define SHA_REG_MODE_ALGO_SHA2_256	(3 << 1)
-#define SHA_REG_LENGTH		0x48
+#define SHA_REG_MODE_ALGO_MASK		(7 << 0)
+#define SHA_REG_MODE_ALGO_MD5_128	(0 << 1)
+#define SHA_REG_MODE_ALGO_SHA1_160	(1 << 1)
+#define SHA_REG_MODE_ALGO_SHA2_224	(2 << 1)
+#define SHA_REG_MODE_ALGO_SHA2_256	(3 << 1)
+#define SHA_REG_MODE_ALGO_SHA2_384	(1 << 0)
+#define SHA_REG_MODE_ALGO_SHA2_512	(3 << 0)
+
+#define SHA_REG_LENGTH(dd)	((dd)->pdata->length_ofs)
 
 #define SHA_REG_IRQSTATUS	0x118
 #define SHA_REG_IRQSTATUS_CTX_RDY	(1 << 3)
@@ -117,18 +119,16 @@
 #define FLAGS_SG		17
 
 #define FLAGS_MODE_SHIFT	18
-#define FLAGS_MODE_MASK		(SHA_REG_MODE_ALGO_MASK	\
-					<< (FLAGS_MODE_SHIFT - 1))
-#define FLAGS_MODE_MD5		(SHA_REG_MODE_ALGO_MD5_128 \
-					<< (FLAGS_MODE_SHIFT - 1))
-#define FLAGS_MODE_SHA1		(SHA_REG_MODE_ALGO_SHA1_160 \
-					<< (FLAGS_MODE_SHIFT - 1))
-#define FLAGS_MODE_SHA224	(SHA_REG_MODE_ALGO_SHA2_224 \
-					<< (FLAGS_MODE_SHIFT - 1))
-#define FLAGS_MODE_SHA256	(SHA_REG_MODE_ALGO_SHA2_256 \
-					<< (FLAGS_MODE_SHIFT - 1))
-#define FLAGS_HMAC		20
-#define FLAGS_ERROR		21
+#define FLAGS_MODE_MASK		(SHA_REG_MODE_ALGO_MASK << FLAGS_MODE_SHIFT)
+#define FLAGS_MODE_MD5		(SHA_REG_MODE_ALGO_MD5_128 << FLAGS_MODE_SHIFT)
+#define FLAGS_MODE_SHA1		(SHA_REG_MODE_ALGO_SHA1_160 << FLAGS_MODE_SHIFT)
+#define FLAGS_MODE_SHA224	(SHA_REG_MODE_ALGO_SHA2_224 << FLAGS_MODE_SHIFT)
+#define FLAGS_MODE_SHA256	(SHA_REG_MODE_ALGO_SHA2_256 << FLAGS_MODE_SHIFT)
+#define FLAGS_MODE_SHA384	(SHA_REG_MODE_ALGO_SHA2_384 << FLAGS_MODE_SHIFT)
+#define FLAGS_MODE_SHA512	(SHA_REG_MODE_ALGO_SHA2_512 << FLAGS_MODE_SHIFT)
+
+#define FLAGS_HMAC		21
+#define FLAGS_ERROR		22
 
 #define OP_UPDATE	1
 #define OP_FINAL	2
@@ -145,7 +145,7 @@ struct omap_sham_reqctx {
 	unsigned long	flags;
 	unsigned long	op;
 
-	u8		digest[SHA256_DIGEST_SIZE] OMAP_ALIGNED;
+	u8		digest[SHA512_DIGEST_SIZE] OMAP_ALIGNED;
 	size_t		digcnt;
 	size_t		bufcnt;
 	size_t		buflen;
@@ -162,8 +162,8 @@ struct omap_sham_reqctx {
 
 struct omap_sham_hmac_ctx {
 	struct crypto_shash	*shash;
-	u8			ipad[SHA1_MD5_BLOCK_SIZE] OMAP_ALIGNED;
-	u8			opad[SHA1_MD5_BLOCK_SIZE] OMAP_ALIGNED;
+	u8			ipad[SHA512_BLOCK_SIZE] OMAP_ALIGNED;
+	u8			opad[SHA512_BLOCK_SIZE] OMAP_ALIGNED;
 };
 
 struct omap_sham_ctx {
@@ -205,6 +205,8 @@ struct omap_sham_pdata {
 	u32		rev_ofs;
 	u32		mask_ofs;
 	u32		sysstatus_ofs;
+	u32		mode_ofs;
+	u32		length_ofs;
 
 	u32		major_mask;
 	u32		major_shift;
@@ -306,9 +308,9 @@ static void omap_sham_copy_hash_omap4(struct ahash_request *req, int out)
 		for (i = 0; i < dd->pdata->digest_size / sizeof(u32); i++) {
 			if (out)
 				opad[i] = omap_sham_read(dd,
-						SHA_REG_ODIGEST(i));
+						SHA_REG_ODIGEST(dd, i));
 			else
-				omap_sham_write(dd, SHA_REG_ODIGEST(i),
+				omap_sham_write(dd, SHA_REG_ODIGEST(dd, i),
 						opad[i]);
 		}
 	}
@@ -342,6 +344,12 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)
 	case FLAGS_MODE_SHA256:
 		d = SHA256_DIGEST_SIZE / sizeof(u32);
 		break;
+	case FLAGS_MODE_SHA384:
+		d = SHA384_DIGEST_SIZE / sizeof(u32);
+		break;
+	case FLAGS_MODE_SHA512:
+		d = SHA512_DIGEST_SIZE / sizeof(u32);
+		break;
 	default:
 		d = 0;
 	}
@@ -404,6 +412,30 @@ static int omap_sham_poll_irq_omap2(struct omap_sham_dev *dd)
 	return omap_sham_wait(dd, SHA_REG_CTRL, SHA_REG_CTRL_INPUT_READY);
 }
 
+static int get_block_size(struct omap_sham_reqctx *ctx)
+{
+	int d;
+
+	switch (ctx->flags & FLAGS_MODE_MASK) {
+	case FLAGS_MODE_MD5:
+	case FLAGS_MODE_SHA1:
+		d = SHA1_BLOCK_SIZE;
+		break;
+	case FLAGS_MODE_SHA224:
+	case FLAGS_MODE_SHA256:
+		d = SHA256_BLOCK_SIZE;
+		break;
+	case FLAGS_MODE_SHA384:
+	case FLAGS_MODE_SHA512:
+		d = SHA512_BLOCK_SIZE;
+		break;
+	default:
+		d = 0;
+	}
+
+	return d;
+}
+
 static void omap_sham_write_n(struct omap_sham_dev *dd, u32 offset,
 				    u32 *value, int count)
 {
@@ -422,20 +454,24 @@ static void omap_sham_write_ctrl_omap4(struct omap_sham_dev *dd, size_t length,
 	 * CLOSE_HASH only for the last one. Note that flags mode bits
 	 * correspond to algorithm encoding in mode register.
	 */
-	val = (ctx->flags & FLAGS_MODE_MASK) >> (FLAGS_MODE_SHIFT - 1);
+	val = (ctx->flags & FLAGS_MODE_MASK) >> (FLAGS_MODE_SHIFT);
 	if (!ctx->digcnt) {
 		struct crypto_ahash *tfm = crypto_ahash_reqtfm(dd->req);
 		struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
 		struct omap_sham_hmac_ctx *bctx = tctx->base;
+		int bs, nr_dr;
 
 		val |= SHA_REG_MODE_ALGO_CONSTANT;
 
 		if (ctx->flags & BIT(FLAGS_HMAC)) {
+			bs = get_block_size(ctx);
+			nr_dr = bs / (2 * sizeof(u32));
 			val |= SHA_REG_MODE_HMAC_KEY_PROC;
-			omap_sham_write_n(dd, SHA_REG_ODIGEST(0),
-					  (u32 *)bctx->ipad,
-					  SHA1_BLOCK_SIZE / sizeof(u32));
-			ctx->digcnt += SHA1_BLOCK_SIZE;
+			omap_sham_write_n(dd, SHA_REG_ODIGEST(dd, 0),
+					  (u32 *)bctx->ipad, nr_dr);
+			omap_sham_write_n(dd, SHA_REG_IDIGEST(dd, 0),
+					  (u32 *)bctx->ipad + nr_dr, nr_dr);
+			ctx->digcnt += bs;
 		}
 	}
@@ -451,7 +487,7 @@ static void omap_sham_write_ctrl_omap4(struct omap_sham_dev *dd, size_t length,
 			SHA_REG_MODE_HMAC_KEY_PROC;
 
 	dev_dbg(dd->dev, "ctrl: %08x, flags: %08lx\n", val, ctx->flags);
-	omap_sham_write_mask(dd, SHA_REG_MODE, val, mask);
+	omap_sham_write_mask(dd, SHA_REG_MODE(dd), val, mask);
 	omap_sham_write(dd, SHA_REG_IRQENA, SHA_REG_IRQENA_OUTPUT_RDY);
 	omap_sham_write_mask(dd, SHA_REG_MASK(dd),
 			     SHA_REG_MASK_IT_EN |
@@ -461,7 +497,7 @@
 
 static void omap_sham_trigger_omap4(struct omap_sham_dev *dd, size_t length)
 {
-	omap_sham_write(dd, SHA_REG_LENGTH, length);
+	omap_sham_write(dd, SHA_REG_LENGTH(dd), length);
 }
 
 static int omap_sham_poll_irq_omap4(struct omap_sham_dev *dd)
@@ -666,14 +702,14 @@ static int omap_sham_update_dma_slow(struct omap_sham_dev *dd)
 /* Start address alignment */
 #define SG_AA(sg)	(IS_ALIGNED(sg->offset, sizeof(u32)))
 /* SHA1 block size alignment */
-#define SG_SA(sg)	(IS_ALIGNED(sg->length, SHA1_MD5_BLOCK_SIZE))
+#define SG_SA(sg, bs)	(IS_ALIGNED(sg->length, bs))
 
 static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
 {
 	struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
 	unsigned int length, final, tail;
 	struct scatterlist *sg;
-	int ret;
+	int ret, bs;
 
 	if (!ctx->total)
 		return 0;
@@ -694,23 +730,24 @@ static int omap_sham_update_dma_start(struct omap_sham_dev *dd)
 			ctx->digcnt, ctx->bufcnt, ctx->total);
 
 	sg = ctx->sg;
+	bs = get_block_size(ctx);
 
 	if (!SG_AA(sg))
 		return omap_sham_update_dma_slow(dd);
 
-	if (!sg_is_last(sg) && !SG_SA(sg))
-		/* size is not SHA1_BLOCK_SIZE aligned */
+	if (!sg_is_last(sg) && !SG_SA(sg, bs))
+		/* size is not BLOCK_SIZE aligned */
 		return omap_sham_update_dma_slow(dd);
 
 	length = min(ctx->total, sg->length);
 
 	if (sg_is_last(sg)) {
 		if (!(ctx->flags & BIT(FLAGS_FINUP))) {
-			/* not last sg must be SHA1_MD5_BLOCK_SIZE aligned */
-			tail = length & (SHA1_MD5_BLOCK_SIZE - 1);
+			/* not last sg must be BLOCK_SIZE aligned */
+			tail = length & (bs - 1);
 			/* without finup() we need one block to close hash */
 			if (!tail)
-				tail = SHA1_MD5_BLOCK_SIZE;
+				tail = bs;
 
 			length -= tail;
 		}
 	}
@@ -773,6 +810,7 @@ static int omap_sham_init(struct ahash_request *req)
 	struct omap_sham_ctx *tctx = crypto_ahash_ctx(tfm);
 	struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
 	struct omap_sham_dev *dd = NULL, *tmp;
+	int bs = 0;
 
 	spin_lock_bh(&sham.lock);
 	if (!tctx->dd) {
@@ -796,15 +834,27 @@ static int omap_sham_init(struct ahash_request *req)
 	switch (crypto_ahash_digestsize(tfm)) {
 	case MD5_DIGEST_SIZE:
 		ctx->flags |= FLAGS_MODE_MD5;
+		bs = SHA1_BLOCK_SIZE;
 		break;
 	case SHA1_DIGEST_SIZE:
 		ctx->flags |= FLAGS_MODE_SHA1;
+		bs = SHA1_BLOCK_SIZE;
 		break;
 	case SHA224_DIGEST_SIZE:
 		ctx->flags |= FLAGS_MODE_SHA224;
+		bs = SHA224_BLOCK_SIZE;
 		break;
 	case SHA256_DIGEST_SIZE:
 		ctx->flags |= FLAGS_MODE_SHA256;
+		bs = SHA256_BLOCK_SIZE;
+		break;
+	case SHA384_DIGEST_SIZE:
+		ctx->flags |= FLAGS_MODE_SHA384;
+		bs = SHA384_BLOCK_SIZE;
+		break;
+	case SHA512_DIGEST_SIZE:
+		ctx->flags |= FLAGS_MODE_SHA512;
+		bs = SHA512_BLOCK_SIZE;
 		break;
 	}
@@ -816,8 +866,8 @@ static int omap_sham_init(struct ahash_request *req)
 		if (!test_bit(FLAGS_AUTO_XOR, &dd->flags)) {
 			struct omap_sham_hmac_ctx *bctx = tctx->base;
 
-			memcpy(ctx->buffer, bctx->ipad, SHA1_MD5_BLOCK_SIZE);
-			ctx->bufcnt = SHA1_MD5_BLOCK_SIZE;
+			memcpy(ctx->buffer, bctx->ipad, bs);
+			ctx->bufcnt = bs;
 		}
 
 		ctx->flags |= BIT(FLAGS_HMAC);
@@ -1006,6 +1056,7 @@ static int omap_sham_enqueue(struct ahash_request *req, unsigned int op)
 static int omap_sham_update(struct ahash_request *req)
 {
 	struct omap_sham_reqctx *ctx = ahash_request_ctx(req);
+	int bs = get_block_size(ctx);
 
 	if (!req->nbytes)
 		return 0;
@@ -1023,7 +1074,7 @@ static int omap_sham_update(struct ahash_request *req)
 			 */
 			omap_sham_append_sg(ctx);
 			return 0;
-		} else if (ctx->bufcnt + ctx->total <= SHA1_MD5_BLOCK_SIZE) {
+		} else if (ctx->bufcnt + ctx->total <= bs) {
 			/*
 			 * faster to use CPU for short transfers
 			 */
@@ -1214,6 +1265,16 @@ static int omap_sham_cra_md5_init(struct crypto_tfm *tfm)
 	return omap_sham_cra_init_alg(tfm, "md5");
 }
 
+static int omap_sham_cra_sha384_init(struct crypto_tfm *tfm)
+{
+	return omap_sham_cra_init_alg(tfm, "sha384");
+}
+
+static int omap_sham_cra_sha512_init(struct crypto_tfm *tfm)
+{
+	return omap_sham_cra_init_alg(tfm, "sha512");
+}
+
 static void omap_sham_cra_exit(struct crypto_tfm *tfm)
 {
 	struct omap_sham_ctx *tctx = crypto_tfm_ctx(tfm);
@@ -1422,6 +1483,101 @@ static struct ahash_alg algs_sha224_sha256[] = {
 },
 };
 
+static struct ahash_alg algs_sha384_sha512[] = {
+{
+	.init		= omap_sham_init,
+	.update		= omap_sham_update,
+	.final		= omap_sham_final,
+	.finup		= omap_sham_finup,
+	.digest		= omap_sham_digest,
+	.halg.digestsize	= SHA384_DIGEST_SIZE,
+	.halg.base	= {
+		.cra_name		= "sha384",
+		.cra_driver_name	= "omap-sha384",
+		.cra_priority		= 100,
+		.cra_flags		= CRYPTO_ALG_TYPE_AHASH |
+						CRYPTO_ALG_ASYNC |
+						CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize		= SHA384_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct omap_sham_ctx),
+		.cra_alignmask		= 0,
+		.cra_module		= THIS_MODULE,
+		.cra_init		= omap_sham_cra_init,
+		.cra_exit		= omap_sham_cra_exit,
+	}
+},
+{
+	.init		= omap_sham_init,
+	.update		= omap_sham_update,
+	.final		= omap_sham_final,
+	.finup		= omap_sham_finup,
+	.digest		= omap_sham_digest,
+	.halg.digestsize	= SHA512_DIGEST_SIZE,
+	.halg.base	= {
+		.cra_name		= "sha512",
+		.cra_driver_name	= "omap-sha512",
+		.cra_priority		= 100,
+		.cra_flags		= CRYPTO_ALG_TYPE_AHASH |
+						CRYPTO_ALG_ASYNC |
+						CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize		= SHA512_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct omap_sham_ctx),
+		.cra_alignmask		= 0,
+		.cra_module		= THIS_MODULE,
+		.cra_init		= omap_sham_cra_init,
+		.cra_exit		= omap_sham_cra_exit,
+	}
+},
+{
+	.init		= omap_sham_init,
+	.update		= omap_sham_update,
+	.final		= omap_sham_final,
+	.finup		= omap_sham_finup,
+	.digest		= omap_sham_digest,
+	.setkey		= omap_sham_setkey,
+	.halg.digestsize	= SHA384_DIGEST_SIZE,
+	.halg.base	= {
+		.cra_name		= "hmac(sha384)",
+		.cra_driver_name	= "omap-hmac-sha384",
+		.cra_priority		= 100,
+		.cra_flags		= CRYPTO_ALG_TYPE_AHASH |
+						CRYPTO_ALG_ASYNC |
+						CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize		= SHA384_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct omap_sham_ctx) +
+					sizeof(struct omap_sham_hmac_ctx),
+		.cra_alignmask		= OMAP_ALIGN_MASK,
+		.cra_module		= THIS_MODULE,
+		.cra_init		= omap_sham_cra_sha384_init,
+		.cra_exit		= omap_sham_cra_exit,
+	}
+},
+{
+	.init		= omap_sham_init,
+	.update		= omap_sham_update,
+	.final		= omap_sham_final,
+	.finup		= omap_sham_finup,
+	.digest		= omap_sham_digest,
+	.setkey		= omap_sham_setkey,
+	.halg.digestsize	= SHA512_DIGEST_SIZE,
+	.halg.base	= {
+		.cra_name		= "hmac(sha512)",
+		.cra_driver_name	= "omap-hmac-sha512",
+		.cra_priority		= 100,
+		.cra_flags		= CRYPTO_ALG_TYPE_AHASH |
+						CRYPTO_ALG_ASYNC |
+						CRYPTO_ALG_NEED_FALLBACK,
+		.cra_blocksize		= SHA512_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct omap_sham_ctx) +
+					sizeof(struct omap_sham_hmac_ctx),
+		.cra_alignmask		= OMAP_ALIGN_MASK,
+		.cra_module		= THIS_MODULE,
+		.cra_init		= omap_sham_cra_sha512_init,
+		.cra_exit		= omap_sham_cra_exit,
+	}
+},
+};
+
 static void omap_sham_done_task(unsigned long data)
 {
 	struct omap_sham_dev *dd = (struct omap_sham_dev *)data;
@@ -1548,11 +1704,14 @@ static const struct omap_sham_pdata omap_sham_pdata_omap4 = {
 	.poll_irq	= omap_sham_poll_irq_omap4,
 	.intr_hdlr	= omap_sham_irq_omap4,
 	.idigest_ofs	= 0x020,
+	.odigest_ofs	= 0x0,
 	.din_ofs	= 0x080,
 	.digcnt_ofs	= 0x040,
 	.rev_ofs	= 0x100,
 	.mask_ofs	= 0x110,
 	.sysstatus_ofs	= 0x114,
+	.mode_ofs	= 0x44,
+	.length_ofs	= 0x48,
 	.major_mask	= 0x0700,
 	.major_shift	= 8,
 	.minor_mask	= 0x003f,
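
For reference, a minimal smoke-test sketch of how the new "sha512" ahash
can be exercised through the generic kernel crypto API. This is
hypothetical test code, not part of this patch: the function names
sha512_smoke_test()/sha_test_done() and the completion plumbing are
illustrative, modeled on the pattern tcrypt uses for async hashes.

/* Hypothetical smoke test; not part of this patch. */
#include <crypto/hash.h>
#include <crypto/sha.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct sha_test_result {
	struct completion completion;
	int err;
};

static void sha_test_done(struct crypto_async_request *req, int err)
{
	struct sha_test_result *res = req->data;

	if (err == -EINPROGRESS)	/* backlog notification, keep waiting */
		return;
	res->err = err;
	complete(&res->completion);
}

static int sha512_smoke_test(const u8 *data, unsigned int len,
			     u8 out[SHA512_DIGEST_SIZE])
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	struct sha_test_result res;
	int ret;

	/* Picks the highest-priority "sha512" provider; with this patch
	 * loaded that can be "omap-sha512". */
	tfm = crypto_alloc_ahash("sha512", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		ret = -ENOMEM;
		goto out_tfm;
	}

	init_completion(&res.completion);
	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   sha_test_done, &res);
	ahash_request_set_crypt(req, &sg, out, len);

	ret = crypto_ahash_digest(req);
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		/* async path: completes from the driver's DMA/IRQ path */
		wait_for_completion(&res.completion);
		ret = res.err;
	}

	ahash_request_free(req);
out_tfm:
	crypto_free_ahash(tfm);
	return ret;
}

Since the driver registers with CRYPTO_ALG_ASYNC, crypto_ahash_digest()
may return -EINPROGRESS rather than completing inline, and the input
buffer should be DMA-able (kmalloc'd rather than on-stack) so the
hardware DMA path can be taken.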