From patchwork Fri May 27 11:11:24 2016
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linaro.org>
X-Patchwork-Id: 9138171
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Baolin Wang <baolin.wang@linaro.org>
To: axboe@kernel.dk, agk@redhat.com, snitzer@redhat.com,
	dm-devel@redhat.com, herbert@gondor.apana.org.au, davem@davemloft.net
Cc: ebiggers3@gmail.com, js1304@gmail.com, tadeusz.struk@intel.com,
	smueller@chronox.de, standby24x7@gmail.com, shli@kernel.org,
	dan.j.williams@intel.com, martin.petersen@oracle.com,
	sagig@mellanox.com, kent.overstreet@gmail.com, keith.busch@intel.com,
	tj@kernel.org, ming.lei@canonical.com, broonie@kernel.org,
	arnd@arndb.de, linux-crypto@vger.kernel.org,
	linux-block@vger.kernel.org, linux-raid@vger.kernel.org,
	linux-kernel@vger.kernel.org, baolin.wang@linaro.org
Subject: [RFC v2 3/3] md: dm-crypt: Introduce the bulk mode method when sending request
Date: Fri, 27 May 2016 19:11:24 +0800
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-crypto@vger.kernel.org

In the current dm-crypt code it is inefficient to map each segment of a
bio (always one sector) with only one scatterlist at a time for a
hardware crypto engine. In particular, some encryption modes (such as
ECB or XTS) working with a crypto engine need just one initial IV, or a
null IV, instead of a different IV for each sector. In that situation
we can map the whole bio with multiple scatterlists and send all of
them to the crypto engine in one request to encrypt or decrypt, which
improves the hardware engine's efficiency.

With this optimization, on my test setup (BeagleBone Black board) using
64KB I/Os on an eMMC storage device, I saw about 60% improvement in
throughput for encrypted writes and about 100% improvement for
encrypted reads. This approach is not suitable for modes that need a
different IV for each sector.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
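Note (editor's illustration, not part of the patch): the bulk path boils
down to mapping the whole bio into one scatterlist table and issuing a
single skcipher request for it, instead of one request per 512-byte
sector. A minimal sketch of that idea, assuming the
skcipher_is_bulk_mode() helper introduced earlier in this series, a
preallocated sg table large enough for the bio, and in-place crypto
(bio_out == bio_in, as on the dm-crypt read path):

#include <linux/blkdev.h>
#include <linux/scatterlist.h>
#include <crypto/skcipher.h>

static int bulk_crypt_bio(struct crypto_skcipher *tfm,
			  struct skcipher_request *req,
			  struct bio *bio, struct sg_table *sgt, u8 *iv)
{
	int nents;

	/* Only modes that can reuse one IV for the whole bio qualify. */
	if (!skcipher_is_bulk_mode(tfm))
		return -EINVAL;	/* caller falls back to the per-sector path */

	/* Map every segment of the bio into the preallocated sg table. */
	nents = blk_bio_map_sg(bdev_get_queue(bio->bi_bdev), bio, sgt->sgl);
	if (nents <= 0 || nents > sgt->orig_nents)
		return -EINVAL;

	/* One request, one IV, the bio's whole payload. */
	skcipher_request_set_crypt(req, sgt->sgl, sgt->sgl,
				   bio->bi_iter.bi_size, iv);

	return bio_data_dir(bio) == WRITE ? crypto_skcipher_encrypt(req)
					  : crypto_skcipher_decrypt(req);
}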
 drivers/md/dm-crypt.c | 145 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 144 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4f3cb35..2101f35 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,6 +33,7 @@
 #include <linux/device-mapper.h>
 
 #define DM_MSG_PREFIX "crypt"
+#define DM_MAX_SG_LIST 1024
 
 /*
  * context holding the current state of a multi-part conversion
@@ -142,6 +143,9 @@ struct crypt_config {
 	char *cipher;
 	char *cipher_string;
 
+	struct sg_table sgt_in;
+	struct sg_table sgt_out;
+
 	struct crypt_iv_operations *iv_gen_ops;
 	union {
 		struct iv_essiv_private essiv;
@@ -837,6 +841,129 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
 		crypto_skcipher_alignmask(any_tfm(cc)) + 1);
 }
 
+static void crypt_init_sg_table(struct scatterlist *sgl)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		if (i < DM_MAX_SG_LIST - 1 && sg_is_last(sg))
+			sg_unmark_end(sg);
+		else if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		memset(sg, 0, sizeof(struct scatterlist));
+
+		if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+}
+
+static void crypt_reinit_sg_table(struct crypt_config *cc)
+{
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return;
+
+	crypt_init_sg_table(cc->sgt_in.sgl);
+	crypt_init_sg_table(cc->sgt_out.sgl);
+}
+
+static int crypt_alloc_sg_table(struct crypt_config *cc)
+{
+	unsigned int bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+	int ret = 0;
+
+	if (!bulk_mode)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_in, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_out, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_free_table;
+
+	return 0;
+
+out_free_table:
+	sg_free_table(&cc->sgt_in);
+out_skip_alloc:
+	cc->sgt_in.orig_nents = 0;
+	cc->sgt_out.orig_nents = 0;
+
+	return ret;
+}
+
+static int crypt_convert_bulk_block(struct crypt_config *cc,
+				    struct convert_context *ctx,
+				    struct skcipher_request *req)
+{
+	struct bio *bio_in = ctx->bio_in;
+	struct bio *bio_out = ctx->bio_out;
+	unsigned int total_bytes = bio_in->bi_iter.bi_size;
+	unsigned int total_sg_in, total_sg_out;
+	struct scatterlist *sg_in, *sg_out;
+	struct dm_crypt_request *dmreq;
+	u8 *iv;
+	int r;
+
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return -EINVAL;
+
+	dmreq = dmreq_of_req(cc, req);
+	iv = iv_of_dmreq(cc, dmreq);
+	dmreq->iv_sector = ctx->cc_sector;
+	dmreq->ctx = ctx;
+
+	total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
+				     bio_in, cc->sgt_in.sgl);
+	if ((total_sg_in <= 0) || (total_sg_in > DM_MAX_SG_LIST)) {
+		DMERR("%s in sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_in, cc->sgt_in.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_in.bi_size -= total_bytes;
+	sg_in = cc->sgt_in.sgl;
+	sg_out = cc->sgt_in.sgl;
+
+	if (bio_data_dir(bio_in) == READ)
+		goto set_crypt;
+
+	total_sg_out = blk_bio_map_sg(bdev_get_queue(bio_out->bi_bdev),
+				      bio_out, cc->sgt_out.sgl);
+	if ((total_sg_out <= 0) || (total_sg_out > DM_MAX_SG_LIST)) {
+		DMERR("%s out sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_out, cc->sgt_out.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_out.bi_size -= total_bytes;
+	sg_out = cc->sgt_out.sgl;
+
+set_crypt:
+	if (cc->iv_gen_ops) {
+		r = cc->iv_gen_ops->generator(cc, iv, dmreq);
+		if (r < 0)
+			return r;
+	}
+
+	skcipher_request_set_crypt(req, sg_in, sg_out, total_bytes, iv);
+
+	if (bio_data_dir(ctx->bio_in) == WRITE)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
+		r = cc->iv_gen_ops->post(cc, iv, dmreq);
+
+	return r;
+}
+
 static int crypt_convert_block(struct crypt_config *cc,
 			       struct convert_context *ctx,
 			       struct skcipher_request *req)
@@ -920,6 +1047,7 @@ static void crypt_free_req(struct crypt_config *cc,
 static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
+	unsigned int bulk_mode;
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
@@ -930,7 +1058,14 @@ static int crypt_convert(struct crypt_config *cc,
 
 		atomic_inc(&ctx->cc_pending);
 
-		r = crypt_convert_block(cc, ctx, ctx->req);
+		bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+		if (!bulk_mode) {
+			r = crypt_convert_block(cc, ctx, ctx->req);
+		} else {
+			r = crypt_convert_bulk_block(cc, ctx, ctx->req);
+			if (r == -EINVAL)
+				r = crypt_convert_block(cc, ctx, ctx->req);
+		}
 
 		switch (r) {
 		/*
@@ -1081,6 +1216,7 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 	if (io->ctx.req)
 		crypt_free_req(cc, io->ctx.req, base_bio);
 
+	crypt_reinit_sg_table(cc);
 	base_bio->bi_error = error;
 	bio_endio(base_bio);
 }
@@ -1563,6 +1699,9 @@ static void crypt_dtr(struct dm_target *ti)
 	kzfree(cc->cipher);
 	kzfree(cc->cipher_string);
 
+	sg_free_table(&cc->sgt_in);
+	sg_free_table(&cc->sgt_out);
+
 	/* Must zero key material before freeing */
 	kzfree(cc);
 }
@@ -1718,6 +1857,10 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 		}
 	}
 
+	ret = crypt_alloc_sg_table(cc);
+	if (ret)
+		DMWARN("Allocate sg table for bulk mode failed");
+
 	ret = 0;
 bad:
 	kfree(cipher_api);
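
P.S. (editor's illustration, not part of the patch): with DM_MAX_SG_LIST
== 1024, sg_alloc_table() typically builds a chained table on configs
with scatterlist chaining, since one page holds only SG_MAX_SINGLE_ALLOC
entries (PAGE_SIZE / sizeof(struct scatterlist), typically 128 with 4K
pages on 64-bit). Entries therefore have to be walked via
sg_next()/for_each_sg() rather than indexed as a flat array, which is
why crypt_init_sg_table() above iterates with for_each_sg(). A
self-contained sketch of that alloc/walk/free pattern:

#include <linux/scatterlist.h>

static int sg_table_walk_demo(void)
{
	struct sg_table sgt;
	struct scatterlist *sg;
	int i, n = 0, ret;

	/* 1024 entries: chained across several pages internally. */
	ret = sg_alloc_table(&sgt, 1024, GFP_KERNEL);
	if (ret)
		return ret;

	/* sg_next() transparently hops across the chained chunks, */
	/* so indexing sgt.sgl[i] directly would be wrong here.     */
	for_each_sg(sgt.sgl, sg, sgt.orig_nents, i)
		n++;	/* visits all 1024 entries */

	sg_free_table(&sgt);
	return 0;
}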