From patchwork Mon Oct 10 08:43:39 2016
Subject: [Qemu-devel] [PATCH v6 07/12] virtio-crypto: add data queue processing handler
From: Gonglei
Date: Mon, 10 Oct 2016 16:43:39 +0800
Message-ID: <1476089024-68684-8-git-send-email-arei.gonglei@huawei.com>
In-Reply-To: <1476089024-68684-1-git-send-email-arei.gonglei@huawei.com>
References: <1476089024-68684-1-git-send-email-arei.gonglei@huawei.com>
Cc: weidong.huang@huawei.com, claudio.fontana@huawei.com, mst@redhat.com,
 xin.zeng@intel.com, hanweidong@huawei.com, luonengjun@huawei.com,
 agraf@suse.de, nmorey@kalray.eu, mike.caraman@nxp.com, Gonglei,
 stefanha@redhat.com, jianjay.zhou@huawei.com, pbonzini@redhat.com,
 peter.huangpeng@huawei.com, vincent.jardin@6wind.com, wu.wubin@huawei.com,
 arei.gonglei@hotmail.com
Introduce the VirtIOCryptoReq structure to store crypto requests so
that we can support both sync and async crypto operations in the
future.

At present, we only support cipher and algorithm chaining.

Signed-off-by: Gonglei
---
 hw/virtio/virtio-crypto.c         | 338 +++++++++++++++++++++++++++++++++++++-
 include/hw/virtio/virtio-crypto.h |  11 +-
 2 files changed, 346 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 98a7d60..29eb1a1 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -306,6 +306,342 @@ static void virtio_crypto_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
     } /* end for loop */
 }
 
+static void virtio_crypto_init_request(VirtIOCrypto *vcrypto, VirtQueue *vq,
+                                       VirtIOCryptoReq *req)
+{
+    req->vcrypto = vcrypto;
+    req->vq = vq;
+    req->in = NULL;
+    req->in_iov = NULL;
+    req->in_num = 0;
+    req->in_len = 0;
+    req->flags = CRYPTODEV_BACKEND_ALG__MAX;
+    req->u.sym_op_info = NULL;
+}
+
+static void virtio_crypto_free_request(VirtIOCryptoReq *req)
+{
+    if (req) {
+        if (req->flags == CRYPTODEV_BACKEND_ALG_SYM) {
+            g_free(req->u.sym_op_info);
+        }
+        g_free(req);
+    }
+}
+
+static void
+virtio_crypto_sym_input_data_helper(VirtIODevice *vdev,
+                                    VirtIOCryptoReq *req,
+                                    uint32_t status,
+                                    CryptoDevBackendSymOpInfo *sym_op_info)
+{
+    size_t s, len;
+
+    if (status != VIRTIO_CRYPTO_OK) {
+        return;
+    }
+
+    len = sym_op_info->dst_len;
+    /* Save the cipher result */
+    s = iov_from_buf(req->in_iov, req->in_num, 0, sym_op_info->dst, len);
+    if (s != len) {
+        virtio_error(vdev, "virtio-crypto dest data incorrect");
+        return;
+    }
+
+    iov_discard_front(&req->in_iov, &req->in_num, len);
+
+    if (sym_op_info->op_type ==
+                      VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING) {
+        /* Save the digest result */
+        s = iov_from_buf(req->in_iov, req->in_num, 0,
+                         sym_op_info->digest_result,
+                         sym_op_info->digest_result_len);
+        if (s != sym_op_info->digest_result_len) {
+            virtio_error(vdev, "virtio-crypto digest result incorrect");
+        }
+    }
+}
+
+static void virtio_crypto_req_complete(VirtIOCryptoReq *req, uint32_t status)
+{
+    VirtIOCrypto *vcrypto = req->vcrypto;
+    VirtIODevice *vdev = VIRTIO_DEVICE(vcrypto);
+
+    if (req->flags == CRYPTODEV_BACKEND_ALG_SYM) {
+        virtio_crypto_sym_input_data_helper(vdev, req, status,
+                                            req->u.sym_op_info);
+    }
+    stl_he_p(&req->in->status, status);
+    virtqueue_push(req->vq, &req->elem, req->in_len);
+    virtio_notify(vdev, req->vq);
+}
+
+static VirtIOCryptoReq *
+virtio_crypto_get_request(VirtIOCrypto *s, VirtQueue *vq)
+{
+    VirtIOCryptoReq *req = virtqueue_pop(vq, sizeof(VirtIOCryptoReq));
+
+    if (req) {
+        virtio_crypto_init_request(s, vq, req);
+    }
+    return req;
+}
+
+static CryptoDevBackendSymOpInfo *
+virtio_crypto_sym_op_helper(VirtIODevice *vdev,
+                            struct virtio_crypto_cipher_para *para,
+                            uint32_t aad_len,
+                            struct iovec *iov, unsigned int out_num,
+                            uint32_t hash_result_len,
+                            uint32_t hash_start_src_offset)
+{
+    CryptoDevBackendSymOpInfo *op_info;
+    uint32_t src_len, dst_len;
+    uint32_t iv_len;
+    size_t max_len, curr_size = 0;
+    size_t s;
+
+    iv_len = virtio_ldl_p(vdev, &para->iv_len);
+    src_len = virtio_ldl_p(vdev, &para->src_data_len);
+    dst_len = virtio_ldl_p(vdev, &para->dst_data_len);
+
+    max_len = iv_len + aad_len + src_len + dst_len + hash_result_len;
+    op_info = g_malloc0(sizeof(CryptoDevBackendSymOpInfo) + max_len);
+    op_info->iv_len = iv_len;
+    op_info->src_len = src_len;
+    op_info->dst_len = dst_len;
+    op_info->aad_len = aad_len;
+    op_info->digest_result_len = hash_result_len;
+    op_info->hash_start_src_offset = hash_start_src_offset;
+    /* Handle the initialization vector */
+    if (op_info->iv_len > 0) {
+        DPRINTF("iv_len=%" PRIu32 "\n", op_info->iv_len);
+        op_info->iv = op_info->data + curr_size;
+
+        s = iov_to_buf(iov, out_num, 0, op_info->iv, op_info->iv_len);
+        if (unlikely(s != op_info->iv_len)) {
+            virtio_error(vdev, "virtio-crypto iv incorrect");
+            goto err;
+        }
+        iov_discard_front(&iov, &out_num, op_info->iv_len);
+        curr_size += op_info->iv_len;
+    }
+
+    /* Handle additional authentication data if it exists */
+    if (op_info->aad_len > 0) {
+        DPRINTF("aad_len=%" PRIu32 "\n", op_info->aad_len);
+        op_info->aad_data = op_info->data + curr_size;
+
+        s = iov_to_buf(iov, out_num, 0, op_info->aad_data, op_info->aad_len);
+        if (unlikely(s != op_info->aad_len)) {
+            virtio_error(vdev, "virtio-crypto additional auth data incorrect");
+            goto err;
+        }
+        iov_discard_front(&iov, &out_num, op_info->aad_len);
+
+        curr_size += op_info->aad_len;
+    }
+
+    /* Handle the source data */
+    if (op_info->src_len > 0) {
+        DPRINTF("src_len=%" PRIu32 "\n", op_info->src_len);
+        op_info->src = op_info->data + curr_size;
+
+        s = iov_to_buf(iov, out_num, 0, op_info->src, op_info->src_len);
+        if (unlikely(s != op_info->src_len)) {
+            virtio_error(vdev, "virtio-crypto source data incorrect");
+            goto err;
+        }
+        iov_discard_front(&iov, &out_num, op_info->src_len);
+
+        curr_size += op_info->src_len;
+    }
+
+    /* Handle the destination data */
+    op_info->dst = op_info->data + curr_size;
+    curr_size += op_info->dst_len;
+
+    DPRINTF("dst_len=%" PRIu32 "\n", op_info->dst_len);
+
+    /* Handle the hash digest result */
+    if (hash_result_len > 0) {
+        DPRINTF("hash_result_len=%" PRIu32 "\n", hash_result_len);
+        op_info->digest_result = op_info->data + curr_size;
+    }
+
+    return op_info;
+
+err:
+    g_free(op_info);
+    return NULL;
+}
+
+static int
+virtio_crypto_handle_sym_req(VirtIOCrypto *vcrypto,
+                             struct virtio_crypto_sym_data_req *req,
+                             CryptoDevBackendSymOpInfo **sym_op_info,
+                             struct iovec *iov, unsigned int out_num)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(vcrypto);
+    uint32_t op_type;
+    CryptoDevBackendSymOpInfo *op_info;
+
+    op_type = virtio_ldl_p(vdev, &req->op_type);
+
+    if (op_type == VIRTIO_CRYPTO_SYM_OP_CIPHER) {
+        op_info = virtio_crypto_sym_op_helper(vdev, &req->u.cipher.para,
+                                              0, iov, out_num, 0, 0);
+        if (!op_info) {
+            return -EFAULT;
+        }
+        op_info->op_type = op_type;
+    } else if (op_type == VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING) {
+        uint32_t aad_len, hash_result_len;
+        uint32_t hash_start_src_offset;
+
+        aad_len = virtio_ldl_p(vdev, &req->u.chain.para.aad_len);
+        hash_result_len = virtio_ldl_p(vdev,
+                              &req->u.chain.para.hash_result_len);
+        hash_start_src_offset = virtio_ldl_p(vdev,
+                                  &req->u.chain.para.hash_start_src_offset);
+        /* cipher part */
+        op_info = virtio_crypto_sym_op_helper(vdev, &req->u.chain.para.cipher,
+                                              aad_len, iov, out_num,
+                                              hash_result_len,
+                                              hash_start_src_offset);
+        if (!op_info) {
+            return -EFAULT;
+        }
+        op_info->op_type = op_type;
+    } else {
+        /* VIRTIO_CRYPTO_SYM_OP_NONE */
+        error_report("virtio-crypto unsupported cipher type");
+        return -VIRTIO_CRYPTO_NOTSUPP;
+    }
+
+    *sym_op_info = op_info;
+
+    return 0;
+}
+
+static int
+virtio_crypto_handle_request(VirtIOCryptoReq *request)
+{
+    VirtIOCrypto *vcrypto = request->vcrypto;
+    VirtIODevice *vdev = VIRTIO_DEVICE(vcrypto);
+    VirtQueueElement *elem = &request->elem;
+    int queue_index = virtio_crypto_vq2q(virtio_get_queue_index(request->vq));
+    struct virtio_crypto_op_data_req req;
+    int ret;
+    struct iovec *in_iov;
+    struct iovec *out_iov;
+    unsigned in_num;
+    unsigned out_num;
+    uint32_t opcode, status = VIRTIO_CRYPTO_ERR;
+    uint64_t session_id;
+    CryptoDevBackendSymOpInfo *sym_op_info = NULL;
+    Error *local_err = NULL;
+
+    if (elem->out_num < 1 || elem->in_num < 1) {
+        virtio_error(vdev, "virtio-crypto dataq missing headers");
+        return -1;
+    }
+
+    out_num = elem->out_num;
+    out_iov = elem->out_sg;
+    in_num = elem->in_num;
+    in_iov = elem->in_sg;
+    if (unlikely(iov_to_buf(out_iov, out_num, 0, &req, sizeof(req))
+                 != sizeof(req))) {
+        virtio_error(vdev, "virtio-crypto request outhdr too short");
+        return -1;
+    }
+    iov_discard_front(&out_iov, &out_num, sizeof(req));
+
+    if (in_iov[in_num - 1].iov_len <
+        sizeof(struct virtio_crypto_inhdr)) {
+        virtio_error(vdev, "virtio-crypto request inhdr too short");
+        return -1;
+    }
+
+    request->in_len = iov_size(in_iov, in_num);
+    request->in = (void *)in_iov[in_num - 1].iov_base
+                  + in_iov[in_num - 1].iov_len
+                  - sizeof(struct virtio_crypto_inhdr);
+    iov_discard_back(in_iov, &in_num, sizeof(struct virtio_crypto_inhdr));
+
+    /*
+     * The length of operation result, including dest_data
+     * and digest_result if they exist.
+     */
+    request->in_num = in_num;
+    request->in_iov = in_iov;
+
+    opcode = virtio_ldl_p(vdev, &req.header.opcode);
+    session_id = virtio_ldq_p(vdev, &req.header.session_id);
+
+    switch (opcode) {
+    case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
+    case VIRTIO_CRYPTO_CIPHER_DECRYPT:
+        ret = virtio_crypto_handle_sym_req(vcrypto,
+                                           &req.u.sym_req,
+                                           &sym_op_info,
+                                           out_iov, out_num);
+        /* Serious errors, need to reset virtio crypto device */
+        if (ret == -EFAULT) {
+            return -1;
+        } else if (ret == -VIRTIO_CRYPTO_NOTSUPP) {
+            virtio_crypto_req_complete(request, VIRTIO_CRYPTO_NOTSUPP);
+            virtio_crypto_free_request(request);
+        } else {
+            sym_op_info->session_id = session_id;
+
+            /* Set request's parameter */
+            request->flags = CRYPTODEV_BACKEND_ALG_SYM;
+            request->u.sym_op_info = sym_op_info;
+            ret = cryptodev_backend_sym_operation(vcrypto->cryptodev,
+                                    sym_op_info, queue_index, &local_err);
+            if (ret < 0) {
+                status = VIRTIO_CRYPTO_ERR;
+                if (local_err) {
+                    error_report_err(local_err);
+                }
+            } else { /* ret >= 0 */
+                status = VIRTIO_CRYPTO_OK;
+            }
+            virtio_crypto_req_complete(request, status);
+            virtio_crypto_free_request(request);
+        }
+        break;
+    case VIRTIO_CRYPTO_HASH:
+    case VIRTIO_CRYPTO_MAC:
+    case VIRTIO_CRYPTO_AEAD_ENCRYPT:
+    case VIRTIO_CRYPTO_AEAD_DECRYPT:
+    default:
+        error_report("virtio-crypto unsupported dataq opcode: %u",
+                     opcode);
+        virtio_crypto_req_complete(request, VIRTIO_CRYPTO_NOTSUPP);
+        virtio_crypto_free_request(request);
+    }
+
+    return 0;
+}
+
+static void virtio_crypto_handle_dataq(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOCrypto *vcrypto = VIRTIO_CRYPTO(vdev);
+    VirtIOCryptoReq *req;
+
+    while ((req = virtio_crypto_get_request(vcrypto, vq))) {
+        if (virtio_crypto_handle_request(req) < 0) {
+            virtqueue_detach_element(req->vq, &req->elem, 0);
+            virtio_crypto_free_request(req);
+            break;
+        }
+    }
+}
+
 static uint64_t virtio_crypto_get_features(VirtIODevice *vdev,
                                            uint64_t features,
                                            Error **errp)
@@ -365,7 +701,7 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
 
     vcrypto->curr_queues = 1;
     for (i = 0; i < vcrypto->max_queues; i++) {
-        virtio_add_queue(vdev, 1024, NULL);
+        virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq);
    }
 
     vcrypto->ctrl_vq = virtio_add_queue(vdev, 64, virtio_crypto_handle_ctrl);
diff --git a/include/hw/virtio/virtio-crypto.h b/include/hw/virtio/virtio-crypto.h
index a5e826c..ff3abfe 100644
--- a/include/hw/virtio/virtio-crypto.h
+++ b/include/hw/virtio/virtio-crypto.h
@@ -36,6 +36,11 @@ do { \
 #define VIRTIO_CRYPTO_GET_PARENT_CLASS(obj) \
         OBJECT_GET_PARENT_CLASS(obj, TYPE_VIRTIO_CRYPTO)
 
+/* This is the last element of the write scatter-gather list */
+struct virtio_crypto_inhdr
+{
+    uint32_t status;
+};
 typedef struct VirtIOCryptoConf {
     CryptoDevBackend *cryptodev;
 
@@ -61,8 +66,10 @@ typedef struct VirtIOCryptoReq {
     VirtQueueElement elem;
     /* flags of operation, such as type of algorithm */
     uint32_t flags;
-    /* address of in data (Device to Driver) */
-    void *idata_hva;
+    struct virtio_crypto_inhdr *in;
+    struct iovec *in_iov; /* Head address of dest iovec */
+    unsigned int in_num; /* Number of dest iovec */
+    size_t in_len;
     VirtQueue *vq;
     struct VirtIOCrypto *vcrypto;
     union {
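
For readers who want to experiment with the parsing pattern used by
virtio_crypto_handle_request() outside of QEMU, below is a minimal
standalone sketch: pull the fixed request header off the front of the
out scatter-gather list, then locate the one-word status header at the
tail of the in list. The demo_* helpers and the two header structs are
simplified stand-ins written only for this illustration; they are not
QEMU's iov_to_buf()/iov_discard_front() nor the virtio-crypto spec
structures.

/*
 * Standalone illustration only: the demo_* helpers and header structs are
 * simplified stand-ins, not QEMU or virtio-crypto spec definitions.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

struct demo_outhdr {    /* reduced stand-in for virtio_crypto_op_data_req */
    uint32_t opcode;
    uint64_t session_id;
};

struct demo_inhdr {     /* mirrors struct virtio_crypto_inhdr */
    uint32_t status;
};

/* Copy up to 'len' bytes from the front of an iovec array into 'buf'. */
static size_t demo_iov_to_buf(const struct iovec *iov, unsigned int num,
                              void *buf, size_t len)
{
    size_t done = 0;

    for (unsigned int i = 0; i < num && done < len; i++) {
        size_t n = iov[i].iov_len < len - done ? iov[i].iov_len : len - done;
        memcpy((char *)buf + done, iov[i].iov_base, n);
        done += n;
    }
    return done;
}

/* Advance the iovec array past its first 'len' bytes. */
static void demo_iov_discard_front(struct iovec **iov, unsigned int *num,
                                   size_t len)
{
    while (*num > 0 && len >= (*iov)[0].iov_len) {
        len -= (*iov)[0].iov_len;
        (*iov)++;
        (*num)--;
    }
    if (*num > 0 && len > 0) {
        (*iov)[0].iov_base = (char *)(*iov)[0].iov_base + len;
        (*iov)[0].iov_len -= len;
    }
}

int main(void)
{
    /* Fake "guest" buffers: out side = header + 16 bytes of source data,
     * in side = 16 bytes of destination space followed by the status. */
    struct demo_outhdr outhdr = { .opcode = 1, .session_id = 42 };
    uint8_t src[16] = { 0 }, dst[16] = { 0 };
    struct demo_inhdr inhdr = { .status = 0xff };

    struct iovec out_sg[] = { { &outhdr, sizeof(outhdr) }, { src, sizeof(src) } };
    struct iovec in_sg[] = { { dst, sizeof(dst) }, { &inhdr, sizeof(inhdr) } };
    struct iovec *out_iov = out_sg, *in_iov = in_sg;
    unsigned int out_num = 2, in_num = 2;

    /* 1. Read the fixed request header from the front of the out list. */
    struct demo_outhdr req;
    if (demo_iov_to_buf(out_iov, out_num, &req, sizeof(req)) != sizeof(req)) {
        fprintf(stderr, "request outhdr too short\n");
        return 1;
    }
    demo_iov_discard_front(&out_iov, &out_num, sizeof(req));

    /* 2. The status header sits at the tail of the last in-sg element. */
    struct demo_inhdr *in = (struct demo_inhdr *)
        ((char *)in_iov[in_num - 1].iov_base +
         in_iov[in_num - 1].iov_len - sizeof(*in));

    /* 3. What is left of out_iov is iv/aad/src material; everything in
     *    in_iov before the status is destination (and digest) space. */
    printf("opcode=%" PRIu32 " session_id=%" PRIu64 " out bytes left=%zu\n",
           req.opcode, req.session_id, out_iov[0].iov_len);

    in->status = 0; /* the real device would store VIRTIO_CRYPTO_OK here */
    return 0;
}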