From patchwork Mon Jan 4 03:31:41 2021
From: Parav Pandit
Subject: [PATCH linux-next v2 7/7] vdpa_sim_net: Add support for user supported devices
Date: Mon, 4 Jan 2021 05:31:41 +0200
Message-ID: <20210104033141.105876-8-parav@nvidia.com>
In-Reply-To: <20210104033141.105876-1-parav@nvidia.com>
References: <20201112064005.349268-1-parav@nvidia.com>
 <20210104033141.105876-1-parav@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

Enable a user to create vdpasim net simulated devices.

Show the vdpa management device that supports creating and deleting
vdpa devices.

$ vdpa mgmtdev show
vdpasim_net:
  supported_classes net

$ vdpa mgmtdev show -jp
{
    "show": {
        "vdpasim_net": {
            "supported_classes": {
                "net"
            }
        }
    }
}

Create a vdpa device of type networking named "foo2" from the
management device vdpasim_net:

$ vdpa dev add mgmtdev vdpasim_net name foo2

Show the newly created vdpa device by its name:

$ vdpa dev show foo2
foo2: type network mgmtdev vdpasim_net vendor_id 0 max_vqs 2 max_vq_size 256

$ vdpa dev show foo2 -jp
{
    "dev": {
        "foo2": {
            "type": "network",
            "mgmtdev": "vdpasim_net",
            "vendor_id": 0,
            "max_vqs": 2,
            "max_vq_size": 256
        }
    }
}

Delete the vdpa device once it is no longer in use:

$ vdpa dev del foo2

Signed-off-by: Parav Pandit
Reviewed-by: Eli Cohen
Acked-by: Jason Wang
---
Changelog:
v1->v2:
 - rebased
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c     |  3 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.h     |  2 +
 drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 92 ++++++++++++++++++++++++++++
 3 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index db1636a99ba4..d5942842432d 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -235,7 +235,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
 		ops = &vdpasim_config_ops;
 
 	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
-				    dev_attr->nvqs, NULL);
+				    dev_attr->nvqs, dev_attr->name);
 	if (!vdpasim)
 		goto err_alloc;
 
@@ -249,6 +249,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
 	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
 		goto err_iommu;
 	set_dma_ops(dev, &vdpasim_dma_ops);
+	vdpasim->vdpa.mdev = dev_attr->mgmt_dev;
 
 	vdpasim->config = kzalloc(dev_attr->config_size, GFP_KERNEL);
 	if (!vdpasim->config)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h
index b02142293d5b..6d75444f9948 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.h
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h
@@ -33,6 +33,8 @@ struct vdpasim_virtqueue {
 };
 
 struct vdpasim_dev_attr {
+	struct vdpa_mgmt_dev *mgmt_dev;
+	const char *name;
 	u64 supported_features;
 	size_t config_size;
 	size_t buffer_size;
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
index 34155831538c..b795e02bdad0 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
@@ -162,6 +162,94 @@ static void vdpasim_net_default_dev_unregister(void)
 	vdpa_unregister_device(vdpa);
 }
 
+static void vdpasim_net_mgmtdev_release(struct device *dev)
+{
+}
+
+static struct device vdpasim_net_mgmtdev = {
+	.init_name = "vdpasim_net",
+	.release = vdpasim_net_mgmtdev_release,
+};
+
+static int vdpasim_net_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
+{
+	struct vdpasim_dev_attr dev_attr = {};
+	struct vdpasim *simdev;
+	int ret;
+
+	dev_attr.mgmt_dev = mdev;
+	dev_attr.name = name;
+	dev_attr.id = VIRTIO_ID_NET;
+	dev_attr.supported_features = VDPASIM_NET_FEATURES;
+	dev_attr.nvqs = VDPASIM_NET_VQ_NUM;
+	dev_attr.config_size = sizeof(struct virtio_net_config);
+	dev_attr.get_config = vdpasim_net_get_config;
+	dev_attr.work_fn = vdpasim_net_work;
+	dev_attr.buffer_size = PAGE_SIZE;
+
+	simdev = vdpasim_create(&dev_attr);
+	if (IS_ERR(simdev))
+		return PTR_ERR(simdev);
+
+	ret = _vdpa_register_device(&simdev->vdpa);
+	if (ret)
+		goto reg_err;
+
+	return 0;
+
+reg_err:
+	put_device(&simdev->vdpa.dev);
+	return ret;
+}
+
+static void vdpasim_net_dev_del(struct vdpa_mgmt_dev *mdev,
+				struct vdpa_device *dev)
+{
+	struct vdpasim *simdev = container_of(dev, struct vdpasim, vdpa);
+
+	_vdpa_unregister_device(&simdev->vdpa);
+}
+
+static const struct vdpa_mgmtdev_ops vdpasim_net_mgmtdev_ops = {
+	.dev_add = vdpasim_net_dev_add,
+	.dev_del = vdpasim_net_dev_del
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct vdpa_mgmt_dev mgmt_dev = {
+	.device = &vdpasim_net_mgmtdev,
+	.id_table = id_table,
+	.ops = &vdpasim_net_mgmtdev_ops,
+};
+
+static int vdpasim_net_mgmtdev_init(void)
+{
+	int ret;
+
+	ret = device_register(&vdpasim_net_mgmtdev);
+	if (ret)
+		return ret;
+
+	ret = vdpa_mgmtdev_register(&mgmt_dev);
+	if (ret)
+		goto parent_err;
+	return 0;
+
+parent_err:
+	device_unregister(&vdpasim_net_mgmtdev);
+	return ret;
+}
+
+static void vdpasim_net_mgmtdev_cleanup(void)
+{
+	vdpa_mgmtdev_unregister(&mgmt_dev);
+	device_unregister(&vdpasim_net_mgmtdev);
+}
+
 static int __init vdpasim_net_init(void)
 {
 	int ret = 0;
@@ -176,6 +264,8 @@ static int __init vdpasim_net_init(void)
 
 	if (default_device)
 		ret = vdpasim_net_default_dev_register();
+	else
+		ret = vdpasim_net_mgmtdev_init();
 	return ret;
 }
 
@@ -183,6 +273,8 @@ static void __exit vdpasim_net_exit(void)
 {
 	if (default_device)
 		vdpasim_net_default_dev_unregister();
+	else
+		vdpasim_net_mgmtdev_cleanup();
 }
 
 module_init(vdpasim_net_init);
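
For readers who want the shape of the new interface without wading through the
diff, below is a condensed, standalone sketch of the same management-device
registration pattern. It is not part of the patch: it assumes the struct
vdpa_mgmt_dev / struct vdpa_mgmtdev_ops API (vdpa_mgmtdev_register(), dev_add /
dev_del callbacks) added earlier in this series, all "my_*" names are
hypothetical placeholders, and the dev_add / dev_del bodies are stubbed out
instead of creating a real simulated device.

/*
 * Illustrative sketch only -- not part of the patch.  Mirrors the pattern
 * vdpasim_net adopts above, with hypothetical "my_*" names.
 */
#include <linux/module.h>
#include <linux/device.h>
#include <linux/mod_devicetable.h>
#include <linux/virtio_ids.h>
#include <linux/vdpa.h>

static void my_mgmtdev_release(struct device *dev)
{
	/* Nothing to free; the parent device is statically allocated. */
}

/* Parent device that appears in "vdpa mgmtdev show". */
static struct device my_mgmtdev = {
	.init_name = "my_mgmtdev",
	.release = my_mgmtdev_release,
};

static int my_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
{
	/*
	 * A real driver would allocate a vdpa device here and register it
	 * with _vdpa_register_device(); this stub only reports failure.
	 */
	return -EOPNOTSUPP;
}

static void my_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *dev)
{
	/* A real driver would call _vdpa_unregister_device(dev) here. */
}

static const struct vdpa_mgmtdev_ops my_mgmtdev_ops = {
	.dev_add = my_dev_add,
	.dev_del = my_dev_del,
};

/* Device classes this management device can create ("supported_classes net"). */
static struct virtio_device_id my_id_table[] = {
	{ VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },
	{ 0 },
};

static struct vdpa_mgmt_dev my_mgmt_dev = {
	.device = &my_mgmtdev,
	.id_table = my_id_table,
	.ops = &my_mgmtdev_ops,
};

static int __init my_init(void)
{
	int ret;

	ret = device_register(&my_mgmtdev);
	if (ret)
		return ret;

	ret = vdpa_mgmtdev_register(&my_mgmt_dev);
	if (ret)
		device_unregister(&my_mgmtdev);
	return ret;
}

static void __exit my_exit(void)
{
	vdpa_mgmtdev_unregister(&my_mgmt_dev);
	device_unregister(&my_mgmtdev);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL v2");

With such a module loaded, "vdpa mgmtdev show" would be expected to list
my_mgmtdev with supported class net, and "vdpa dev add mgmtdev my_mgmtdev
name foo" / "vdpa dev del foo" would land in my_dev_add() / my_dev_del()
(the stub dev_add above simply returns -EOPNOTSUPP).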