From patchwork Mon Feb 7 17:22:02 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737761
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 01/15] PCI/IOV: Add pci_iov_vf_id() to get VF index
Date: Mon, 7 Feb 2022 19:22:02 +0200
Message-ID: <20220207172216.206415-2-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
X-Mailing-List: linux-pci@vger.kernel.org

From: Jason Gunthorpe

The PCI core uses the VF index internally, often called the vf_id, during
the setup of the VF, e.g. in pci_iov_add_virtfn().

This index is needed by device drivers that implement live migration for
the internal operations that configure/control their VFs. Specifically,
the mlx5_vfio_pci driver introduced in later patches of this series needs
the index, not the bus/device/function that is exposed today.

Add pci_iov_vf_id(), which computes the vf_id by reversing the math that
was used to create the bus/device/function.

Signed-off-by: Jason Gunthorpe
Acked-by: Bjorn Helgaas
Signed-off-by: Leon Romanovsky
Signed-off-by: Yishai Hadas
---
 drivers/pci/iov.c   | 14 ++++++++++++++
 include/linux/pci.h |  8 +++++++-
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index 0267977c9f17..2e9f3d70803a 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -33,6 +33,20 @@ int pci_iov_virtfn_devfn(struct pci_dev *dev, int vf_id)
 }
 EXPORT_SYMBOL_GPL(pci_iov_virtfn_devfn);
 
+int pci_iov_vf_id(struct pci_dev *dev)
+{
+	struct pci_dev *pf;
+
+	if (!dev->is_virtfn)
+		return -EINVAL;
+
+	pf = pci_physfn(dev);
+	return (((dev->bus->number << 8) + dev->devfn) -
+		((pf->bus->number << 8) + pf->devfn + pf->sriov->offset)) /
+	       pf->sriov->stride;
+}
+EXPORT_SYMBOL_GPL(pci_iov_vf_id);
+
 /*
  * Per SR-IOV spec sec 3.3.10 and 3.3.11, First VF Offset and VF Stride may
  * change when NumVFs changes.
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 8253a5413d7c..3d4ff7b35ad1 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2166,7 +2166,7 @@ void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar);
 #ifdef CONFIG_PCI_IOV
 int pci_iov_virtfn_bus(struct pci_dev *dev, int id);
 int pci_iov_virtfn_devfn(struct pci_dev *dev, int id);
-
+int pci_iov_vf_id(struct pci_dev *dev);
 int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
 void pci_disable_sriov(struct pci_dev *dev);
 
@@ -2194,6 +2194,12 @@ static inline int pci_iov_virtfn_devfn(struct pci_dev *dev, int id)
 {
 	return -ENOSYS;
 }
+
+static inline int pci_iov_vf_id(struct pci_dev *dev)
+{
+	return -ENOSYS;
+}
+
 static inline int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn)
 { return -ENODEV; }
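
As a usage illustration (not part of the patch), a VF-bound driver could fetch
its own index during probe roughly as below; my_vf_probe() is a hypothetical
callback and only pci_iov_vf_id() comes from the change above:

/* Hypothetical VF probe; only pci_iov_vf_id() comes from this patch. */
static int my_vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int vf_id = pci_iov_vf_id(pdev);

	if (vf_id < 0)
		return vf_id;	/* not a VF: -EINVAL */

	/* vf_id is the same 0-based index the core used in pci_iov_add_virtfn() */
	dev_info(&pdev->dev, "bound to VF index %d\n", vf_id);
	return 0;
}
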
From patchwork Mon Feb 7 17:22:03 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737751
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 02/15] net/mlx5: Reuse exported virtfn index function call
Date: Mon, 7 Feb 2022 19:22:03 +0200
Message-ID: <20220207172216.206415-3-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>

From: Leon Romanovsky

Instead of open-coding the iteration that compares the virtfn internal
index, use the newly introduced pci_iov_vf_id() call.

Signed-off-by: Leon Romanovsky
Signed-off-by: Yishai Hadas
---
 drivers/net/ethernet/mellanox/mlx5/core/sriov.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index e8185b69ac6c..24c4b4f05214 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -205,19 +205,8 @@ int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count)
 			mlx5_get_default_msix_vec_count(dev, pci_num_vf(pf));
 
 	sriov = &dev->priv.sriov;
-
-	/* Reversed translation of PCI VF function number to the internal
-	 * function_id, which exists in the name of virtfn symlink.
-	 */
-	for (id = 0; id < pci_num_vf(pf); id++) {
-		if (!sriov->vfs_ctx[id].enabled)
-			continue;
-
-		if (vf->devfn == pci_iov_virtfn_devfn(pf, id))
-			break;
-	}
-
-	if (id == pci_num_vf(pf) || !sriov->vfs_ctx[id].enabled)
+	id = pci_iov_vf_id(vf);
+	if (id < 0 || !sriov->vfs_ctx[id].enabled)
 		return -EINVAL;
 
 	return mlx5_set_msix_vec_count(dev, id + 1, msix_vec_count);
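
For orientation only: the call site above adds one to the 0-based VF index
returned by pci_iov_vf_id() when talking to the device, which suggests the
internal function id 0 is reserved for the PF. A hypothetical helper spelling
out that mapping (my_vf_to_func_id() is not in the tree) might look like:

/* Hypothetical helper: translate a VF pci_dev into the function id that
 * mlx5_set_msix_vec_count() expects. The "+ 1" mirrors the call site above.
 */
static int my_vf_to_func_id(struct pci_dev *vf)
{
	int vf_id = pci_iov_vf_id(vf);

	return vf_id < 0 ? vf_id : vf_id + 1;
}
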
From patchwork Mon Feb 7 17:22:04 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737760
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 03/15] net/mlx5: Disable SRIOV before PF removal
Date: Mon, 7 Feb 2022 19:22:04 +0200
Message-ID: <20220207172216.206415-4-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>

Virtual functions depend on the physical function for device access (for
example, firmware host PAGE management), so make sure to disable SR-IOV
once the PF is gone.

This also prevents the warning below if the PF goes away before SR-IOV is
disabled:

"driver left SR-IOV enabled after remove"

The next patch in this series relies on this when the VF may need to
safely access the PF 'driver data'.

Signed-off-by: Yishai Hadas
Signed-off-by: Leon Romanovsky
---
 drivers/net/ethernet/mellanox/mlx5/core/main.c      | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h | 1 +
 drivers/net/ethernet/mellanox/mlx5/core/sriov.c     | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 2c774f367199..5b8958186157 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1620,6 +1620,7 @@ static void remove_one(struct pci_dev *pdev)
 	struct devlink *devlink = priv_to_devlink(dev);
 
 	devlink_unregister(devlink);
+	mlx5_sriov_disable(pdev);
 	mlx5_crdump_disable(dev);
 	mlx5_drain_health_wq(dev);
 	mlx5_uninit_one(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 6f8baa0f2a73..37b2805b3bf3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -164,6 +164,7 @@ void mlx5_sriov_cleanup(struct mlx5_core_dev *dev);
 int mlx5_sriov_attach(struct mlx5_core_dev *dev);
 void mlx5_sriov_detach(struct mlx5_core_dev *dev);
 int mlx5_core_sriov_configure(struct pci_dev *dev, int num_vfs);
+void mlx5_sriov_disable(struct pci_dev *pdev);
 int mlx5_core_sriov_set_msix_vec_count(struct pci_dev *vf, int msix_vec_count);
 int mlx5_core_enable_hca(struct mlx5_core_dev *dev, u16 func_id);
 int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index 24c4b4f05214..887ee0f729d1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -161,7 +161,7 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
 	return err;
 }
 
-static void mlx5_sriov_disable(struct pci_dev *pdev)
+void mlx5_sriov_disable(struct pci_dev *pdev)
{
 	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
 	int num_vfs = pci_num_vf(dev->pdev);
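
The ordering described here is not mlx5-specific: any PF driver whose VFs
depend on PF-owned resources has to disable SR-IOV before tearing that state
down. A generic, illustrative sketch, where my_pf_remove() and my_teardown()
are hypothetical names standing in for the driver's own callbacks:

static void my_pf_remove(struct pci_dev *pdev)
{
	void *pf_state = pci_get_drvdata(pdev);

	/*
	 * Unbind and disable all VFs first: they rely on PF resources
	 * (e.g. firmware page management), and this also avoids the
	 * "driver left SR-IOV enabled after remove" warning.
	 */
	pci_disable_sriov(pdev);

	/* No VF can reach the PF state any more, so teardown is safe. */
	my_teardown(pf_state);
}
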
From patchwork Mon Feb 7 17:22:05 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737762
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 04/15] PCI/IOV: Add pci_iov_get_pf_drvdata() to allow VF reaching the drvdata of a PF
Date: Mon, 7 Feb 2022 19:22:05 +0200
Message-ID: <20220207172216.206415-5-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>

From: Jason Gunthorpe

There are some cases where a SR-IOV VF driver will need to reach into and
interact with the PF driver. This requires accessing the drvdata of the PF.

Provide a function pci_iov_get_pf_drvdata() to return this PF drvdata in a
safe way. Normally, accessing the drvdata of a foreign struct device would
be done using device_lock() to protect against device driver
probe()/remove() races.

However, due to the design of pci_enable_sriov() this will result in an
ABBA deadlock on the device_lock, as the PF's device_lock is held during PF
sriov_configure() while calling pci_enable_sriov(), which in turn holds the
VF's device_lock while calling VF probe(), and similarly for remove.

This means the VF driver can never obtain the PF's device_lock.

Instead, use the implicit locking created by pci_enable/disable_sriov(). A
VF driver can access its PF drvdata only while its own driver is attached,
and the PF driver can control access to its own drvdata based on when it
calls pci_enable/disable_sriov().

To use this API the PF driver will set up the PF drvdata in its probe()
function. pci_enable_sriov() is only called from sriov_configure(), which
cannot happen until probe() completes, ensuring no VF races with drvdata
setup.

For removal, the PF driver must call pci_disable_sriov() in its remove
function before destroying any of the drvdata. This ensures that all VF
drivers are unbound before returning, fencing concurrent access to the
drvdata.

The introduction of a new function to do this access makes the special
locking scheme clear and documents the requirements on the PF/VF drivers
using it.

Signed-off-by: Jason Gunthorpe
Acked-by: Bjorn Helgaas
Signed-off-by: Leon Romanovsky
Signed-off-by: Yishai Hadas
---
 drivers/pci/iov.c   | 29 +++++++++++++++++++++++++++++
 include/linux/pci.h |  7 +++++++
 2 files changed, 36 insertions(+)

diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index 2e9f3d70803a..28ec952e1221 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -47,6 +47,35 @@ int pci_iov_vf_id(struct pci_dev *dev)
 }
 EXPORT_SYMBOL_GPL(pci_iov_vf_id);
 
+/**
+ * pci_iov_get_pf_drvdata - Return the drvdata of a PF
+ * @dev - VF pci_dev
+ * @pf_driver - Device driver required to own the PF
+ *
+ * This must be called from a context that ensures that a VF driver is attached.
+ * The value returned is invalid once the VF driver completes its remove()
+ * callback.
+ *
+ * Locking is achieved by the driver core. A VF driver cannot be probed until
+ * pci_enable_sriov() is called and pci_disable_sriov() does not return until
+ * all VF drivers have completed their remove().
+ *
+ * The PF driver must call pci_disable_sriov() before it begins to destroy the
+ * drvdata.
+ */
+void *pci_iov_get_pf_drvdata(struct pci_dev *dev, struct pci_driver *pf_driver)
+{
+	struct pci_dev *pf_dev;
+
+	if (!dev->is_virtfn)
+		return ERR_PTR(-EINVAL);
+	pf_dev = dev->physfn;
+	if (pf_dev->driver != pf_driver)
+		return ERR_PTR(-EINVAL);
+	return pci_get_drvdata(pf_dev);
+}
+EXPORT_SYMBOL_GPL(pci_iov_get_pf_drvdata);
+
 /*
  * Per SR-IOV spec sec 3.3.10 and 3.3.11, First VF Offset and VF Stride may
  * change when NumVFs changes.
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 3d4ff7b35ad1..60d423d8f0c4 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2167,6 +2167,7 @@ void __iomem *pci_ioremap_wc_bar(struct pci_dev *pdev, int bar);
 int pci_iov_virtfn_bus(struct pci_dev *dev, int id);
 int pci_iov_virtfn_devfn(struct pci_dev *dev, int id);
 int pci_iov_vf_id(struct pci_dev *dev);
+void *pci_iov_get_pf_drvdata(struct pci_dev *dev, struct pci_driver *pf_driver);
 int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn);
 void pci_disable_sriov(struct pci_dev *dev);
 
@@ -2200,6 +2201,12 @@ static inline int pci_iov_vf_id(struct pci_dev *dev)
 	return -ENOSYS;
 }
 
+static inline void *pci_iov_get_pf_drvdata(struct pci_dev *dev,
+					   struct pci_driver *pf_driver)
+{
+	return ERR_PTR(-EINVAL);
+}
+
 static inline int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn)
 { return -ENODEV; }
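
To make the locking contract concrete, here is an illustrative sketch of the
PF/VF pairing the message describes. Everything prefixed my_ is hypothetical;
only pci_iov_get_pf_drvdata() is introduced by this patch, and includes and
error handling are trimmed:

struct my_pf_state { /* whatever the PF driver keeps in drvdata */ };
static struct pci_driver my_pf_driver;	/* the PF driver's pci_driver */

static int my_pf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct my_pf_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	if (!state)
		return -ENOMEM;
	pci_set_drvdata(pdev, state);	/* set before SR-IOV can be enabled */
	return 0;
}

static void my_pf_remove(struct pci_dev *pdev)
{
	pci_disable_sriov(pdev);	/* unbinds/fences all VF drivers */
	kfree(pci_get_drvdata(pdev));	/* only now is it safe to free */
}

/* VF side: legal only while the VF driver itself is bound to the VF. */
static struct my_pf_state *my_vf_pf_state(struct pci_dev *vf_pdev)
{
	struct my_pf_state *state =
		pci_iov_get_pf_drvdata(vf_pdev, &my_pf_driver);

	return IS_ERR(state) ? NULL : state;
}
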
From patchwork Mon Feb 7 17:22:06 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737752
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 05/15] net/mlx5: Expose APIs to get/put the mlx5 core device
Date: Mon, 7 Feb 2022 19:22:06 +0200
Message-ID: <20220207172216.206415-6-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>

Expose an API to get the mlx5 core device from a given VF PCI device,
provided mlx5_core is its driver.

The get API returns with the intf_state_mutex locked, which makes sure the
device cannot be unloaded or go away until the caller has finished its work
with it; this is expected to be a short window for any flow in which the
lock is taken. The put API unlocks the intf_state_mutex.

The use case for these APIs is the migration flow of a VF over VFIO PCI. In
that case the VF doesn't ride on mlx5_core, because the device is driving
*two* different PCI devices: the PF owned by mlx5_core and the VF owned by
the vfio driver. The PF's mlx5_core is accessed only during the narrow
window of the VF's ioctl that requires its services. This allows the PF
driver to be more independent of the VF driver, so long as it doesn't reset
the FW.

Signed-off-by: Yishai Hadas
Signed-off-by: Leon Romanovsky
---
 .../net/ethernet/mellanox/mlx5/core/main.c | 44 +++++++++++++++++++
 include/linux/mlx5/driver.h                |  3 ++
 2 files changed, 47 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 5b8958186157..e9aeba4267ff 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1881,6 +1881,50 @@ static struct pci_driver mlx5_core_driver = {
 	.sriov_set_msix_vec_count = mlx5_core_sriov_set_msix_vec_count,
 };
 
+/**
+ * mlx5_vf_get_core_dev - Get the mlx5 core device from a given VF PCI device if
+ *			  mlx5_core is its driver.
+ * @pdev: The associated PCI device.
+ *
+ * Upon return the interface state lock stay held to let caller uses it safely.
+ * Caller must ensure to use the returned mlx5 device for a narrow window
+ * and put it back with mlx5_vf_put_core_dev() immediately once usage was over.
+ *
+ * Return: Pointer to the associated mlx5_core_dev or NULL.
+ */
+struct mlx5_core_dev *mlx5_vf_get_core_dev(struct pci_dev *pdev)
+			__acquires(&mdev->intf_state_mutex)
+{
+	struct mlx5_core_dev *mdev;
+
+	mdev = pci_iov_get_pf_drvdata(pdev, &mlx5_core_driver);
+	if (IS_ERR(mdev))
+		return NULL;
+
+	mutex_lock(&mdev->intf_state_mutex);
+	if (!test_bit(MLX5_INTERFACE_STATE_UP, &mdev->intf_state)) {
+		mutex_unlock(&mdev->intf_state_mutex);
+		return NULL;
+	}
+
+	return mdev;
+}
+EXPORT_SYMBOL(mlx5_vf_get_core_dev);
+
+/**
+ * mlx5_vf_put_core_dev - Put the mlx5 core device back.
+ * @mdev: The mlx5 core device.
+ *
+ * Upon return the interface state lock is unlocked and caller should not
+ * access the mdev any more.
+ */
+void mlx5_vf_put_core_dev(struct mlx5_core_dev *mdev)
+			__releases(&mdev->intf_state_mutex)
+{
+	mutex_unlock(&mdev->intf_state_mutex);
+}
+EXPORT_SYMBOL(mlx5_vf_put_core_dev);
+
 static void mlx5_core_verify_params(void)
 {
 	if (prof_sel >= ARRAY_SIZE(profile)) {
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 78655d8d13a7..319322a8ff94 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1143,6 +1143,9 @@ int mlx5_dm_sw_icm_alloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
 int mlx5_dm_sw_icm_dealloc(struct mlx5_core_dev *dev, enum mlx5_sw_icm_type type,
 			   u64 length, u16 uid, phys_addr_t addr, u32 obj_id);
 
+struct mlx5_core_dev *mlx5_vf_get_core_dev(struct pci_dev *pdev);
+void mlx5_vf_put_core_dev(struct mlx5_core_dev *mdev);
+
 #ifdef CONFIG_MLX5_CORE_IPOIB
 struct net_device *mlx5_rdma_netdev_alloc(struct mlx5_core_dev *mdev,
 					  struct ib_device *ibdev,
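
The expected usage from the VF side (for example, inside a VFIO ioctl handler)
is a short get/use/put window; an illustrative sketch, with my_vf_do_cmd()
hypothetical and the actual command elided:

static int my_vf_do_cmd(struct pci_dev *vf_pdev)
{
	struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(vf_pdev);
	int err = 0;

	if (!mdev)
		return -ENOTCONN;	/* PF not bound to mlx5_core or not UP */

	/* intf_state_mutex is held here; keep this window short. */
	/* ... issue the needed command(s) through mdev ... */

	mlx5_vf_put_core_dev(mdev);
	return err;
}
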
From patchwork Mon Feb 7 17:22:07 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737748
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 06/15] net/mlx5: Introduce migration bits and structures
Date: Mon, 7 Feb 2022 19:22:07 +0200
Message-ID: <20220207172216.206415-7-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>

Introduce the migration-related IFC bits and structures needed to issue the
migration commands.
Signed-off-by: Yishai Hadas Signed-off-by: Leon Romanovsky --- include/linux/mlx5/mlx5_ifc.h | 147 +++++++++++++++++++++++++++++++++- 1 file changed, 146 insertions(+), 1 deletion(-) diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 598ac3bcc901..45891a75c5ca 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -127,6 +127,11 @@ enum { MLX5_CMD_OP_QUERY_SF_PARTITION = 0x111, MLX5_CMD_OP_ALLOC_SF = 0x113, MLX5_CMD_OP_DEALLOC_SF = 0x114, + MLX5_CMD_OP_SUSPEND_VHCA = 0x115, + MLX5_CMD_OP_RESUME_VHCA = 0x116, + MLX5_CMD_OP_QUERY_VHCA_MIGRATION_STATE = 0x117, + MLX5_CMD_OP_SAVE_VHCA_STATE = 0x118, + MLX5_CMD_OP_LOAD_VHCA_STATE = 0x119, MLX5_CMD_OP_CREATE_MKEY = 0x200, MLX5_CMD_OP_QUERY_MKEY = 0x201, MLX5_CMD_OP_DESTROY_MKEY = 0x202, @@ -1757,7 +1762,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_682[0x1]; u8 log_max_sf[0x5]; u8 apu[0x1]; - u8 reserved_at_689[0x7]; + u8 reserved_at_689[0x4]; + u8 migration[0x1]; + u8 reserved_at_68e[0x2]; u8 log_min_sf_size[0x8]; u8 max_num_sf_partitions[0x8]; @@ -11519,4 +11526,142 @@ enum { MLX5_MTT_PERM_RW = MLX5_MTT_PERM_READ | MLX5_MTT_PERM_WRITE, }; +enum { + MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_MASTER = 0x0, + MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_SLAVE = 0x1, +}; + +struct mlx5_ifc_suspend_vhca_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vhca_id[0x10]; + + u8 reserved_at_60[0x20]; +}; + +struct mlx5_ifc_suspend_vhca_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x40]; +}; + +enum { + MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_SLAVE = 0x0, + MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_MASTER = 0x1, +}; + +struct mlx5_ifc_resume_vhca_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vhca_id[0x10]; + + u8 reserved_at_60[0x20]; +}; + +struct mlx5_ifc_resume_vhca_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_query_vhca_migration_state_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vhca_id[0x10]; + + u8 reserved_at_60[0x20]; +}; + +struct mlx5_ifc_query_vhca_migration_state_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x40]; + + u8 required_umem_size[0x20]; + + u8 reserved_at_a0[0x160]; +}; + +struct mlx5_ifc_save_vhca_state_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vhca_id[0x10]; + + u8 reserved_at_60[0x20]; + + u8 va[0x40]; + + u8 mkey[0x20]; + + u8 size[0x20]; +}; + +struct mlx5_ifc_save_vhca_state_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 actual_image_size[0x20]; + + u8 reserved_at_60[0x20]; +}; + +struct mlx5_ifc_load_vhca_state_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vhca_id[0x10]; + + u8 reserved_at_60[0x20]; + + u8 va[0x40]; + + u8 mkey[0x20]; + + u8 size[0x20]; +}; + +struct mlx5_ifc_load_vhca_state_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x40]; +}; + #endif /* MLX5_IFC_H */ From patchwork Mon Feb 7 17:22:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737749
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 07/15] vfio: Have the core code decode the VFIO_DEVICE_FEATURE ioctl
Date: Mon, 7 Feb 2022 19:22:08 +0200
Message-ID: <20220207172216.206415-8-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
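[Editor's note] The patch below moves decoding of VFIO_DEVICE_FEATURE into the
VFIO core and gives drivers a vfio_check_feature() helper. As a rough
illustration of the calling convention a driver-side handler ends up with
(the feature number and payload here are hypothetical, not part of the patch):

#include <linux/uaccess.h>
#include <linux/vfio.h>

/* Hypothetical SET-only feature carrying a single u32 payload. */
static int example_feature_set_u32(struct vfio_device *device, u32 flags,
				   void __user *arg, size_t argsz)
{
	u32 val;
	int ret;

	/* Only SET is supported; require at least sizeof(val) of payload. */
	ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET,
				 sizeof(val));
	if (ret != 1)
		return ret;	/* 0 for a successful PROBE, -errno on bad input */

	if (copy_from_user(&val, arg, sizeof(val)))
		return -EFAULT;

	/* ... apply 'val' to the device ... */
	return 0;
}

/* Wired up through the new 'device_feature' op: */
static int example_ioctl_feature(struct vfio_device *device, u32 flags,
				 void __user *arg, size_t argsz)
{
	switch (flags & VFIO_DEVICE_FEATURE_MASK) {
	case 42: /* hypothetical feature number */
		return example_feature_set_u32(device, flags, arg, argsz);
	default:
		return -ENOTTY;
	}
}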
X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT046.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY4PR1201MB2485 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org From: Jason Gunthorpe Invoke a new device op 'device_feature' to handle just the data array portion of the command. This lifts the ioctl validation to the core code and makes it simpler for either the core code, or layered drivers, to implement their own feature values. Provide vfio_check_feature() to consolidate checking the flags/etc against what the driver supports. Signed-off-by: Jason Gunthorpe Signed-off-by: Yishai Hadas --- drivers/vfio/pci/vfio_pci.c | 1 + drivers/vfio/pci/vfio_pci_core.c | 94 +++++++++++++------------------- drivers/vfio/vfio.c | 46 ++++++++++++++-- include/linux/vfio.h | 32 +++++++++++ include/linux/vfio_pci_core.h | 2 + 5 files changed, 114 insertions(+), 61 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index a5ce92beb655..2b047469e02f 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -130,6 +130,7 @@ static const struct vfio_device_ops vfio_pci_ops = { .open_device = vfio_pci_open_device, .close_device = vfio_pci_core_close_device, .ioctl = vfio_pci_core_ioctl, + .device_feature = vfio_pci_core_ioctl_feature, .read = vfio_pci_core_read, .write = vfio_pci_core_write, .mmap = vfio_pci_core_mmap, diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index f948e6cd2993..106e1970d653 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -1114,70 +1114,50 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd, return vfio_pci_ioeventfd(vdev, ioeventfd.offset, ioeventfd.data, count, ioeventfd.fd); - } else if (cmd == VFIO_DEVICE_FEATURE) { - struct vfio_device_feature feature; - uuid_t uuid; - - minsz = offsetofend(struct vfio_device_feature, flags); - - if (copy_from_user(&feature, (void __user *)arg, minsz)) - return -EFAULT; - - if (feature.argsz < minsz) - return -EINVAL; - - /* Check unknown flags */ - if (feature.flags & ~(VFIO_DEVICE_FEATURE_MASK | - VFIO_DEVICE_FEATURE_SET | - VFIO_DEVICE_FEATURE_GET | - VFIO_DEVICE_FEATURE_PROBE)) - return -EINVAL; - - /* GET & SET are mutually exclusive except with PROBE */ - if (!(feature.flags & VFIO_DEVICE_FEATURE_PROBE) && - (feature.flags & VFIO_DEVICE_FEATURE_SET) && - (feature.flags & VFIO_DEVICE_FEATURE_GET)) - return -EINVAL; - - switch (feature.flags & VFIO_DEVICE_FEATURE_MASK) { - case VFIO_DEVICE_FEATURE_PCI_VF_TOKEN: - if (!vdev->vf_token) - return -ENOTTY; - - /* - * We do not support GET of the VF Token UUID as this - * could expose the token of the previous device user. 
- */ - if (feature.flags & VFIO_DEVICE_FEATURE_GET) - return -EINVAL; - - if (feature.flags & VFIO_DEVICE_FEATURE_PROBE) - return 0; + } + return -ENOTTY; +} +EXPORT_SYMBOL_GPL(vfio_pci_core_ioctl); - /* Don't SET unless told to do so */ - if (!(feature.flags & VFIO_DEVICE_FEATURE_SET)) - return -EINVAL; +static int vfio_pci_core_feature_token(struct vfio_device *device, u32 flags, + void __user *arg, size_t argsz) +{ + struct vfio_pci_core_device *vdev = + container_of(device, struct vfio_pci_core_device, vdev); + uuid_t uuid; + int ret; - if (feature.argsz < minsz + sizeof(uuid)) - return -EINVAL; + if (!vdev->vf_token) + return -ENOTTY; + /* + * We do not support GET of the VF Token UUID as this could + * expose the token of the previous device user. + */ + ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_SET, + sizeof(uuid)); + if (ret != 1) + return ret; - if (copy_from_user(&uuid, (void __user *)(arg + minsz), - sizeof(uuid))) - return -EFAULT; + if (copy_from_user(&uuid, arg, sizeof(uuid))) + return -EFAULT; - mutex_lock(&vdev->vf_token->lock); - uuid_copy(&vdev->vf_token->uuid, &uuid); - mutex_unlock(&vdev->vf_token->lock); + mutex_lock(&vdev->vf_token->lock); + uuid_copy(&vdev->vf_token->uuid, &uuid); + mutex_unlock(&vdev->vf_token->lock); + return 0; +} - return 0; - default: - return -ENOTTY; - } +int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags, + void __user *arg, size_t argsz) +{ + switch (flags & VFIO_DEVICE_FEATURE_MASK) { + case VFIO_DEVICE_FEATURE_PCI_VF_TOKEN: + return vfio_pci_core_feature_token(device, flags, arg, argsz); + default: + return -ENOTTY; } - - return -ENOTTY; } -EXPORT_SYMBOL_GPL(vfio_pci_core_ioctl); +EXPORT_SYMBOL_GPL(vfio_pci_core_ioctl_feature); static ssize_t vfio_pci_rw(struct vfio_pci_core_device *vdev, char __user *buf, size_t count, loff_t *ppos, bool iswrite) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index 735d1d344af9..71763e2ac561 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -1557,15 +1557,53 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep) return 0; } +static int vfio_ioctl_device_feature(struct vfio_device *device, + struct vfio_device_feature __user *arg) +{ + size_t minsz = offsetofend(struct vfio_device_feature, flags); + struct vfio_device_feature feature; + + if (copy_from_user(&feature, arg, minsz)) + return -EFAULT; + + if (feature.argsz < minsz) + return -EINVAL; + + /* Check unknown flags */ + if (feature.flags & + ~(VFIO_DEVICE_FEATURE_MASK | VFIO_DEVICE_FEATURE_SET | + VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_PROBE)) + return -EINVAL; + + /* GET & SET are mutually exclusive except with PROBE */ + if (!(feature.flags & VFIO_DEVICE_FEATURE_PROBE) && + (feature.flags & VFIO_DEVICE_FEATURE_SET) && + (feature.flags & VFIO_DEVICE_FEATURE_GET)) + return -EINVAL; + + switch (feature.flags & VFIO_DEVICE_FEATURE_MASK) { + default: + if (unlikely(!device->ops->device_feature)) + return -EINVAL; + return device->ops->device_feature(device, feature.flags, + arg->data, + feature.argsz - minsz); + } +} + static long vfio_device_fops_unl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg) { struct vfio_device *device = filep->private_data; - if (unlikely(!device->ops->ioctl)) - return -EINVAL; - - return device->ops->ioctl(device, cmd, arg); + switch (cmd) { + case VFIO_DEVICE_FEATURE: + return vfio_ioctl_device_feature(device, (void __user *)arg); + default: + if (unlikely(!device->ops->ioctl)) + return -EINVAL; + return 
device->ops->ioctl(device, cmd, arg); + } } static ssize_t vfio_device_fops_read(struct file *filep, char __user *buf, diff --git a/include/linux/vfio.h b/include/linux/vfio.h index 76191d7abed1..ca69516f869d 100644 --- a/include/linux/vfio.h +++ b/include/linux/vfio.h @@ -55,6 +55,7 @@ struct vfio_device { * @match: Optional device name match callback (return: 0 for no-match, >0 for * match, -errno for abort (ex. match with insufficient or incorrect * additional args) + * @device_feature: Fill in the VFIO_DEVICE_FEATURE ioctl */ struct vfio_device_ops { char *name; @@ -69,8 +70,39 @@ struct vfio_device_ops { int (*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma); void (*request)(struct vfio_device *vdev, unsigned int count); int (*match)(struct vfio_device *vdev, char *buf); + int (*device_feature)(struct vfio_device *device, u32 flags, + void __user *arg, size_t argsz); }; +/** + * vfio_check_feature - Validate user input for the VFIO_DEVICE_FEATURE ioctl + * @flags: Arg from the device_feature op + * @argsz: Arg from the device_feature op + * @supported_ops: Combination of VFIO_DEVICE_FEATURE_GET and SET the driver + * supports + * @minsz: Minimum data size the driver accepts + * + * For use in a driver's device_feature op. Checks that the inputs to the + * VFIO_DEVICE_FEATURE ioctl are correct for the driver's feature. Returns 1 if + * the driver should execute the get or set, otherwise the relevant + * value should be returned. + */ +static inline int vfio_check_feature(u32 flags, size_t argsz, u32 supported_ops, + size_t minsz) +{ + if ((flags & (VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_SET)) & + ~supported_ops) + return -EINVAL; + if (flags & VFIO_DEVICE_FEATURE_PROBE) + return 0; + /* Without PROBE one of GET or SET must be requested */ + if (!(flags & (VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_SET))) + return -EINVAL; + if (argsz < minsz) + return -EINVAL; + return 1; +} + void vfio_init_group_dev(struct vfio_device *device, struct device *dev, const struct vfio_device_ops *ops); void vfio_uninit_group_dev(struct vfio_device *device); diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index ef9a44b6cf5d..beba0b2ed87d 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -220,6 +220,8 @@ int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn); extern const struct pci_error_handlers vfio_pci_core_err_handlers; long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd, unsigned long arg); +int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags, + void __user *arg, size_t argsz); ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf, size_t count, loff_t *ppos); ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *buf, From patchwork Mon Feb 7 17:22:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 12737763 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA449C4167D for ; Mon, 7 Feb 2022 17:33:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1376264AbiBGRbN (ORCPT ); Mon, 7 Feb 2022 12:31:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45184 "EHLO lindbergh.monkeyblade.net" 
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 08/15] vfio: Define device migration protocol v2
Date: Mon, 7 Feb 2022 19:22:09 +0200
Message-ID: <20220207172216.206415-9-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
X-Mailing-List: linux-pci@vger.kernel.org

From: Jason Gunthorpe

Replace the existing region based migration protocol with an ioctl based
protocol.
The two protocols have the same general semantic behaviors, but the way the
data is transported is changed. This is the STOP_COPY portion of the new
protocol; it defines the 5 states for basic stop-and-copy migration and the
protocol to move the migration data in/out of the kernel.

Compared to the clarification of the v1 protocol Alex proposed:
https://lore.kernel.org/r/163909282574.728533.7460416142511440919.stgit@omen

this has a few deliberate functional differences:

 - ERROR arcs allow the device function to remain unchanged.

 - The protocol is not required to return to the original state on transition
   failure. Instead userspace can execute an unwind back to the original
   state, reset, or do something else without needing kernel support. This
   simplifies the kernel design and, should userspace choose a policy like
   always reset, avoids doing useless work in the kernel on error handling
   paths.

 - PRE_COPY is made optional; userspace must discover it before using it.
   This reflects the fact that the majority of drivers we are aware of right
   now will not implement PRE_COPY.

 - Segmentation is not part of the data stream protocol; the receiver does
   not have to reproduce the framing boundaries.

The hybrid FSM for the device_state is described as a Mealy machine by
documenting each of the arcs the driver is required to implement. Defining the
remaining set of old/new device_state transitions as 'combination
transitions', which are naturally defined as taking multiple FSM arcs along
the shortest path within the FSM's digraph, allows a complete matrix of
transitions.

A new VFIO_DEVICE_FEATURE of VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE is defined
to replace writing to the device_state field in the region. This allows
returning a brand new FD whenever the requested transition opens a data
transfer session.

The VFIO core code implements the new feature and provides a helper function
to the driver. Using the helper the driver only has to implement 6 of the FSM
arcs and the other combination transitions are elaborated consistently from
those arcs.

A new VFIO_DEVICE_FEATURE of VFIO_DEVICE_FEATURE_MIGRATION is defined to
report the capability for migration and indicate which set of states and arcs
are supported by the device. The FSM provides a lot of flexibility to make
backwards compatible extensions, but the VFIO_DEVICE_FEATURE also allows for
future breaking extensions for scenarios that cannot support even the basic
STOP_COPY requirements.

VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE with the GET option (i.e.
VFIO_DEVICE_FEATURE_GET) can be used to read the current migration state of
the VFIO device.

Data transfer sessions are now carried over a file descriptor, instead of the
region. The FD functions for the lifetime of the data transfer session.
read() and write() transfer the data with normal Linux stream FD semantics.
This design allows future expansion to support poll(), io_uring, and other
performance optimizations.

The complicated mmap mode for data transfer is discarded as current qemu
doesn't take meaningful advantage of it, and the new qemu implementation
avoids substantially all the performance penalty of using a read() on the
region.
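[Editor's note] To make the flow concrete, here is a minimal userspace sketch
of the save side (STOP_COPY) under the protocol described above. It assumes an
already-open VFIO device FD; the helper names are hypothetical and this is
only an illustration of the ioctl layout and the stream-FD semantics, not
QEMU's actual implementation.

#include <errno.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Request an FSM arc; *data_fd is -1 unless the arc opened a data session. */
static int example_set_mig_state(int device_fd, __u32 new_state, int *data_fd)
{
	__u8 buf[sizeof(struct vfio_device_feature) +
		 sizeof(struct vfio_device_feature_mig_state)]
		__attribute__((aligned(8))) = {};
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_mig_state *mig = (void *)feature->data;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_SET |
			 VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
	mig->device_state = new_state;

	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature))
		return -errno;
	if (data_fd)
		*data_fd = mig->data_fd;
	return 0;
}

/* Save side: STOP the device, enter STOP_COPY, then drain the stream FD. */
static int example_save(int device_fd, int out_fd)
{
	char chunk[65536];
	ssize_t len;
	int data_fd;

	if (example_set_mig_state(device_fd, VFIO_DEVICE_STATE_STOP, NULL))
		return -1;
	if (example_set_mig_state(device_fd, VFIO_DEVICE_STATE_STOP_COPY,
				  &data_fd) || data_fd < 0)
		return -1;

	/* End of stream (read() == 0) means the whole device state was read. */
	while ((len = read(data_fd, chunk, sizeof(chunk))) > 0) {
		if (write(out_fd, chunk, len) != len) {
			close(data_fd);
			return -1;
		}
	}
	close(data_fd);
	return len == 0 ? 0 : -1;
}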
Signed-off-by: Jason Gunthorpe Signed-off-by: Yishai Hadas --- drivers/vfio/vfio.c | 198 ++++++++++++++++++++++++++++++++++++++ include/linux/vfio.h | 17 ++++ include/uapi/linux/vfio.h | 172 ++++++++++++++++++++++++++++++--- 3 files changed, 374 insertions(+), 13 deletions(-) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index 71763e2ac561..e7ab9f2048cd 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -1557,6 +1557,196 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep) return 0; } +/* + * vfio_mig_get_next_state - Compute the next step in the FSM + * @cur_fsm - The current state the device is in + * @new_fsm - The target state to reach + * @next_fsm - Pointer to the next step to get to new_fsm + * + * Return 0 upon success, otherwise -errno + * Upon success the next step in the state progression between cur_fsm and + * new_fsm will be set in next_fsm. + * + * This breaks down requests for combination transitions into smaller steps and + * returns the next step to get to new_fsm. The function may need to be called + * multiple times before reaching new_fsm. + * + */ +int vfio_mig_get_next_state(struct vfio_device *device, + enum vfio_device_mig_state cur_fsm, + enum vfio_device_mig_state new_fsm, + enum vfio_device_mig_state *next_fsm) +{ + enum { VFIO_DEVICE_NUM_STATES = VFIO_DEVICE_STATE_RESUMING + 1 }; + /* + * The coding in this table requires the driver to implement 6 + * FSM arcs: + * RESUMING -> STOP + * RUNNING -> STOP + * STOP -> RESUMING + * STOP -> RUNNING + * STOP -> STOP_COPY + * STOP_COPY -> STOP + * + * The coding will step through multiple states for these combination + * transitions: + * RESUMING -> STOP -> RUNNING + * RESUMING -> STOP -> STOP_COPY + * RUNNING -> STOP -> RESUMING + * RUNNING -> STOP -> STOP_COPY + * STOP_COPY -> STOP -> RESUMING + * STOP_COPY -> STOP -> RUNNING + */ + static const u8 vfio_from_fsm_table[VFIO_DEVICE_NUM_STATES][VFIO_DEVICE_NUM_STATES] = { + [VFIO_DEVICE_STATE_STOP] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RESUMING, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, + [VFIO_DEVICE_STATE_RUNNING] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, + [VFIO_DEVICE_STATE_STOP_COPY] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, + [VFIO_DEVICE_STATE_RESUMING] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RESUMING, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, + [VFIO_DEVICE_STATE_ERROR] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_ERROR] 
= VFIO_DEVICE_STATE_ERROR, + }, + }; + + if (WARN_ON(cur_fsm >= ARRAY_SIZE(vfio_from_fsm_table))) + return -EINVAL; + + if (new_fsm >= ARRAY_SIZE(vfio_from_fsm_table)) + return -EINVAL; + + *next_fsm = vfio_from_fsm_table[cur_fsm][new_fsm]; + return (*next_fsm != VFIO_DEVICE_STATE_ERROR) ? 0 : -EINVAL; +} +EXPORT_SYMBOL_GPL(vfio_mig_get_next_state); + +/* + * Convert the drivers's struct file into a FD number and return it to userspace + */ +static int vfio_ioct_mig_return_fd(struct file *filp, void __user *arg, + struct vfio_device_feature_mig_state *mig) +{ + int ret; + int fd; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) { + ret = fd; + goto out_fput; + } + + mig->data_fd = fd; + if (copy_to_user(arg, mig, sizeof(*mig))) { + ret = -EFAULT; + goto out_put_unused; + } + fd_install(fd, filp); + return 0; + +out_put_unused: + put_unused_fd(fd); +out_fput: + fput(filp); + return ret; +} + +static int +vfio_ioctl_device_feature_mig_device_state(struct vfio_device *device, + u32 flags, void __user *arg, + size_t argsz) +{ + size_t minsz = + offsetofend(struct vfio_device_feature_mig_state, data_fd); + struct vfio_device_feature_mig_state mig; + struct file *filp = NULL; + int ret; + + if (!device->ops->migration_set_state || + !device->ops->migration_get_state) + return -ENOTTY; + + ret = vfio_check_feature(flags, argsz, + VFIO_DEVICE_FEATURE_SET | + VFIO_DEVICE_FEATURE_GET, + sizeof(mig)); + if (ret != 1) + return ret; + + if (copy_from_user(&mig, arg, minsz)) + return -EFAULT; + + if (flags & VFIO_DEVICE_FEATURE_GET) { + enum vfio_device_mig_state curr_state; + + ret = device->ops->migration_get_state(device, &curr_state); + if (ret) + return ret; + mig.device_state = curr_state; + goto out_copy; + } + + /* Handle the VFIO_DEVICE_FEATURE_SET */ + filp = device->ops->migration_set_state(device, mig.device_state); + if (IS_ERR(filp) || !filp) + goto out_copy; + + return vfio_ioct_mig_return_fd(filp, arg, &mig); +out_copy: + mig.data_fd = -1; + if (copy_to_user(arg, &mig, sizeof(mig))) + return -EFAULT; + if (IS_ERR(filp)) + return PTR_ERR(filp); + return 0; +} + +static int vfio_ioctl_device_feature_migration(struct vfio_device *device, + u32 flags, void __user *arg, + size_t argsz) +{ + struct vfio_device_feature_migration mig = { + .flags = VFIO_MIGRATION_STOP_COPY, + }; + int ret; + + if (!device->ops->migration_set_state) + return -ENOTTY; + + ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_GET, + sizeof(mig)); + if (ret != 1) + return ret; + if (copy_to_user(arg, &mig, sizeof(mig))) + return -EFAULT; + return 0; +} + static int vfio_ioctl_device_feature(struct vfio_device *device, struct vfio_device_feature __user *arg) { @@ -1582,6 +1772,14 @@ static int vfio_ioctl_device_feature(struct vfio_device *device, return -EINVAL; switch (feature.flags & VFIO_DEVICE_FEATURE_MASK) { + case VFIO_DEVICE_FEATURE_MIGRATION: + return vfio_ioctl_device_feature_migration( + device, feature.flags, arg->data, + feature.argsz - minsz); + case VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE: + return vfio_ioctl_device_feature_mig_device_state( + device, feature.flags, arg->data, + feature.argsz - minsz); default: if (unlikely(!device->ops->device_feature)) return -EINVAL; diff --git a/include/linux/vfio.h b/include/linux/vfio.h index ca69516f869d..3f4a1a7c2277 100644 --- a/include/linux/vfio.h +++ b/include/linux/vfio.h @@ -56,6 +56,13 @@ struct vfio_device { * match, -errno for abort (ex. 
match with insufficient or incorrect * additional args) * @device_feature: Fill in the VFIO_DEVICE_FEATURE ioctl + * @migration_set_state: Optional callback to change the migration state for + * devices that support migration. The returned FD is used for data + * transfer according to the FSM definition. The driver is responsible + * to ensure that FD is isolated whenever the migration FSM leaves a + * data transfer state or before close_device() returns. + @migration_get_state: Optional callback to get the migration state for + * devices that support migration. */ struct vfio_device_ops { char *name; @@ -72,6 +79,11 @@ struct vfio_device_ops { int (*match)(struct vfio_device *vdev, char *buf); int (*device_feature)(struct vfio_device *device, u32 flags, void __user *arg, size_t argsz); + struct file *(*migration_set_state)( + struct vfio_device *device, + enum vfio_device_mig_state new_state); + int (*migration_get_state)(struct vfio_device *device, + enum vfio_device_mig_state *curr_state); }; /** @@ -114,6 +126,11 @@ extern void vfio_device_put(struct vfio_device *device); int vfio_assign_device_set(struct vfio_device *device, void *set_id); +int vfio_mig_get_next_state(struct vfio_device *device, + enum vfio_device_mig_state cur_fsm, + enum vfio_device_mig_state new_fsm, + enum vfio_device_mig_state *next_fsm); + /* * External user API */ diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index ef33ea002b0b..89012bc01663 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -605,25 +605,25 @@ struct vfio_region_gfx_edid { struct vfio_device_migration_info { __u32 device_state; /* VFIO device state */ -#define VFIO_DEVICE_STATE_STOP (0) -#define VFIO_DEVICE_STATE_RUNNING (1 << 0) -#define VFIO_DEVICE_STATE_SAVING (1 << 1) -#define VFIO_DEVICE_STATE_RESUMING (1 << 2) -#define VFIO_DEVICE_STATE_MASK (VFIO_DEVICE_STATE_RUNNING | \ - VFIO_DEVICE_STATE_SAVING | \ - VFIO_DEVICE_STATE_RESUMING) +#define VFIO_DEVICE_STATE_V1_STOP (0) +#define VFIO_DEVICE_STATE_V1_RUNNING (1 << 0) +#define VFIO_DEVICE_STATE_V1_SAVING (1 << 1) +#define VFIO_DEVICE_STATE_V1_RESUMING (1 << 2) +#define VFIO_DEVICE_STATE_MASK (VFIO_DEVICE_STATE_V1_RUNNING | \ + VFIO_DEVICE_STATE_V1_SAVING | \ + VFIO_DEVICE_STATE_V1_RESUMING) #define VFIO_DEVICE_STATE_VALID(state) \ - (state & VFIO_DEVICE_STATE_RESUMING ? \ - (state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_RESUMING : 1) + (state & VFIO_DEVICE_STATE_V1_RESUMING ? \ + (state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_V1_RESUMING : 1) #define VFIO_DEVICE_STATE_IS_ERROR(state) \ - ((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_SAVING | \ - VFIO_DEVICE_STATE_RESUMING)) + ((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_V1_SAVING | \ + VFIO_DEVICE_STATE_V1_RESUMING)) #define VFIO_DEVICE_STATE_SET_ERROR(state) \ - ((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_SATE_SAVING | \ - VFIO_DEVICE_STATE_RESUMING) + ((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_V1_SAVING | \ + VFIO_DEVICE_STATE_V1_RESUMING) __u32 reserved; __u64 pending_bytes; @@ -1002,6 +1002,152 @@ struct vfio_device_feature { */ #define VFIO_DEVICE_FEATURE_PCI_VF_TOKEN (0) +/* + * Indicates the device can support the migration API. See enum + * vfio_device_mig_state for details. If present flags must be non-zero and + * VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE is supported. + * + * VFIO_MIGRATION_STOP_COPY means that RUNNING, STOP, STOP_COPY and + * RESUMING are supported. 
+ */ +struct vfio_device_feature_migration { + __aligned_u64 flags; +#define VFIO_MIGRATION_STOP_COPY (1 << 0) +}; +#define VFIO_DEVICE_FEATURE_MIGRATION 1 + +/* + * Upon VFIO_DEVICE_FEATURE_SET, + * Execute a migration state change on the VFIO device. + * The new state is supplied in device_state. + * + * The kernel migration driver must fully transition the device to the new state + * value before the write(2) operation returns to the user. + * + * The kernel migration driver must not generate asynchronous device state + * transitions outside of manipulation by the user or the VFIO_DEVICE_RESET + * ioctl as described above. + * + * If this function fails then current device_state may be the original + * operating state or some other state along the combination transition path. + * The user can then decide if it should execute a VFIO_DEVICE_RESET, attempt + * to return to the original state, or attempt to return to some other state + * such as RUNNING or STOP. + * + * If the new_state starts a new data transfer session then the FD associated + * with that session is returned in data_fd. The user is responsible to close + * this FD when it is finished. The user must consider the migration data + * segments carried over the FD to be opaque and non-fungible. During RESUMING, + * the data segments must be written in the same order they came out of the + * saving side FD. + * + * Upon VFIO_DEVICE_FEATURE_GET, + * Get the current migration state of the VFIO device, data_fd will be -1. + */ +struct vfio_device_feature_mig_state { + __u32 device_state; /* From enum vfio_device_mig_state */ + __s32 data_fd; +}; +#define VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE 2 + +/* + * The device migration Finite State Machine is described by the enum + * vfio_device_mig_state. Some of the FSM arcs will create a migration data + * transfer session by returning a FD, in this case the migration data will + * flow over the FD using read() and write() as discussed below. + * + * There are 5 states to support VFIO_MIGRATION_STOP_COPY: + * RUNNING - The device is running normally + * STOP - The device does not change the internal or external state + * STOP_COPY - The device internal state can be read out + * RESUMING - The device is stopped and is loading a new internal state + * ERROR - The device has failed and must be reset + * + * The FSM takes actions on the arcs between FSM states. The driver implements + * the following behavior for the FSM arcs: + * + * RUNNING -> STOP + * STOP_COPY -> STOP + * While in STOP the device must stop the operation of the device. The + * device must not generate interrupts, DMA, or advance its internal + * state. When stopped the device and kernel migration driver must accept + * and respond to interaction to support external subsystems in the STOP + * state, for example PCI MSI-X and PCI config pace. Failure by the user to + * restrict device access while in STOP must not result in error conditions + * outside the user context (ex. host system faults). + * + * The STOP_COPY arc will terminate a data transfer session. + * + * RESUMING -> STOP + * Leaving RESUMING terminates a data transfer session and indicates the + * device should complete processing of the data delivered by write(). The + * kernel migration driver should complete the incorporation of data written + * to the data transfer FD into the device internal state and perform + * final validity and consistency checking of the new device state. 
If the + * user provided data is found to be incomplete, inconsistent, or otherwise + * invalid, the migration driver must fail the SET_STATE ioctl and + * optionally go to the ERROR state as described below. + * + * While in STOP the device has the same behavior as other STOP states + * described above. + * + * To abort a RESUMING session the device must be reset. + * + * STOP -> RUNNING + * While in RUNNING the device is fully operational, the device may generate + * interrupts, DMA, respond to MMIO, all vfio device regions are functional, + * and the device may advance its internal state. + * + * STOP -> STOP_COPY + * This arc begin the process of saving the device state and will return a + * new data_fd. + * + * While in the STOP_COPY state the device has the same behavior as STOP + * with the addition that the data transfers session continues to stream the + * migration state. End of stream on the FD indicates the entire device + * state has been transferred. + * + * The user should take steps to restrict access to vfio device regions while + * the device is in STOP_COPY or risk corruption of the device migration data + * stream. + * + * STOP -> RESUMING + * Entering the RESUMING state starts a process of restoring the device + * state and will return a new data_fd. The data stream fed into the data_fd + * should be taken from the data transfer output of the saving group states + * from a compatible device. The migration driver may alter/reset the + * internal device state for this arc if required to prepare the device to + * receive the migration data. + * + * any -> ERROR + * ERROR cannot be specified as a device state, however any transition request + * can be failed with an errno return and may then move the device_state into + * ERROR. In this case the device was unable to execute the requested arc and + * was also unable to restore the device to any valid device_state. + * To recover from ERROR VFIO_DEVICE_RESET must be used to return the + * device_state back to RUNNING. + * + * The remaining possible transitions are interpreted as combinations of the + * above FSM arcs. As there are multiple paths through the FSM arcs the path + * should be selected based on the following rules: + * - Select the shortest path. + * Refer to vfio_mig_get_next_state() for the result of the algorithm. + * + * The automatic transit through the FSM arcs that make up the combination + * transition is invisible to the user. When working with combination arcs the + * user may see any step along the path in the device_state if SET_STATE + * fails. When handling these types of errors users should anticipate future + * revisions of this protocol using new states and those states becoming + * visible in this case. 
+ */ +enum vfio_device_mig_state { + VFIO_DEVICE_STATE_ERROR = 0, + VFIO_DEVICE_STATE_STOP = 1, + VFIO_DEVICE_STATE_RUNNING = 2, + VFIO_DEVICE_STATE_STOP_COPY = 3, + VFIO_DEVICE_STATE_RESUMING = 4, +}; + /* -------- API for Type1 VFIO IOMMU -------- */ /** From patchwork Mon Feb 7 17:22:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 12737753 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F1C1CC433F5 for ; Mon, 7 Feb 2022 17:33:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245674AbiBGRbI (ORCPT ); Mon, 7 Feb 2022 12:31:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346229AbiBGRXt (ORCPT ); Mon, 7 Feb 2022 12:23:49 -0500 Received: from NAM12-DM6-obe.outbound.protection.outlook.com (mail-dm6nam12on2084.outbound.protection.outlook.com [40.107.243.84]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A85E7C0401D6; Mon, 7 Feb 2022 09:23:48 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=F5f6rXMCWmE/PxLPB4lsvctMjNMPXjOmqWQE6PejythkBFCb1Sj/i1TrQ1tLrJqFVDR8OvhIO3xRWVJQmg+XzzHyENBNj0D/X7Cigdh7Q6azM2ws2J4jMghHj+m3qXIpIwxr6RBFs3XeL65G4ax2UdAFsLmclveKxfpEqgJ+YT43Hi3+SO8kc6u4AIQLwjFRWCKxJsIPO9VmMJsH5cAl/KnS0rkmQRV222j1osRvd6HDJaP/FaWNkwMzMaJdDZgJLlw13oJyYdXaZl/u0fFRRmNLeqtGD312+aErM2DyHlgmGJOs7RQPOz8KunkFs8wjADZq7QeOE1gE0TkS0k/HGQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=iasQLIX5PtpEW4tZYcVSJeb4BwNmLfATGoEzGk2drvc=; b=ChfiSV8/GPZupWKWDHQkgDZsURSs6G3ZCsQgv+dFyYpc12A7Q6ugWiabDDsunlTH4eeRm+e+QMkMqGpabbkkRI3tnnhW1hnO5XPu0uDLfuWS6GMzv6wJHEigcuF8WhMqsz/Hud4CtjmqNsxXNUj+Rekn9GpJBvl7VfrmOqXLpOIKKBumuyyjlUtchf9cWQA6G4pktbyPNlelMlypB5QnmAZM3liVwGdioE6d2taSbyKM/HMpwKLeHSVXGVFxPvfODhtwHI+NUur3h4/zINBgO6fzZkv/s5QHfxE/nKO+JV0E9K/LAg44Xws0WqwjqwYAK+Q9wApAWjE29gH5VV8xzQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=temperror (sender ip is 12.22.5.238) smtp.rcpttodomain=intel.com smtp.mailfrom=nvidia.com; dmarc=temperror action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=iasQLIX5PtpEW4tZYcVSJeb4BwNmLfATGoEzGk2drvc=; b=RiBydfMsU8OSGegWcQZ4Bwx0fAbbDTCstlCNT4RZfZbp0Cwrro5XOGOVu2wlAe7BDGyhL4ySPR+oocfg0ip7rs06xCSQ9uZllz0coklHcPiI0MaUmFdSWRXW4X+xGVaHzCar/tgzB9bFtAWiqrwZhb2hIcFg3tnp54xq7iMbd7NTAptQ2tXDJ8hzmC+XL5VnF1Z8UVzH9prWZ6CiT3aQzkbuSooGtkQW9z9haKmdFb4ZCs9pxcp1CRXfmjTtv1DEKURkv21298YiU0hHOlv0xpSL0o6Y1G2m+fdDDcvI4cENgOM2W75Xg6QxojkfswfE3FnRDji7+4kyQI7RcHSCRw== Received: from MWHPR02CA0016.namprd02.prod.outlook.com (2603:10b6:300:4b::26) by DM6PR12MB5550.namprd12.prod.outlook.com (2603:10b6:5:1b6::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4909.8; Mon, 7 Feb 2022 17:23:46 +0000 Received: from 
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 09/15] vfio: Extend the device migration protocol with RUNNING_P2P
Date: Mon, 7 Feb 2022 19:22:10 +0200
Message-ID: <20220207172216.206415-10-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
X-Mailing-List: linux-pci@vger.kernel.org

From: Jason Gunthorpe

The RUNNING_P2P state is designed to support multiple devices in the same VM
that are doing P2P transactions between themselves. When in RUNNING_P2P the
device must be able to accept incoming P2P transactions but should not
generate outgoing transactions.

As an optional extension to the mandatory states it is defined as sitting in
between STOP and RUNNING:
   STOP -> RUNNING_P2P -> RUNNING -> RUNNING_P2P -> STOP

For drivers that are unable to support RUNNING_P2P the core code silently
merges RUNNING_P2P and RUNNING together. Drivers that support this will be
required to implement 4 FSM arcs beyond the basic FSM. 2 of the basic FSM arcs
become combination transitions.

Compared to the v1 clarification, NDMA is redefined into FSM states and is
described in terms of the desired P2P quiescent behavior, noting that halting
all DMA is an acceptable implementation.
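[Editor's note] Since RUNNING_P2P is optional, userspace has to discover it
through VFIO_DEVICE_FEATURE_MIGRATION before using it with SET_STATE. A
minimal discovery sketch in the same hypothetical helper style as above; with
multiple devices, userspace would then typically move every P2P-capable device
to RUNNING_P2P first and only afterwards move them all to STOP.

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns the VFIO_MIGRATION_* flags, or a negative errno. */
static long long example_get_migration_flags(int device_fd)
{
	__u8 buf[sizeof(struct vfio_device_feature) +
		 sizeof(struct vfio_device_feature_migration)]
		__attribute__((aligned(8))) = {};
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_migration *mig = (void *)feature->data;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;

	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature))
		return -errno;	/* -ENOTTY: device has no migration support */
	return mig->flags;
}

/*
 * Usage: only request the RUNNING_P2P arc when the reported flags include
 * VFIO_MIGRATION_P2P; otherwise go straight from RUNNING to STOP.
 */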
Signed-off-by: Jason Gunthorpe Signed-off-by: Yishai Hadas --- drivers/vfio/vfio.c | 79 ++++++++++++++++++++++++++++++--------- include/linux/vfio.h | 1 + include/uapi/linux/vfio.h | 34 ++++++++++++++++- 3 files changed, 95 insertions(+), 19 deletions(-) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index e7ab9f2048cd..8c484593dfe0 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -1577,39 +1577,55 @@ int vfio_mig_get_next_state(struct vfio_device *device, enum vfio_device_mig_state new_fsm, enum vfio_device_mig_state *next_fsm) { - enum { VFIO_DEVICE_NUM_STATES = VFIO_DEVICE_STATE_RESUMING + 1 }; + enum { VFIO_DEVICE_NUM_STATES = VFIO_DEVICE_STATE_RUNNING_P2P + 1 }; /* - * The coding in this table requires the driver to implement 6 + * The coding in this table requires the driver to implement * FSM arcs: * RESUMING -> STOP - * RUNNING -> STOP * STOP -> RESUMING - * STOP -> RUNNING * STOP -> STOP_COPY * STOP_COPY -> STOP * - * The coding will step through multiple states for these combination - * transitions: - * RESUMING -> STOP -> RUNNING + * If P2P is supported then the driver must also implement these FSM + * arcs: + * RUNNING -> RUNNING_P2P + * RUNNING_P2P -> RUNNING + * RUNNING_P2P -> STOP + * STOP -> RUNNING_P2P + * Without P2P the driver must implement: + * RUNNING -> STOP + * STOP -> RUNNING + * + * If all optional features are supported then the coding will step + * through multiple states for these combination transitions: + * RESUMING -> STOP -> RUNNING_P2P + * RESUMING -> STOP -> RUNNING_P2P -> RUNNING * RESUMING -> STOP -> STOP_COPY - * RUNNING -> STOP -> RESUMING - * RUNNING -> STOP -> STOP_COPY + * RUNNING -> RUNNING_P2P -> STOP + * RUNNING -> RUNNING_P2P -> STOP -> RESUMING + * RUNNING -> RUNNING_P2P -> STOP -> STOP_COPY + * RUNNING_P2P -> STOP -> RESUMING + * RUNNING_P2P -> STOP -> STOP_COPY + * STOP -> RUNNING_P2P -> RUNNING * STOP_COPY -> STOP -> RESUMING - * STOP_COPY -> STOP -> RUNNING + * STOP_COPY -> STOP -> RUNNING_P2P + * STOP_COPY -> STOP -> RUNNING_P2P -> RUNNING */ static const u8 vfio_from_fsm_table[VFIO_DEVICE_NUM_STATES][VFIO_DEVICE_NUM_STATES] = { [VFIO_DEVICE_STATE_STOP] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, - [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RESUMING, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, }, [VFIO_DEVICE_STATE_RUNNING] = { - [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, - [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, - [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, }, [VFIO_DEVICE_STATE_STOP_COPY] = { @@ -1617,6 +1633,7 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, }, 
[VFIO_DEVICE_STATE_RESUMING] = { @@ -1624,6 +1641,15 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RESUMING, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, + [VFIO_DEVICE_STATE_RUNNING_P2P] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, }, [VFIO_DEVICE_STATE_ERROR] = { @@ -1631,17 +1657,36 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, }, }; - if (WARN_ON(cur_fsm >= ARRAY_SIZE(vfio_from_fsm_table))) + static const unsigned int state_flags_table[VFIO_DEVICE_NUM_STATES] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_MIGRATION_STOP_COPY, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_MIGRATION_STOP_COPY, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_MIGRATION_STOP_COPY, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_MIGRATION_STOP_COPY, + [VFIO_DEVICE_STATE_RUNNING_P2P] = + VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P, + [VFIO_DEVICE_STATE_ERROR] = ~0U, + }; + + if (WARN_ON(cur_fsm >= ARRAY_SIZE(vfio_from_fsm_table) || + (state_flags_table[cur_fsm] & device->migration_flags) != + state_flags_table[cur_fsm])) return -EINVAL; - if (new_fsm >= ARRAY_SIZE(vfio_from_fsm_table)) + if (new_fsm >= ARRAY_SIZE(vfio_from_fsm_table) || + (state_flags_table[new_fsm] & device->migration_flags) != + state_flags_table[new_fsm]) return -EINVAL; *next_fsm = vfio_from_fsm_table[cur_fsm][new_fsm]; + while ((state_flags_table[*next_fsm] & device->migration_flags) != + state_flags_table[*next_fsm]) + *next_fsm = vfio_from_fsm_table[*next_fsm][new_fsm]; + return (*next_fsm != VFIO_DEVICE_STATE_ERROR) ? 0 : -EINVAL; } EXPORT_SYMBOL_GPL(vfio_mig_get_next_state); @@ -1731,7 +1776,7 @@ static int vfio_ioctl_device_feature_migration(struct vfio_device *device, size_t argsz) { struct vfio_device_feature_migration mig = { - .flags = VFIO_MIGRATION_STOP_COPY, + .flags = device->migration_flags, }; int ret; diff --git a/include/linux/vfio.h b/include/linux/vfio.h index 3f4a1a7c2277..a173718d2a1b 100644 --- a/include/linux/vfio.h +++ b/include/linux/vfio.h @@ -33,6 +33,7 @@ struct vfio_device { struct vfio_group *group; struct vfio_device_set *dev_set; struct list_head dev_set_list; + unsigned int migration_flags; /* Members below here are private, not for driver use */ refcount_t refcount; diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index 89012bc01663..773895988cf1 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -1009,10 +1009,16 @@ struct vfio_device_feature { * * VFIO_MIGRATION_STOP_COPY means that RUNNING, STOP, STOP_COPY and * RESUMING are supported. + * + * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P means that RUNNING_P2P + * is supported in addition to the STOP_COPY states. + * + * Other combinations of flags have behavior to be defined in the future. 
*/ struct vfio_device_feature_migration { __aligned_u64 flags; #define VFIO_MIGRATION_STOP_COPY (1 << 0) +#define VFIO_MIGRATION_P2P (1 << 1) }; #define VFIO_DEVICE_FEATURE_MIGRATION 1 @@ -1063,10 +1069,13 @@ struct vfio_device_feature_mig_state { * RESUMING - The device is stopped and is loading a new internal state * ERROR - The device has failed and must be reset * + * And 1 optional state to support VFIO_MIGRATION_P2P: + * RUNNING_P2P - RUNNING, except the device cannot do peer to peer DMA + * * The FSM takes actions on the arcs between FSM states. The driver implements * the following behavior for the FSM arcs: * - * RUNNING -> STOP + * RUNNING_P2P -> STOP * STOP_COPY -> STOP * While in STOP the device must stop the operation of the device. The * device must not generate interrupts, DMA, or advance its internal @@ -1093,11 +1102,16 @@ struct vfio_device_feature_mig_state { * * To abort a RESUMING session the device must be reset. * - * STOP -> RUNNING + * RUNNING_P2P -> RUNNING * While in RUNNING the device is fully operational, the device may generate * interrupts, DMA, respond to MMIO, all vfio device regions are functional, * and the device may advance its internal state. * + * RUNNING -> RUNNING_P2P + * STOP -> RUNNING_P2P + * While in RUNNING_P2P the device is partially running in the P2P quiescent + * state defined below. + * * STOP -> STOP_COPY * This arc begins the process of saving the device state and will return a * new data_fd. @@ -1127,6 +1141,16 @@ struct vfio_device_feature_mig_state { * To recover from ERROR VFIO_DEVICE_RESET must be used to return the * device_state back to RUNNING. * + * The optional peer to peer (P2P) quiescent state is intended as a quiescent + * state for the device for the purposes of managing multiple devices within a + * user context where peer-to-peer DMA between devices may be active. The + * RUNNING_P2P states must prevent the device from initiating + * any new P2P DMA transactions. If the device can identify P2P transactions + * then it can stop only P2P DMA, otherwise it must stop all DMA. The migration + * driver must complete any such outstanding operations prior to completing the + * FSM arc into a P2P state. For the purpose of specification, if P2P is not + * supported the states behave as though the device was fully running. + * * The remaining possible transitions are interpreted as combinations of the * above FSM arcs. As there are multiple paths through the FSM arcs the path * should be selected based on the following rules: @@ -1139,6 +1163,11 @@ struct vfio_device_feature_mig_state { * fails. When handling these types of errors users should anticipate future * revisions of this protocol using new states and those states becoming * visible in this case. + * + * The optional states cannot be used with SET_STATE if the device does not + * support them. The user can discover if these states are supported by using + * VFIO_DEVICE_FEATURE_MIGRATION. By using combination transitions the user can + * avoid knowing about these optional states if the kernel driver supports them.
*/ enum vfio_device_mig_state { VFIO_DEVICE_STATE_ERROR = 0, @@ -1146,6 +1175,7 @@ enum vfio_device_mig_state { VFIO_DEVICE_STATE_RUNNING = 2, VFIO_DEVICE_STATE_STOP_COPY = 3, VFIO_DEVICE_STATE_RESUMING = 4, + VFIO_DEVICE_STATE_RUNNING_P2P = 5, }; /* -------- API for Type1 VFIO IOMMU -------- */ From patchwork Mon Feb 7 17:22:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 12737757
From: Yishai Hadas To: , , , CC: , , , , , , , , , , , Subject: [PATCH V7 mlx5-next 10/15] vfio: Remove migration protocol v1 documentation Date: Mon, 7 Feb 2022 19:22:11 +0200 Message-ID: <20220207172216.206415-11-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com> References: <20220207172216.206415-1-yishaih@nvidia.com> MIME-Version: 1.0
Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org From: Jason Gunthorpe v1 was never implemented and is replaced by v2. The old uAPI documentation is removed from the header file. The old uAPI definitions are still kept in the header file until v2 reaches Linus's tree. Signed-off-by: Jason Gunthorpe Signed-off-by: Yishai Hadas --- include/uapi/linux/vfio.h | 200 +------------------------------------- 1 file changed, 2 insertions(+), 198 deletions(-) diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index 773895988cf1..227f55d57e06 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -323,7 +323,7 @@ struct vfio_region_info_cap_type { #define VFIO_REGION_TYPE_PCI_VENDOR_MASK (0xffff) #define VFIO_REGION_TYPE_GFX (1) #define VFIO_REGION_TYPE_CCW (2) -#define VFIO_REGION_TYPE_MIGRATION (3) +#define VFIO_REGION_TYPE_MIGRATION_DEPRECATED (3) /* sub-types for VFIO_REGION_TYPE_PCI_* */ @@ -405,203 +405,7 @@ struct vfio_region_gfx_edid { #define VFIO_REGION_SUBTYPE_CCW_CRW (3) /* sub-types for VFIO_REGION_TYPE_MIGRATION */ -#define VFIO_REGION_SUBTYPE_MIGRATION (1) - -/* - * The structure vfio_device_migration_info is placed at the 0th offset of - * the VFIO_REGION_SUBTYPE_MIGRATION region to get and set VFIO device related - * migration information. Field accesses from this structure are only supported - * at their native width and alignment. Otherwise, the result is undefined and - * vendor drivers should return an error. - * - * device_state: (read/write) - * - The user application writes to this field to inform the vendor driver - * about the device state to be transitioned to. - * - The vendor driver should take the necessary actions to change the - * device state. After successful transition to a given state, the - * vendor driver should return success on write(device_state, state) - * system call. If the device state transition fails, the vendor driver - * should return an appropriate -errno for the fault condition. - * - On the user application side, if the device state transition fails, - * that is, if write(device_state, state) returns an error, read - * device_state again to determine the current state of the device from - * the vendor driver.
- * - The vendor driver should return previous state of the device unless - * the vendor driver has encountered an internal error, in which case - * the vendor driver may report the device_state VFIO_DEVICE_STATE_ERROR. - * - The user application must use the device reset ioctl to recover the - * device from VFIO_DEVICE_STATE_ERROR state. If the device is - * indicated to be in a valid device state by reading device_state, the - * user application may attempt to transition the device to any valid - * state reachable from the current state or terminate itself. - * - * device_state consists of 3 bits: - * - If bit 0 is set, it indicates the _RUNNING state. If bit 0 is clear, - * it indicates the _STOP state. When the device state is changed to - * _STOP, driver should stop the device before write() returns. - * - If bit 1 is set, it indicates the _SAVING state, which means that the - * driver should start gathering device state information that will be - * provided to the VFIO user application to save the device's state. - * - If bit 2 is set, it indicates the _RESUMING state, which means that - * the driver should prepare to resume the device. Data provided through - * the migration region should be used to resume the device. - * Bits 3 - 31 are reserved for future use. To preserve them, the user - * application should perform a read-modify-write operation on this - * field when modifying the specified bits. - * - * +------- _RESUMING - * |+------ _SAVING - * ||+----- _RUNNING - * ||| - * 000b => Device Stopped, not saving or resuming - * 001b => Device running, which is the default state - * 010b => Stop the device & save the device state, stop-and-copy state - * 011b => Device running and save the device state, pre-copy state - * 100b => Device stopped and the device state is resuming - * 101b => Invalid state - * 110b => Error state - * 111b => Invalid state - * - * State transitions: - * - * _RESUMING _RUNNING Pre-copy Stop-and-copy _STOP - * (100b) (001b) (011b) (010b) (000b) - * 0. Running or default state - * | - * - * 1. Normal Shutdown (optional) - * |------------------------------------->| - * - * 2. Save the state or suspend - * |------------------------->|---------->| - * - * 3. Save the state during live migration - * |----------->|------------>|---------->| - * - * 4. Resuming - * |<---------| - * - * 5. Resumed - * |--------->| - * - * 0. Default state of VFIO device is _RUNNING when the user application starts. - * 1. During normal shutdown of the user application, the user application may - * optionally change the VFIO device state from _RUNNING to _STOP. This - * transition is optional. The vendor driver must support this transition but - * must not require it. - * 2. When the user application saves state or suspends the application, the - * device state transitions from _RUNNING to stop-and-copy and then to _STOP. - * On state transition from _RUNNING to stop-and-copy, driver must stop the - * device, save the device state and send it to the application through the - * migration region. The sequence to be followed for such transition is given - * below. - * 3. In live migration of user application, the state transitions from _RUNNING - * to pre-copy, to stop-and-copy, and to _STOP. - * On state transition from _RUNNING to pre-copy, the driver should start - * gathering the device state while the application is still running and send - * the device state data to application through the migration region. 
- * On state transition from pre-copy to stop-and-copy, the driver must stop - * the device, save the device state and send it to the user application - * through the migration region. - * Vendor drivers must support the pre-copy state even for implementations - * where no data is provided to the user before the stop-and-copy state. The - * user must not be required to consume all migration data before the device - * transitions to a new state, including the stop-and-copy state. - * The sequence to be followed for above two transitions is given below. - * 4. To start the resuming phase, the device state should be transitioned from - * the _RUNNING to the _RESUMING state. - * In the _RESUMING state, the driver should use the device state data - * received through the migration region to resume the device. - * 5. After providing saved device data to the driver, the application should - * change the state from _RESUMING to _RUNNING. - * - * reserved: - * Reads on this field return zero and writes are ignored. - * - * pending_bytes: (read only) - * The number of pending bytes still to be migrated from the vendor driver. - * - * data_offset: (read only) - * The user application should read data_offset field from the migration - * region. The user application should read the device data from this - * offset within the migration region during the _SAVING state or write - * the device data during the _RESUMING state. See below for details of - * sequence to be followed. - * - * data_size: (read/write) - * The user application should read data_size to get the size in bytes of - * the data copied in the migration region during the _SAVING state and - * write the size in bytes of the data copied in the migration region - * during the _RESUMING state. - * - * The format of the migration region is as follows: - * ------------------------------------------------------------------ - * |vfio_device_migration_info| data section | - * | | /////////////////////////////// | - * ------------------------------------------------------------------ - * ^ ^ - * offset 0-trapped part data_offset - * - * The structure vfio_device_migration_info is always followed by the data - * section in the region, so data_offset will always be nonzero. The offset - * from where the data is copied is decided by the kernel driver. The data - * section can be trapped, mmapped, or partitioned, depending on how the kernel - * driver defines the data section. The data section partition can be defined - * as mapped by the sparse mmap capability. If mmapped, data_offset must be - * page aligned, whereas initial section which contains the - * vfio_device_migration_info structure, might not end at the offset, which is - * page aligned. The user is not required to access through mmap regardless - * of the capabilities of the region mmap. - * The vendor driver should determine whether and how to partition the data - * section. The vendor driver should return data_offset accordingly. - * - * The sequence to be followed while in pre-copy state and stop-and-copy state - * is as follows: - * a. Read pending_bytes, indicating the start of a new iteration to get device - * data. Repeated read on pending_bytes at this stage should have no side - * effects. - * If pending_bytes == 0, the user application should not iterate to get data - * for that device. - * If pending_bytes > 0, perform the following steps. - * b. Read data_offset, indicating that the vendor driver should make data - * available through the data section. 
The vendor driver should return this - * read operation only after data is available from (region + data_offset) - * to (region + data_offset + data_size). - * c. Read data_size, which is the amount of data in bytes available through - * the migration region. - * Read on data_offset and data_size should return the offset and size of - * the current buffer if the user application reads data_offset and - * data_size more than once here. - * d. Read data_size bytes of data from (region + data_offset) from the - * migration region. - * e. Process the data. - * f. Read pending_bytes, which indicates that the data from the previous - * iteration has been read. If pending_bytes > 0, go to step b. - * - * The user application can transition from the _SAVING|_RUNNING - * (pre-copy state) to the _SAVING (stop-and-copy) state regardless of the - * number of pending bytes. The user application should iterate in _SAVING - * (stop-and-copy) until pending_bytes is 0. - * - * The sequence to be followed while _RESUMING device state is as follows: - * While data for this device is available, repeat the following steps: - * a. Read data_offset from where the user application should write data. - * b. Write migration data starting at the migration region + data_offset for - * the length determined by data_size from the migration source. - * c. Write data_size, which indicates to the vendor driver that data is - * written in the migration region. Vendor driver must return this write - * operations on consuming data. Vendor driver should apply the - * user-provided migration region data to the device resume state. - * - * If an error occurs during the above sequences, the vendor driver can return - * an error code for next read() or write() operation, which will terminate the - * loop. The user application should then take the next necessary action, for - * example, failing migration or terminating the user application. - * - * For the user application, data is opaque. The user application should write - * data in the same order as the data is received and the data should be of - * same transaction size at the source. 
- */ +#define VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED (1) struct vfio_device_migration_info { __u32 device_state; /* VFIO device state */ From patchwork Mon Feb 7 17:22:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 12737754
From: Yishai Hadas To: , , , CC: , , , , , , , , , , , Subject: [PATCH V7 mlx5-next 11/15] vfio/mlx5: Expose migration commands over mlx5 device Date: Mon, 7 Feb 2022 19:22:12 +0200 Message-ID: <20220207172216.206415-12-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com> References: <20220207172216.206415-1-yishaih@nvidia.com> MIME-Version: 1.0
3391b175-fd33-4195-75d6-08d9ea5e9dda X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[12.22.5.238];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT035.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB3949 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Expose migration commands over the device, it includes: suspend, resume, get vhca id, query/save/load state. As part of this adds the APIs and data structure that are needed to manage the migration data. Signed-off-by: Yishai Hadas Signed-off-by: Leon Romanovsky Signed-off-by: Jason Gunthorpe --- drivers/vfio/pci/mlx5/cmd.c | 259 ++++++++++++++++++++++++++++++++++++ drivers/vfio/pci/mlx5/cmd.h | 35 +++++ 2 files changed, 294 insertions(+) create mode 100644 drivers/vfio/pci/mlx5/cmd.c create mode 100644 drivers/vfio/pci/mlx5/cmd.h diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c new file mode 100644 index 000000000000..5c9f9218cc1d --- /dev/null +++ b/drivers/vfio/pci/mlx5/cmd.c @@ -0,0 +1,259 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved + */ + +#include "cmd.h" + +int mlx5vf_cmd_suspend_vhca(struct pci_dev *pdev, u16 vhca_id, u16 op_mod) +{ + struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(pdev); + u32 out[MLX5_ST_SZ_DW(suspend_vhca_out)] = {}; + u32 in[MLX5_ST_SZ_DW(suspend_vhca_in)] = {}; + int ret; + + if (!mdev) + return -ENOTCONN; + + MLX5_SET(suspend_vhca_in, in, opcode, MLX5_CMD_OP_SUSPEND_VHCA); + MLX5_SET(suspend_vhca_in, in, vhca_id, vhca_id); + MLX5_SET(suspend_vhca_in, in, op_mod, op_mod); + + ret = mlx5_cmd_exec_inout(mdev, suspend_vhca, in, out); + mlx5_vf_put_core_dev(mdev); + return ret; +} + +int mlx5vf_cmd_resume_vhca(struct pci_dev *pdev, u16 vhca_id, u16 op_mod) +{ + struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(pdev); + u32 out[MLX5_ST_SZ_DW(resume_vhca_out)] = {}; + u32 in[MLX5_ST_SZ_DW(resume_vhca_in)] = {}; + int ret; + + if (!mdev) + return -ENOTCONN; + + MLX5_SET(resume_vhca_in, in, opcode, MLX5_CMD_OP_RESUME_VHCA); + MLX5_SET(resume_vhca_in, in, vhca_id, vhca_id); + MLX5_SET(resume_vhca_in, in, op_mod, op_mod); + + ret = mlx5_cmd_exec_inout(mdev, resume_vhca, in, out); + mlx5_vf_put_core_dev(mdev); + return ret; +} + +int mlx5vf_cmd_query_vhca_migration_state(struct pci_dev *pdev, u16 vhca_id, + size_t *state_size) +{ + struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(pdev); + u32 out[MLX5_ST_SZ_DW(query_vhca_migration_state_out)] = {}; + u32 in[MLX5_ST_SZ_DW(query_vhca_migration_state_in)] = {}; + int ret; + + if (!mdev) + return -ENOTCONN; + + MLX5_SET(query_vhca_migration_state_in, in, opcode, + MLX5_CMD_OP_QUERY_VHCA_MIGRATION_STATE); + MLX5_SET(query_vhca_migration_state_in, in, vhca_id, vhca_id); + MLX5_SET(query_vhca_migration_state_in, in, op_mod, 0); + + ret = mlx5_cmd_exec_inout(mdev, query_vhca_migration_state, in, out); + if (ret) + goto end; + + *state_size = MLX5_GET(query_vhca_migration_state_out, out, + required_umem_size); + +end: + mlx5_vf_put_core_dev(mdev); + return ret; +} + +int mlx5vf_cmd_get_vhca_id(struct pci_dev *pdev, u16 function_id, u16 *vhca_id) +{ + struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(pdev); + u32 
in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {}; + int out_size; + void *out; + int ret; + + if (!mdev) + return -ENOTCONN; + + out_size = MLX5_ST_SZ_BYTES(query_hca_cap_out); + out = kzalloc(out_size, GFP_KERNEL); + if (!out) { + ret = -ENOMEM; + goto end; + } + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, other_function, 1); + MLX5_SET(query_hca_cap_in, in, function_id, function_id); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE << 1 | + HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_cmd_exec_inout(mdev, query_hca_cap, in, out); + if (ret) + goto err_exec; + + *vhca_id = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.vhca_id); + +err_exec: + kfree(out); +end: + mlx5_vf_put_core_dev(mdev); + return ret; +} + +static int _create_state_mkey(struct mlx5_core_dev *mdev, u32 pdn, + struct mlx5_vf_migration_file *migf, u32 *mkey) +{ + size_t npages = DIV_ROUND_UP(migf->total_length, PAGE_SIZE); + struct sg_dma_page_iter dma_iter; + int err = 0, inlen; + __be64 *mtt; + void *mkc; + u32 *in; + + inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + + sizeof(*mtt) * round_up(npages, 2); + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) + return -ENOMEM; + + MLX5_SET(create_mkey_in, in, translations_octword_actual_size, + DIV_ROUND_UP(npages, 2)); + mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt); + + for_each_sgtable_dma_page(&migf->table.sgt, &dma_iter, 0) + *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter)); + + mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); + MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT); + MLX5_SET(mkc, mkc, lr, 1); + MLX5_SET(mkc, mkc, lw, 1); + MLX5_SET(mkc, mkc, rr, 1); + MLX5_SET(mkc, mkc, rw, 1); + MLX5_SET(mkc, mkc, pd, pdn); + MLX5_SET(mkc, mkc, bsf_octword_size, 0); + MLX5_SET(mkc, mkc, qpn, 0xffffff); + MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT); + MLX5_SET(mkc, mkc, translations_octword_size, DIV_ROUND_UP(npages, 2)); + MLX5_SET64(mkc, mkc, len, migf->total_length); + err = mlx5_core_create_mkey(mdev, mkey, in, inlen); + kvfree(in); + return err; +} + +int mlx5vf_cmd_save_vhca_state(struct pci_dev *pdev, u16 vhca_id, + struct mlx5_vf_migration_file *migf) +{ + struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(pdev); + u32 out[MLX5_ST_SZ_DW(save_vhca_state_out)] = {}; + u32 in[MLX5_ST_SZ_DW(save_vhca_state_in)] = {}; + u32 pdn, mkey; + int err; + + if (!mdev) + return -ENOTCONN; + + err = mlx5_core_alloc_pd(mdev, &pdn); + if (err) + goto end; + + err = dma_map_sgtable(mdev->device, &migf->table.sgt, DMA_FROM_DEVICE, + 0); + if (err) + goto err_dma_map; + + err = _create_state_mkey(mdev, pdn, migf, &mkey); + if (err) + goto err_create_mkey; + + MLX5_SET(save_vhca_state_in, in, opcode, + MLX5_CMD_OP_SAVE_VHCA_STATE); + MLX5_SET(save_vhca_state_in, in, op_mod, 0); + MLX5_SET(save_vhca_state_in, in, vhca_id, vhca_id); + MLX5_SET(save_vhca_state_in, in, mkey, mkey); + MLX5_SET(save_vhca_state_in, in, size, migf->total_length); + + err = mlx5_cmd_exec_inout(mdev, save_vhca_state, in, out); + if (err) + goto err_exec; + + migf->total_length = + MLX5_GET(save_vhca_state_out, out, actual_image_size); + + mlx5_core_destroy_mkey(mdev, mkey); + mlx5_core_dealloc_pd(mdev, pdn); + dma_unmap_sgtable(mdev->device, &migf->table.sgt, DMA_FROM_DEVICE, 0); + mlx5_vf_put_core_dev(mdev); + + return 0; + +err_exec: + mlx5_core_destroy_mkey(mdev, mkey); +err_create_mkey: + dma_unmap_sgtable(mdev->device, &migf->table.sgt, DMA_FROM_DEVICE, 0); +err_dma_map: + 
mlx5_core_dealloc_pd(mdev, pdn); +end: + mlx5_vf_put_core_dev(mdev); + return err; +} + +int mlx5vf_cmd_load_vhca_state(struct pci_dev *pdev, u16 vhca_id, + struct mlx5_vf_migration_file *migf) +{ + struct mlx5_core_dev *mdev = mlx5_vf_get_core_dev(pdev); + u32 out[MLX5_ST_SZ_DW(save_vhca_state_out)] = {}; + u32 in[MLX5_ST_SZ_DW(save_vhca_state_in)] = {}; + u32 pdn, mkey; + int err; + + if (!mdev) + return -ENOTCONN; + + mutex_lock(&migf->lock); + if (!migf->total_length) { + err = -EINVAL; + goto end; + } + + err = mlx5_core_alloc_pd(mdev, &pdn); + if (err) + goto end; + + err = dma_map_sgtable(mdev->device, &migf->table.sgt, DMA_TO_DEVICE, 0); + if (err) + goto err_reg; + + err = _create_state_mkey(mdev, pdn, migf, &mkey); + if (err) + goto err_mkey; + + MLX5_SET(load_vhca_state_in, in, opcode, + MLX5_CMD_OP_LOAD_VHCA_STATE); + MLX5_SET(load_vhca_state_in, in, op_mod, 0); + MLX5_SET(load_vhca_state_in, in, vhca_id, vhca_id); + MLX5_SET(load_vhca_state_in, in, mkey, mkey); + MLX5_SET(load_vhca_state_in, in, size, migf->total_length); + + err = mlx5_cmd_exec_inout(mdev, load_vhca_state, in, out); + + mlx5_core_destroy_mkey(mdev, mkey); +err_mkey: + dma_unmap_sgtable(mdev->device, &migf->table.sgt, DMA_TO_DEVICE, 0); +err_reg: + mlx5_core_dealloc_pd(mdev, pdn); +end: + mlx5_vf_put_core_dev(mdev); + mutex_unlock(&migf->lock); + return err; +} diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h new file mode 100644 index 000000000000..69a1481ed953 --- /dev/null +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + */ + +#ifndef MLX5_VFIO_CMD_H +#define MLX5_VFIO_CMD_H + +#include +#include + +struct mlx5_vf_migration_file { + struct file *filp; + struct mutex lock; + + struct sg_append_table table; + size_t total_length; + size_t allocated_length; + + /* Optimize mlx5vf_get_migration_page() for sequential access */ + struct scatterlist *last_offset_sg; + unsigned int sg_last_entry; + unsigned long last_offset; +}; + +int mlx5vf_cmd_suspend_vhca(struct pci_dev *pdev, u16 vhca_id, u16 op_mod); +int mlx5vf_cmd_resume_vhca(struct pci_dev *pdev, u16 vhca_id, u16 op_mod); +int mlx5vf_cmd_query_vhca_migration_state(struct pci_dev *pdev, u16 vhca_id, + size_t *state_size); +int mlx5vf_cmd_get_vhca_id(struct pci_dev *pdev, u16 function_id, u16 *vhca_id); +int mlx5vf_cmd_save_vhca_state(struct pci_dev *pdev, u16 vhca_id, + struct mlx5_vf_migration_file *migf); +int mlx5vf_cmd_load_vhca_state(struct pci_dev *pdev, u16 vhca_id, + struct mlx5_vf_migration_file *migf); +#endif /* MLX5_VFIO_CMD_H */ From patchwork Mon Feb 7 17:22:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yishai Hadas X-Patchwork-Id: 12737750 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21507C433F5 for ; Mon, 7 Feb 2022 17:31:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233919AbiBGRbG (ORCPT ); Mon, 7 Feb 2022 12:31:06 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347970AbiBGRYE (ORCPT ); Mon, 7 Feb 2022 12:24:04 -0500 Received: from 
From: Yishai Hadas To: , , , CC: , , , , , , , , , , , Subject: [PATCH V7 mlx5-next 12/15] vfio/mlx5: Implement vfio_pci driver for mlx5 devices Date: Mon, 7 Feb 2022 19:22:13 +0200 Message-ID: <20220207172216.206415-13-yishaih@nvidia.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com> References: <20220207172216.206415-1-yishaih@nvidia.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org This patch adds support for the vfio_pci driver for mlx5 devices. It uses vfio_pci_core to register to the VFIO subsystem and then implements the mlx5-specific logic in the migration area. The migration implementation follows the definition from uapi/vfio.h and uses the mlx5 VF->PF command channel to achieve it. This patch implements the suspend/resume flows.
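As a rough illustration of how the save flow implemented by this driver is driven from userspace, the sketch below requests STOP_COPY through the v2 migration uAPI and drains the returned data_fd; the constants VFIO_DEVICE_FEATURE_SET and VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE are taken from the merged uAPI header rather than from this excerpt, so this is a usage sketch, not part of the patch:

/*
 * Sketch only: move a device to STOP_COPY and stream out the saved state.
 * The single STOP_COPY request is expanded by the VFIO core into the
 * combination arcs (RUNNING -> RUNNING_P2P -> STOP -> STOP_COPY) for this
 * driver; the returned data_fd is served by mlx5vf_save_read() below.
 */
#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int set_mig_state(int device_fd, enum vfio_device_mig_state state,
			 int *data_fd)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_mig_state)) /
		  sizeof(__u64)] = {};
	struct vfio_device_feature *hdr = (void *)buf;
	struct vfio_device_feature_mig_state *mig = (void *)hdr->data;

	hdr->argsz = sizeof(buf);
	hdr->flags = VFIO_DEVICE_FEATURE_SET |
		     VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
	mig->device_state = state;
	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, hdr) < 0)
		return -1;
	if (data_fd)
		*data_fd = mig->data_fd; /* set for arcs that return a stream */
	return 0;
}

static ssize_t save_device(int device_fd, int out_fd)
{
	char chunk[4096];
	ssize_t n, total = 0;
	int data_fd = -1;

	if (set_mig_state(device_fd, VFIO_DEVICE_STATE_STOP_COPY, &data_fd) ||
	    data_fd < 0)
		return -1;

	while ((n = read(data_fd, chunk, sizeof(chunk))) > 0) {
		if (write(out_fd, chunk, n) != n)
			break;
		total += n;
	}
	close(data_fd);
	return total;
}

The resume direction mirrors this: userspace would request RESUMING, write the previously saved stream into the returned data_fd, and then step back through STOP towards RUNNING, exercising the load flow added in the previous patch.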
Signed-off-by: Yishai Hadas Signed-off-by: Leon Romanovsky Signed-off-by: Jason Gunthorpe --- MAINTAINERS | 6 + drivers/vfio/pci/Kconfig | 3 + drivers/vfio/pci/Makefile | 2 + drivers/vfio/pci/mlx5/Kconfig | 10 + drivers/vfio/pci/mlx5/Makefile | 4 + drivers/vfio/pci/mlx5/cmd.h | 1 + drivers/vfio/pci/mlx5/main.c | 623 +++++++++++++++++++++++++++++++++ 7 files changed, 649 insertions(+) create mode 100644 drivers/vfio/pci/mlx5/Kconfig create mode 100644 drivers/vfio/pci/mlx5/Makefile create mode 100644 drivers/vfio/pci/mlx5/main.c diff --git a/MAINTAINERS b/MAINTAINERS index ea3e6c914384..5c5216f5e43d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -20260,6 +20260,12 @@ L: kvm@vger.kernel.org S: Maintained F: drivers/vfio/platform/ +VFIO MLX5 PCI DRIVER +M: Yishai Hadas +L: kvm@vger.kernel.org +S: Maintained +F: drivers/vfio/pci/mlx5/ + VGA_SWITCHEROO R: Lukas Wunner S: Maintained diff --git a/drivers/vfio/pci/Kconfig b/drivers/vfio/pci/Kconfig index 860424ccda1b..187b9c259944 100644 --- a/drivers/vfio/pci/Kconfig +++ b/drivers/vfio/pci/Kconfig @@ -43,4 +43,7 @@ config VFIO_PCI_IGD To enable Intel IGD assignment through vfio-pci, say Y. endif + +source "drivers/vfio/pci/mlx5/Kconfig" + endif diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile index 349d68d242b4..ed9d6f2e0555 100644 --- a/drivers/vfio/pci/Makefile +++ b/drivers/vfio/pci/Makefile @@ -7,3 +7,5 @@ obj-$(CONFIG_VFIO_PCI_CORE) += vfio-pci-core.o vfio-pci-y := vfio_pci.o vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o obj-$(CONFIG_VFIO_PCI) += vfio-pci.o + +obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5/ diff --git a/drivers/vfio/pci/mlx5/Kconfig b/drivers/vfio/pci/mlx5/Kconfig new file mode 100644 index 000000000000..29ba9c504a75 --- /dev/null +++ b/drivers/vfio/pci/mlx5/Kconfig @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: GPL-2.0-only +config MLX5_VFIO_PCI + tristate "VFIO support for MLX5 PCI devices" + depends on MLX5_CORE + depends on VFIO_PCI_CORE + help + This provides migration support for MLX5 devices using the VFIO + framework. + + If you don't know what to do here, say N. diff --git a/drivers/vfio/pci/mlx5/Makefile b/drivers/vfio/pci/mlx5/Makefile new file mode 100644 index 000000000000..689627da7ff5 --- /dev/null +++ b/drivers/vfio/pci/mlx5/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_MLX5_VFIO_PCI) += mlx5-vfio-pci.o +mlx5-vfio-pci-y := main.o cmd.o + diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h index 69a1481ed953..1392a11a9cc0 100644 --- a/drivers/vfio/pci/mlx5/cmd.h +++ b/drivers/vfio/pci/mlx5/cmd.h @@ -12,6 +12,7 @@ struct mlx5_vf_migration_file { struct file *filp; struct mutex lock; + bool disabled; struct sg_append_table table; size_t total_length; diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c new file mode 100644 index 000000000000..acd205bcff70 --- /dev/null +++ b/drivers/vfio/pci/mlx5/main.c @@ -0,0 +1,623 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES. 
All rights reserved + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "cmd.h" + +/* Arbitrary to prevent userspace from consuming endless memory */ +#define MAX_MIGRATION_SIZE (512*1024*1024) + +struct mlx5vf_pci_core_device { + struct vfio_pci_core_device core_device; + u8 migrate_cap:1; + /* protect migration state */ + struct mutex state_mutex; + enum vfio_device_mig_state mig_state; + u16 vhca_id; + struct mlx5_vf_migration_file *resuming_migf; + struct mlx5_vf_migration_file *saving_migf; +}; + +static struct page * +mlx5vf_get_migration_page(struct mlx5_vf_migration_file *migf, + unsigned long offset) +{ + unsigned long cur_offset = 0; + struct scatterlist *sg; + unsigned int i; + + /* All accesses are sequential */ + if (offset < migf->last_offset || !migf->last_offset_sg) { + migf->last_offset = 0; + migf->last_offset_sg = migf->table.sgt.sgl; + migf->sg_last_entry = 0; + } + + cur_offset = migf->last_offset; + + for_each_sg(migf->last_offset_sg, sg, + migf->table.sgt.orig_nents - migf->sg_last_entry, i) { + if (offset < sg->length + cur_offset) { + migf->last_offset_sg = sg; + migf->sg_last_entry += i; + migf->last_offset = cur_offset; + return nth_page(sg_page(sg), + (offset - cur_offset) / PAGE_SIZE); + } + cur_offset += sg->length; + } + return NULL; +} + +static int mlx5vf_add_migration_pages(struct mlx5_vf_migration_file *migf, + unsigned int npages) +{ + unsigned int to_alloc = npages; + struct page **page_list; + unsigned long filled; + unsigned int to_fill; + int ret; + + to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list)); + page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL); + if (!page_list) + return -ENOMEM; + + do { + filled = alloc_pages_bulk_array(GFP_KERNEL, to_fill, page_list); + if (!filled) { + ret = -ENOMEM; + goto err; + } + to_alloc -= filled; + ret = sg_alloc_append_table_from_pages( + &migf->table, page_list, filled, 0, + filled << PAGE_SHIFT, UINT_MAX, SG_MAX_SINGLE_ALLOC, + GFP_KERNEL); + + if (ret) + goto err; + migf->allocated_length += filled * PAGE_SIZE; + /* clean input for another bulk allocation */ + memset(page_list, 0, filled * sizeof(*page_list)); + to_fill = min_t(unsigned int, to_alloc, + PAGE_SIZE / sizeof(*page_list)); + } while (to_alloc > 0); + + kvfree(page_list); + return 0; + +err: + kvfree(page_list); + return ret; +} + +static void mlx5vf_disable_fd(struct mlx5_vf_migration_file *migf) +{ + struct sg_page_iter sg_iter; + + mutex_lock(&migf->lock); + /* Undo alloc_pages_bulk_array() */ + for_each_sgtable_page(&migf->table.sgt, &sg_iter, 0) + __free_page(sg_page_iter_page(&sg_iter)); + sg_free_append_table(&migf->table); + migf->disabled = true; + migf->total_length = 0; + migf->allocated_length = 0; + migf->filp->f_pos = 0; + mutex_unlock(&migf->lock); +} + +static int mlx5vf_release_file(struct inode *inode, struct file *filp) +{ + struct mlx5_vf_migration_file *migf = filp->private_data; + + mlx5vf_disable_fd(migf); + mutex_destroy(&migf->lock); + kfree(migf); + return 0; +} + +static ssize_t mlx5vf_save_read(struct file *filp, char __user *buf, size_t len, + loff_t *pos) +{ + struct mlx5_vf_migration_file *migf = filp->private_data; + ssize_t done = 0; + + if (pos) + return -ESPIPE; + pos = &filp->f_pos; + + mutex_lock(&migf->lock); + if (*pos > migf->total_length) { + done = -EINVAL; + goto out_unlock; + } + if (migf->disabled) { + done = -ENODEV; + goto out_unlock; + } + + 
len = min_t(size_t, migf->total_length - *pos, len); + while (len) { + size_t page_offset; + struct page *page; + size_t page_len; + u8 *from_buff; + int ret; + + page_offset = (*pos) % PAGE_SIZE; + page = mlx5vf_get_migration_page(migf, *pos - page_offset); + if (!page) { + if (done == 0) + done = -EINVAL; + goto out_unlock; + } + + page_len = min_t(size_t, len, PAGE_SIZE - page_offset); + from_buff = kmap_local_page(page); + ret = copy_to_user(buf, from_buff + page_offset, page_len); + kunmap_local(from_buff); + if (ret) { + done = -EFAULT; + goto out_unlock; + } + *pos += page_len; + len -= page_len; + done += page_len; + buf += page_len; + } + +out_unlock: + mutex_unlock(&migf->lock); + return done; +} + +static const struct file_operations mlx5vf_save_fops = { + .owner = THIS_MODULE, + .read = mlx5vf_save_read, + .release = mlx5vf_release_file, + .llseek = no_llseek, +}; + +static struct mlx5_vf_migration_file * +mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev) +{ + struct mlx5_vf_migration_file *migf; + int ret; + + migf = kzalloc(sizeof(*migf), GFP_KERNEL); + if (!migf) + return ERR_PTR(-ENOMEM); + + migf->filp = anon_inode_getfile("mlx5vf_mig", &mlx5vf_save_fops, migf, + O_RDONLY); + if (IS_ERR(migf->filp)) { + int err = PTR_ERR(migf->filp); + + kfree(migf); + return ERR_PTR(err); + } + + stream_open(migf->filp->f_inode, migf->filp); + mutex_init(&migf->lock); + + ret = mlx5vf_cmd_query_vhca_migration_state( + mvdev->core_device.pdev, mvdev->vhca_id, &migf->total_length); + if (ret) + goto out_free; + + ret = mlx5vf_add_migration_pages( + migf, DIV_ROUND_UP_ULL(migf->total_length, PAGE_SIZE)); + if (ret) + goto out_free; + + ret = mlx5vf_cmd_save_vhca_state(mvdev->core_device.pdev, + mvdev->vhca_id, migf); + if (ret) + goto out_free; + return migf; +out_free: + fput(migf->filp); + return ERR_PTR(ret); +} + +static ssize_t mlx5vf_resume_write(struct file *filp, const char __user *buf, + size_t len, loff_t *pos) +{ + struct mlx5_vf_migration_file *migf = filp->private_data; + loff_t requested_length; + ssize_t done = 0; + + if (pos) + return -ESPIPE; + pos = &filp->f_pos; + + if (*pos < 0 || + check_add_overflow((loff_t)len, *pos, &requested_length)) + return -EINVAL; + + if (requested_length > MAX_MIGRATION_SIZE) + return -ENOMEM; + + mutex_lock(&migf->lock); + if (migf->disabled) { + done = -ENODEV; + goto out_unlock; + } + + if (migf->allocated_length < requested_length) { + done = mlx5vf_add_migration_pages( + migf, + DIV_ROUND_UP(requested_length - migf->allocated_length, + PAGE_SIZE)); + if (done) + goto out_unlock; + } + + while (len) { + size_t page_offset; + struct page *page; + size_t page_len; + u8 *to_buff; + int ret; + + page_offset = (*pos) % PAGE_SIZE; + page = mlx5vf_get_migration_page(migf, *pos - page_offset); + if (!page) { + if (done == 0) + done = -EINVAL; + goto out_unlock; + } + + page_len = min_t(size_t, len, PAGE_SIZE - page_offset); + to_buff = kmap_local_page(page); + ret = copy_from_user(to_buff + page_offset, buf, page_len); + kunmap_local(to_buff); + if (ret) { + done = -EFAULT; + goto out_unlock; + } + *pos += page_len; + len -= page_len; + done += page_len; + buf += page_len; + migf->total_length += page_len; + } +out_unlock: + mutex_unlock(&migf->lock); + return done; +} + +static const struct file_operations mlx5vf_resume_fops = { + .owner = THIS_MODULE, + .write = mlx5vf_resume_write, + .release = mlx5vf_release_file, + .llseek = no_llseek, +}; + +static struct mlx5_vf_migration_file * +mlx5vf_pci_resume_device_data(struct 
mlx5vf_pci_core_device *mvdev) +{ + struct mlx5_vf_migration_file *migf; + + migf = kzalloc(sizeof(*migf), GFP_KERNEL); + if (!migf) + return ERR_PTR(-ENOMEM); + + migf->filp = anon_inode_getfile("mlx5vf_mig", &mlx5vf_resume_fops, migf, + O_WRONLY); + if (IS_ERR(migf->filp)) { + int err = PTR_ERR(migf->filp); + + kfree(migf); + return ERR_PTR(err); + } + stream_open(migf->filp->f_inode, migf->filp); + mutex_init(&migf->lock); + return migf; +} + +static void mlx5vf_disable_fds(struct mlx5vf_pci_core_device *mvdev) +{ + if (mvdev->resuming_migf) { + mlx5vf_disable_fd(mvdev->resuming_migf); + fput(mvdev->resuming_migf->filp); + mvdev->resuming_migf = NULL; + } + if (mvdev->saving_migf) { + mlx5vf_disable_fd(mvdev->saving_migf); + fput(mvdev->saving_migf->filp); + mvdev->saving_migf = NULL; + } +} + +static struct file * +mlx5vf_pci_step_device_state_locked(struct mlx5vf_pci_core_device *mvdev, + u32 new) +{ + u32 cur = mvdev->mig_state; + int ret; + + if (cur == VFIO_DEVICE_STATE_RUNNING_P2P && new == VFIO_DEVICE_STATE_STOP) { + ret = mlx5vf_cmd_suspend_vhca( + mvdev->core_device.pdev, mvdev->vhca_id, + MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_SLAVE); + if (ret) + return ERR_PTR(ret); + return NULL; + } + + if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_RUNNING_P2P) { + ret = mlx5vf_cmd_resume_vhca( + mvdev->core_device.pdev, mvdev->vhca_id, + MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_SLAVE); + if (ret) + return ERR_PTR(ret); + return NULL; + } + + if (cur == VFIO_DEVICE_STATE_RUNNING && new == VFIO_DEVICE_STATE_RUNNING_P2P) { + ret = mlx5vf_cmd_suspend_vhca( + mvdev->core_device.pdev, mvdev->vhca_id, + MLX5_SUSPEND_VHCA_IN_OP_MOD_SUSPEND_MASTER); + if (ret) + return ERR_PTR(ret); + return NULL; + } + + if (cur == VFIO_DEVICE_STATE_RUNNING_P2P && new == VFIO_DEVICE_STATE_RUNNING) { + ret = mlx5vf_cmd_resume_vhca( + mvdev->core_device.pdev, mvdev->vhca_id, + MLX5_RESUME_VHCA_IN_OP_MOD_RESUME_MASTER); + if (ret) + return ERR_PTR(ret); + return NULL; + } + + if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_STOP_COPY) { + struct mlx5_vf_migration_file *migf; + + migf = mlx5vf_pci_save_device_data(mvdev); + if (IS_ERR(migf)) + return ERR_CAST(migf); + get_file(migf->filp); + mvdev->saving_migf = migf; + return migf->filp; + } + + if ((cur == VFIO_DEVICE_STATE_STOP_COPY && new == VFIO_DEVICE_STATE_STOP)) { + mlx5vf_disable_fds(mvdev); + return 0; + } + + if (cur == VFIO_DEVICE_STATE_STOP && new == VFIO_DEVICE_STATE_RESUMING) { + struct mlx5_vf_migration_file *migf; + + migf = mlx5vf_pci_resume_device_data(mvdev); + if (IS_ERR(migf)) + return ERR_CAST(migf); + get_file(migf->filp); + mvdev->resuming_migf = migf; + return migf->filp; + } + + if (cur == VFIO_DEVICE_STATE_RESUMING && new == VFIO_DEVICE_STATE_STOP) { + ret = mlx5vf_cmd_load_vhca_state(mvdev->core_device.pdev, + mvdev->vhca_id, + mvdev->resuming_migf); + if (ret) + return ERR_PTR(ret); + mlx5vf_disable_fds(mvdev); + return 0; + } + + /* + * vfio_mig_get_next_state() does not use arcs other than the above + */ + WARN_ON(true); + return ERR_PTR(-EINVAL); +} + +static struct file * +mlx5vf_pci_set_device_state(struct vfio_device *vdev, + enum vfio_device_mig_state new_state) +{ + struct mlx5vf_pci_core_device *mvdev = container_of( + vdev, struct mlx5vf_pci_core_device, core_device.vdev); + enum vfio_device_mig_state next_state; + struct file *res = NULL; + int ret; + + mutex_lock(&mvdev->state_mutex); + while (new_state != mvdev->mig_state) { + ret = vfio_mig_get_next_state(vdev, mvdev->mig_state, + new_state, &next_state); 
+ if (ret) { + res = ERR_PTR(ret); + break; + } + res = mlx5vf_pci_step_device_state_locked(mvdev, next_state); + if (IS_ERR(res)) + break; + mvdev->mig_state = next_state; + if (WARN_ON(res && new_state != mvdev->mig_state)) { + fput(res); + res = ERR_PTR(-EINVAL); + break; + } + } + mutex_unlock(&mvdev->state_mutex); + return res; +} + +static int mlx5vf_pci_get_device_state(struct vfio_device *vdev, + enum vfio_device_mig_state *curr_state) +{ + struct mlx5vf_pci_core_device *mvdev = container_of( + vdev, struct mlx5vf_pci_core_device, core_device.vdev); + + mutex_lock(&mvdev->state_mutex); + *curr_state = mvdev->mig_state; + mutex_unlock(&mvdev->state_mutex); + return 0; +} + +static int mlx5vf_pci_open_device(struct vfio_device *core_vdev) +{ + struct mlx5vf_pci_core_device *mvdev = container_of( + core_vdev, struct mlx5vf_pci_core_device, core_device.vdev); + struct vfio_pci_core_device *vdev = &mvdev->core_device; + int vf_id; + int ret; + + ret = vfio_pci_core_enable(vdev); + if (ret) + return ret; + + if (!mvdev->migrate_cap) { + vfio_pci_core_finish_enable(vdev); + return 0; + } + + vf_id = pci_iov_vf_id(vdev->pdev); + if (vf_id < 0) { + ret = vf_id; + goto out_disable; + } + + ret = mlx5vf_cmd_get_vhca_id(vdev->pdev, vf_id + 1, &mvdev->vhca_id); + if (ret) + goto out_disable; + + mvdev->mig_state = VFIO_DEVICE_STATE_RUNNING; + vfio_pci_core_finish_enable(vdev); + return 0; +out_disable: + vfio_pci_core_disable(vdev); + return ret; +} + +static void mlx5vf_pci_close_device(struct vfio_device *core_vdev) +{ + struct mlx5vf_pci_core_device *mvdev = container_of( + core_vdev, struct mlx5vf_pci_core_device, core_device.vdev); + + mlx5vf_disable_fds(mvdev); + vfio_pci_core_close_device(core_vdev); +} + +static const struct vfio_device_ops mlx5vf_pci_ops = { + .name = "mlx5-vfio-pci", + .open_device = mlx5vf_pci_open_device, + .close_device = mlx5vf_pci_close_device, + .ioctl = vfio_pci_core_ioctl, + .device_feature = vfio_pci_core_ioctl_feature, + .read = vfio_pci_core_read, + .write = vfio_pci_core_write, + .mmap = vfio_pci_core_mmap, + .request = vfio_pci_core_request, + .match = vfio_pci_core_match, + .migration_set_state = mlx5vf_pci_set_device_state, + .migration_get_state = mlx5vf_pci_get_device_state, +}; + +static int mlx5vf_pci_probe(struct pci_dev *pdev, + const struct pci_device_id *id) +{ + struct mlx5vf_pci_core_device *mvdev; + int ret; + + mvdev = kzalloc(sizeof(*mvdev), GFP_KERNEL); + if (!mvdev) + return -ENOMEM; + vfio_pci_core_init_device(&mvdev->core_device, pdev, &mlx5vf_pci_ops); + + if (pdev->is_virtfn) { + struct mlx5_core_dev *mdev = + mlx5_vf_get_core_dev(pdev); + + if (mdev) { + if (MLX5_CAP_GEN(mdev, migration)) { + mvdev->migrate_cap = 1; + mvdev->core_device.vdev.migration_flags = + VFIO_MIGRATION_STOP_COPY | + VFIO_MIGRATION_P2P; + mutex_init(&mvdev->state_mutex); + } + mlx5_vf_put_core_dev(mdev); + } + } + + ret = vfio_pci_core_register_device(&mvdev->core_device); + if (ret) + goto out_free; + + dev_set_drvdata(&pdev->dev, mvdev); + return 0; + +out_free: + vfio_pci_core_uninit_device(&mvdev->core_device); + kfree(mvdev); + return ret; +} + +static void mlx5vf_pci_remove(struct pci_dev *pdev) +{ + struct mlx5vf_pci_core_device *mvdev = dev_get_drvdata(&pdev->dev); + + vfio_pci_core_unregister_device(&mvdev->core_device); + vfio_pci_core_uninit_device(&mvdev->core_device); + kfree(mvdev); +} + +static const struct pci_device_id mlx5vf_pci_table[] = { + { PCI_DRIVER_OVERRIDE_DEVICE_VFIO(PCI_VENDOR_ID_MELLANOX, 0x101e) }, /* ConnectX Family mlx5Gen 
Virtual Function */
+	{}
+};
+
+MODULE_DEVICE_TABLE(pci, mlx5vf_pci_table);
+
+static struct pci_driver mlx5vf_pci_driver = {
+	.name = KBUILD_MODNAME,
+	.id_table = mlx5vf_pci_table,
+	.probe = mlx5vf_pci_probe,
+	.remove = mlx5vf_pci_remove,
+};
+
+static void __exit mlx5vf_pci_cleanup(void)
+{
+	pci_unregister_driver(&mlx5vf_pci_driver);
+}
+
+static int __init mlx5vf_pci_init(void)
+{
+	return pci_register_driver(&mlx5vf_pci_driver);
+}
+
+module_init(mlx5vf_pci_init);
+module_exit(mlx5vf_pci_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Max Gurtovoy ");
+MODULE_AUTHOR("Yishai Hadas ");
+MODULE_DESCRIPTION(
+	"MLX5 VFIO PCI - User Level meta-driver for MLX5 device family");

From patchwork Mon Feb 7 17:22:14 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737758
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 13/15] vfio/pci: Expose vfio_pci_core_aer_err_detected()
Date: Mon, 7 Feb 2022 19:22:14 +0200
Message-ID: <20220207172216.206415-14-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
Expose vfio_pci_core_aer_err_detected() to be used by drivers as part of their pci_error_handlers structure. Next patch for mlx5 driver will use it.
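For illustration only (not part of this patch): a minimal sketch of how a vendor variant driver could reuse the exported helper in its own pci_error_handlers while adding a device-specific reset_done hook. The myvf_* names are placeholders invented for this example; the next patch in the series wires this up for real in the mlx5 driver.

#include <linux/pci.h>
#include <linux/vfio_pci_core.h>

/* Hypothetical variant driver: keep the core AER error_detected behavior */
static void myvf_pci_aer_reset_done(struct pci_dev *pdev)
{
	/* device-specific cleanup after a PCI reset would go here */
}

static const struct pci_error_handlers myvf_err_handlers = {
	.reset_done = myvf_pci_aer_reset_done,
	.error_detected = vfio_pci_core_aer_err_detected,
};

static struct pci_driver myvf_pci_driver = {
	.name = KBUILD_MODNAME,
	/* .id_table, .probe, .remove as usual for a vfio-pci variant driver */
	.err_handler = &myvf_err_handlers,
};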
Signed-off-by: Yishai Hadas --- drivers/vfio/pci/vfio_pci_core.c | 7 ++++--- include/linux/vfio_pci_core.h | 2 ++ 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index 106e1970d653..e301092e94ef 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -1871,8 +1871,8 @@ void vfio_pci_core_unregister_device(struct vfio_pci_core_device *vdev) } EXPORT_SYMBOL_GPL(vfio_pci_core_unregister_device); -static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev, - pci_channel_state_t state) +pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev, + pci_channel_state_t state) { struct vfio_pci_core_device *vdev; struct vfio_device *device; @@ -1894,6 +1894,7 @@ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev, return PCI_ERS_RESULT_CAN_RECOVER; } +EXPORT_SYMBOL_GPL(vfio_pci_core_aer_err_detected); int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn) { @@ -1916,7 +1917,7 @@ int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn) EXPORT_SYMBOL_GPL(vfio_pci_core_sriov_configure); const struct pci_error_handlers vfio_pci_core_err_handlers = { - .error_detected = vfio_pci_aer_err_detected, + .error_detected = vfio_pci_core_aer_err_detected, }; EXPORT_SYMBOL_GPL(vfio_pci_core_err_handlers); diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index beba0b2ed87d..9f1bf8e49d43 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -232,6 +232,8 @@ int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf); int vfio_pci_core_enable(struct vfio_pci_core_device *vdev); void vfio_pci_core_disable(struct vfio_pci_core_device *vdev); void vfio_pci_core_finish_enable(struct vfio_pci_core_device *vdev); +pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev, + pci_channel_state_t state); static inline bool vfio_pci_is_vga(struct pci_dev *pdev) {

From patchwork Mon Feb 7 17:22:15 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737759
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 14/15] vfio/mlx5: Use its own PCI reset_done error handler
Date: Mon, 7 Feb 2022 19:22:15 +0200
Message-ID: <20220207172216.206415-15-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
Register its own handler for pci_error_handlers.reset_done and update state accordingly.
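As a reading aid for the diff below, the deferral scheme it adds can be restated in condensed form. reset_done must not sleep on state_mutex (the higher VFIO layers hold locks across reset that could ABBA-deadlock with state_mutex and mm_lock), so it records the reset under a spinlock and, if it cannot take state_mutex immediately, leaves the cleanup to whichever context currently owns the mutex; that owner performs it in mlx5vf_state_mutex_unlock(). Condensed from the patch itself, with added comments:

static void mlx5vf_pci_aer_reset_done(struct pci_dev *pdev)
{
	struct mlx5vf_pci_core_device *mvdev = dev_get_drvdata(&pdev->dev);

	if (!mvdev->migrate_cap)
		return;

	spin_lock(&mvdev->reset_lock);
	mvdev->deferred_reset = true;
	if (!mutex_trylock(&mvdev->state_mutex)) {
		/* another context owns state_mutex; it cleans up when it unlocks */
		spin_unlock(&mvdev->reset_lock);
		return;
	}
	spin_unlock(&mvdev->reset_lock);
	/* we own state_mutex: the unlock helper sees deferred_reset and cleans up */
	mlx5vf_state_mutex_unlock(mvdev);
}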
Signed-off-by: Yishai Hadas Signed-off-by: Leon Romanovsky Signed-off-by: Jason Gunthorpe --- drivers/vfio/pci/mlx5/main.c | 57 ++++++++++++++++++++++++++++++++++-- 1 file changed, 55 insertions(+), 2 deletions(-) diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c index acd205bcff70..63a889210ef3 100644 --- a/drivers/vfio/pci/mlx5/main.c +++ b/drivers/vfio/pci/mlx5/main.c @@ -28,9 +28,12 @@ struct mlx5vf_pci_core_device { struct vfio_pci_core_device core_device; u8 migrate_cap:1; + u8 deferred_reset:1; /* protect migration state */ struct mutex state_mutex; enum vfio_device_mig_state mig_state; + /* protect the reset_done flow */ + spinlock_t reset_lock; u16 vhca_id; struct mlx5_vf_migration_file *resuming_migf; struct mlx5_vf_migration_file *saving_migf; @@ -437,6 +440,25 @@ mlx5vf_pci_step_device_state_locked(struct mlx5vf_pci_core_device *mvdev, return ERR_PTR(-EINVAL); } +/* + * This function is called in all state_mutex unlock cases to + * handle a 'deferred_reset' if exists. + */ +static void mlx5vf_state_mutex_unlock(struct mlx5vf_pci_core_device *mvdev) +{ +again: + spin_lock(&mvdev->reset_lock); + if (mvdev->deferred_reset) { + mvdev->deferred_reset = false; + spin_unlock(&mvdev->reset_lock); + mvdev->mig_state = VFIO_DEVICE_STATE_RUNNING; + mlx5vf_disable_fds(mvdev); + goto again; + } + mutex_unlock(&mvdev->state_mutex); + spin_unlock(&mvdev->reset_lock); +} + static struct file * mlx5vf_pci_set_device_state(struct vfio_device *vdev, enum vfio_device_mig_state new_state) @@ -465,7 +487,7 @@ mlx5vf_pci_set_device_state(struct vfio_device *vdev, break; } } - mutex_unlock(&mvdev->state_mutex); + mlx5vf_state_mutex_unlock(mvdev); return res; } @@ -477,10 +499,34 @@ static int mlx5vf_pci_get_device_state(struct vfio_device *vdev, mutex_lock(&mvdev->state_mutex); *curr_state = mvdev->mig_state; - mutex_unlock(&mvdev->state_mutex); + mlx5vf_state_mutex_unlock(mvdev); return 0; } +static void mlx5vf_pci_aer_reset_done(struct pci_dev *pdev) +{ + struct mlx5vf_pci_core_device *mvdev = dev_get_drvdata(&pdev->dev); + + if (!mvdev->migrate_cap) + return; + + /* + * As the higher VFIO layers are holding locks across reset and using + * those same locks with the mm_lock we need to prevent ABBA deadlock + * with the state_mutex and mm_lock. + * In case the state_mutex was taken already we defer the cleanup work + * to the unlock flow of the other running context. 
+ */ + spin_lock(&mvdev->reset_lock); + mvdev->deferred_reset = true; + if (!mutex_trylock(&mvdev->state_mutex)) { + spin_unlock(&mvdev->reset_lock); + return; + } + spin_unlock(&mvdev->reset_lock); + mlx5vf_state_mutex_unlock(mvdev); +} + static int mlx5vf_pci_open_device(struct vfio_device *core_vdev) { struct mlx5vf_pci_core_device *mvdev = container_of( @@ -562,6 +608,7 @@ static int mlx5vf_pci_probe(struct pci_dev *pdev, VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P; mutex_init(&mvdev->state_mutex); + spin_lock_init(&mvdev->reset_lock); } mlx5_vf_put_core_dev(mdev); } @@ -596,11 +643,17 @@ static const struct pci_device_id mlx5vf_pci_table[] = { MODULE_DEVICE_TABLE(pci, mlx5vf_pci_table); +static const struct pci_error_handlers mlx5vf_err_handlers = { + .reset_done = mlx5vf_pci_aer_reset_done, + .error_detected = vfio_pci_core_aer_err_detected, +}; + static struct pci_driver mlx5vf_pci_driver = { .name = KBUILD_MODNAME, .id_table = mlx5vf_pci_table, .probe = mlx5vf_pci_probe, .remove = mlx5vf_pci_remove, + .err_handler = &mlx5vf_err_handlers, }; static void __exit mlx5vf_pci_cleanup(void)

From patchwork Mon Feb 7 17:22:16 2022
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 12737755
From: Yishai Hadas
Subject: [PATCH V7 mlx5-next 15/15] vfio: Extend the device migration protocol with PRE_COPY
Date: Mon, 7 Feb 2022 19:22:16 +0200
Message-ID: <20220207172216.206415-16-yishaih@nvidia.com>
In-Reply-To: <20220207172216.206415-1-yishaih@nvidia.com>
References: <20220207172216.206415-1-yishaih@nvidia.com>
From: Jason Gunthorpe

The optional PRE_COPY states open the saving data transfer FD before reaching STOP_COPY and allows the device to dirty track internal state changes with the general idea to reduce the volume of data transferred in the STOP_COPY stage. While in PRE_COPY the device remains RUNNING, but the saving FD is open. Only if the device also supports RUNNING_P2P can it support PRE_COPY_P2P, which halts P2P transfers while continuing the saving FD. PRE_COPY, with P2P support, requires the driver to implement 7 new arcs and exists as an optional FSM branch between RUNNING and STOP_COPY: RUNNING -> PRE_COPY -> PRE_COPY_P2P -> STOP_COPY A new ioctl VFIO_DEVICE_MIG_PRECOPY is provided to allow userspace to query the progress of the precopy operation in the driver with the idea it will judge to move to STOP_COPY at least once the initial data set is transferred, and possibly after the dirty size has shrunk appropriately. We think there may also be merit in future extensions to the VFIO_DEVICE_MIG_PRECOPY ioctl to also command the device to throttle the rate it generates internal dirty state. Compared to the v1 clarification, STOP_COPY -> PRE_COPY is made optional and to be defined in future. While making the whole PRE_COPY feature optional eliminates the concern from mlx5, this is still a complicated arc to implement and seems prudent to leave it closed until a proper use case is developed.
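(Illustration only, not part of the patch: a rough userspace sketch of using the new ioctl on the saving data FD to decide when to leave PRE_COPY, assuming a kernel with this series applied so that VFIO_DEVICE_MIG_PRECOPY and struct vfio_device_mig_precopy are available from <linux/vfio.h>. Error handling is trimmed and the 1 MiB dirty threshold is an arbitrary example policy; the state transitions themselves happen elsewhere via the migration FSM.)

#include <errno.h>
#include <poll.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vfio.h>

static bool ready_for_stop_copy(int data_fd)
{
	struct vfio_device_mig_precopy precopy = {
		.argsz = sizeof(precopy),
	};

	if (ioctl(data_fd, VFIO_DEVICE_MIG_PRECOPY, &precopy) < 0)
		return false;

	/* leave PRE_COPY once the initial set is sent and the dirty estimate is small */
	return precopy.initial_bytes == 0 && precopy.dirty_bytes < (1 << 20);
}

static void pump_precopy_data(int data_fd)
{
	struct pollfd pfd = { .fd = data_fd, .events = POLLIN };
	char buf[4096];
	ssize_t n;

	for (;;) {
		n = read(data_fd, buf, sizeof(buf));
		if (n > 0)
			continue;	/* a real manager forwards buf to the destination */
		if (n < 0 && errno == ENOMSG) {
			/* temporary end of stream: wait for more precopy data */
			poll(&pfd, 1, -1);
			continue;
		}
		break;	/* n == 0 is the permanent EOF after STOP_COPY, else an error */
	}
}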
We also split the pending_bytes report into the initial and sustaining values, and define the protocol to get an event via poll() for new dirty data during PRE_COPY. Signed-off-by: Jason Gunthorpe Signed-off-by: Yishai Hadas --- drivers/vfio/vfio.c | 71 +++++++++++++++++++++++- include/uapi/linux/vfio.h | 110 ++++++++++++++++++++++++++++++++++++-- 2 files changed, 176 insertions(+), 5 deletions(-) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index 8c484593dfe0..b4c585114ef3 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -1577,7 +1577,7 @@ int vfio_mig_get_next_state(struct vfio_device *device, enum vfio_device_mig_state new_fsm, enum vfio_device_mig_state *next_fsm) { - enum { VFIO_DEVICE_NUM_STATES = VFIO_DEVICE_STATE_RUNNING_P2P + 1 }; + enum { VFIO_DEVICE_NUM_STATES = VFIO_DEVICE_STATE_PRE_COPY_P2P + 1 }; /* * The coding in this table requires the driver to implement * FSM arcs: @@ -1596,25 +1596,59 @@ int vfio_mig_get_next_state(struct vfio_device *device, * RUNNING -> STOP * STOP -> RUNNING * + * If precopy is supported then the driver must support these additional + * FSM arcs: + * RUNNING -> PRE_COPY + * PRE_COPY -> RUNNING + * PRE_COPY -> STOP_COPY + * However, if precopy and P2P are supported together then the driver + * must support these additional arcs beyond the P2P arcs above: + * PRE_COPY -> RUNNING + * PRE_COPY -> PRE_COPY_P2P + * PRE_COPY_P2P -> PRE_COPY + * PRE_COPY_P2P -> RUNNING_P2P + * PRE_COPY_P2P -> STOP_COPY + * RUNNING -> PRE_COPY + * RUNNING_P2P -> PRE_COPY_P2P + * * If all optional features are supported then the coding will step * through multiple states for these combination transitions: + * PRE_COPY -> PRE_COPY_P2P -> STOP_COPY + * PRE_COPY -> RUNNING -> RUNNING_P2P + * PRE_COPY -> RUNNING -> RUNNING_P2P -> STOP + * PRE_COPY -> RUNNING -> RUNNING_P2P -> STOP -> RESUMING + * PRE_COPY_P2P -> RUNNING_P2P -> RUNNING + * PRE_COPY_P2P -> RUNNING_P2P -> STOP + * PRE_COPY_P2P -> RUNNING_P2P -> STOP -> RESUMING * RESUMING -> STOP -> RUNNING_P2P + * RESUMING -> STOP -> RUNNING_P2P -> PRE_COPY_P2P * RESUMING -> STOP -> RUNNING_P2P -> RUNNING + * RESUMING -> STOP -> RUNNING_P2P -> RUNNING -> PRE_COPY * RESUMING -> STOP -> STOP_COPY + * RUNNING -> RUNNING_P2P -> PRE_COPY_P2P * RUNNING -> RUNNING_P2P -> STOP * RUNNING -> RUNNING_P2P -> STOP -> RESUMING * RUNNING -> RUNNING_P2P -> STOP -> STOP_COPY + * RUNNING_P2P -> RUNNING -> PRE_COPY * RUNNING_P2P -> STOP -> RESUMING * RUNNING_P2P -> STOP -> STOP_COPY + * STOP -> RUNNING_P2P -> PRE_COPY_P2P * STOP -> RUNNING_P2P -> RUNNING + * STOP -> RUNNING_P2P -> RUNNING -> PRE_COPY * STOP_COPY -> STOP -> RESUMING * STOP_COPY -> STOP -> RUNNING_P2P * STOP_COPY -> STOP -> RUNNING_P2P -> RUNNING + * + * The following transitions are blocked: + * STOP_COPY -> PRE_COPY + * STOP_COPY -> PRE_COPY_P2P */ static const u8 vfio_from_fsm_table[VFIO_DEVICE_NUM_STATES][VFIO_DEVICE_NUM_STATES] = { [VFIO_DEVICE_STATE_STOP] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RESUMING, [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, @@ -1623,14 +1657,38 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_RUNNING] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_RUNNING_P2P, 
[VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_PRE_COPY, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, }, + [VFIO_DEVICE_STATE_PRE_COPY] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_PRE_COPY, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_PRE_COPY_P2P, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_PRE_COPY_P2P, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = { + [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_PRE_COPY, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_PRE_COPY_P2P, + [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, + [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, + [VFIO_DEVICE_STATE_ERROR] = VFIO_DEVICE_STATE_ERROR, + }, [VFIO_DEVICE_STATE_STOP_COPY] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP_COPY, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_STOP, @@ -1639,6 +1697,8 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_RESUMING] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_STOP, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_RESUMING, [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_STOP, @@ -1647,6 +1707,8 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_RUNNING_P2P] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_RUNNING, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_PRE_COPY_P2P, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_STOP, [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_RUNNING_P2P, @@ -1655,6 +1717,8 @@ int vfio_mig_get_next_state(struct vfio_device *device, [VFIO_DEVICE_STATE_ERROR] = { [VFIO_DEVICE_STATE_STOP] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_RUNNING] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_PRE_COPY] = VFIO_DEVICE_STATE_ERROR, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_RESUMING] = VFIO_DEVICE_STATE_ERROR, [VFIO_DEVICE_STATE_RUNNING_P2P] = VFIO_DEVICE_STATE_ERROR, @@ -1665,6 +1729,11 @@ int vfio_mig_get_next_state(struct vfio_device 
*device, static const unsigned int state_flags_table[VFIO_DEVICE_NUM_STATES] = { [VFIO_DEVICE_STATE_STOP] = VFIO_MIGRATION_STOP_COPY, [VFIO_DEVICE_STATE_RUNNING] = VFIO_MIGRATION_STOP_COPY, + [VFIO_DEVICE_STATE_PRE_COPY] = + VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_PRE_COPY, + [VFIO_DEVICE_STATE_PRE_COPY_P2P] = VFIO_MIGRATION_STOP_COPY | + VFIO_MIGRATION_P2P | + VFIO_MIGRATION_PRE_COPY, [VFIO_DEVICE_STATE_STOP_COPY] = VFIO_MIGRATION_STOP_COPY, [VFIO_DEVICE_STATE_RESUMING] = VFIO_MIGRATION_STOP_COPY, [VFIO_DEVICE_STATE_RUNNING_P2P] = diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index 227f55d57e06..6424c5b3415b 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -817,12 +817,20 @@ struct vfio_device_feature { * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P means that RUNNING_P2P * is supported in addition to the STOP_COPY states. * + * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_PRE_COPY means that + * PRE_COPY is supported in addition to the STOP_COPY states. + * + * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY + * means that RUNNING_P2P, PRE_COPY and PRE_COPY_P2P are supported + * in addition to the STOP_COPY states. + * * Other combinations of flags have behavior to be defined in the future. */ struct vfio_device_feature_migration { __aligned_u64 flags; #define VFIO_MIGRATION_STOP_COPY (1 << 0) #define VFIO_MIGRATION_P2P (1 << 1) +#define VFIO_MIGRATION_PRE_COPY (1 << 2) }; #define VFIO_DEVICE_FEATURE_MIGRATION 1 @@ -873,8 +881,13 @@ struct vfio_device_feature_mig_state { * RESUMING - The device is stopped and is loading a new internal state * ERROR - The device has failed and must be reset * - * And 1 optional state to support VFIO_MIGRATION_P2P: + * And optional states to support VFIO_MIGRATION_P2P: * RUNNING_P2P - RUNNING, except the device cannot do peer to peer DMA + * And VFIO_MIGRATION_PRE_COPY: + * PRE_COPY - The device is running normally but tracking internal state + * changes + * And VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY: + * PRE_COPY_P2P - PRE_COPY, except the device cannot do peer to peer DMA * * The FSM takes actions on the arcs between FSM states. The driver implements * the following behavior for the FSM arcs: @@ -906,20 +919,48 @@ struct vfio_device_feature_mig_state { * * To abort a RESUMING session the device must be reset. * + * PRE_COPY -> RUNNING * RUNNING_P2P -> RUNNING * While in RUNNING the device is fully operational, the device may generate * interrupts, DMA, respond to MMIO, all vfio device regions are functional, * and the device may advance its internal state. * + * The PRE_COPY arc will terminate a data transfer session. + * + * PRE_COPY_P2P -> RUNNING_P2P * RUNNING -> RUNNING_P2P * STOP -> RUNNING_P2P * While in RUNNING_P2P the device is partially running in the P2P quiescent * state defined below. * + * The PRE_COPY arc will terminate a data transfer session. + * + * RUNNING -> PRE_COPY + * RUNNING_P2P -> PRE_COPY_P2P * STOP -> STOP_COPY - * This arc begin the process of saving the device state and will return a - * new data_fd. + * PRE_COPY, PRE_COPY_P2P and STOP_COPY form the "saving group" of states + * which share a data transfer session. Moving between these states alters + * what is streamed in session, but does not terminate or otherwise effect + * the associated fd. + * + * These arcs begin the process of saving the device state and will return a + * new data_fd. 
The migration driver may perform actions such as enabling + * dirty logging of device state when entering PRE_COPY or PER_COPY_P2P. * + * Each arc does not change the device operation, the device remains + * RUNNING, P2P quiesced or in STOP. The STOP_COPY state is described below + * in PRE_COPY_P2P -> STOP_COPY. + * + * PRE_COPY -> PRE_COPY_P2P + * Entering PRE_COPY_P2P continues all the behaviors of PRE_COPY above. + * However, while in the PRE_COPY_P2P state, the device is partially running + * in the P2P quiescent state defined below, like RUNNING_P2P. + * + * PRE_COPY_P2P -> PRE_COPY + * This arc allows returning the device to a full RUNNING behavior while + * continuing all the behaviors of PRE_COPY. + * + * PRE_COPY_P2P -> STOP_COPY * While in the STOP_COPY state the device has the same behavior as STOP * with the addition that the data transfers session continues to stream the * migration state. End of stream on the FD indicates the entire device @@ -937,6 +978,13 @@ struct vfio_device_feature_mig_state { * internal device state for this arc if required to prepare the device to * receive the migration data. * + * STOP_COPY -> PRE_COPY + * STOP_COPY -> PRE_COPY_P2P + * These arcs are not permitted and return error if requested. Future + * revisions of this API may define behaviors for these arcs, in this case + * support will be discoverable by a new flag in + * VFIO_DEVICE_FEATURE_MIGRATION. + * * any -> ERROR * ERROR cannot be specified as a device state, however any transition request * can be failed with an errno return and may then move the device_state into @@ -948,7 +996,7 @@ struct vfio_device_feature_mig_state { * The optional peer to peer (P2P) quiescent state is intended to be a quiescent * state for the device for the purposes of managing multiple devices within a * user context where peer-to-peer DMA between devices may be active. The - * RUNNING_P2P states must prevent the device from initiating + * RUNNING_P2P and PRE_COPY_P2P states must prevent the device from initiating * any new P2P DMA transactions. If the device can identify P2P transactions * then it can stop only P2P DMA, otherwise it must stop all DMA. The migration * driver must complete any such outstanding operations prior to completing the @@ -959,6 +1007,8 @@ struct vfio_device_feature_mig_state { * above FSM arcs. As there are multiple paths through the FSM arcs the path * should be selected based on the following rules: * - Select the shortest path. + * - The path cannot have saving group states as interior arcs, only + * starting/end states. * Refer to vfio_mig_get_next_state() for the result of the algorithm. * * The automatic transit through the FSM arcs that make up the combination @@ -972,6 +1022,9 @@ struct vfio_device_feature_mig_state { * support them. The user can disocver if these states are supported by using * VFIO_DEVICE_FEATURE_MIGRATION. By using combination transitions the user can * avoid knowing about these optional states if the kernel driver supports them. + * + * Arcs touching PRE_COPY and PRE_COPY_P2P are removed if support for PRE_COPY + * is not present. 
*/ enum vfio_device_mig_state { VFIO_DEVICE_STATE_ERROR = 0, @@ -980,8 +1033,57 @@ enum vfio_device_mig_state { VFIO_DEVICE_STATE_STOP_COPY = 3, VFIO_DEVICE_STATE_RESUMING = 4, VFIO_DEVICE_STATE_RUNNING_P2P = 5, + VFIO_DEVICE_STATE_PRE_COPY = 6, + VFIO_DEVICE_STATE_PRE_COPY_P2P = 7, +}; + +/** + * VFIO_DEVICE_MIG_PRECOPY - _IO(VFIO_TYPE, VFIO_BASE + 21) + * + * This ioctl is used on the migration data FD in the precopy phase of the + * migration data transfer. It returns an estimate of the current data sizes + * remaining to be transferred. It allows the user to judge when it is + * appropriate to leave PRE_COPY for STOP_COPY. + * + * initial_bytes reflects the estimated remaining size of any initial mandatory + * precopy data transfer. When initial_bytes returns as zero then the initial + * phase of the precopy data is completed. Generally initial_bytes should start + * out as approximately the entire device state. + * + * dirty_bytes reflects an estimate for how much more data needs to be + * transferred to complete the migration. Generally it should start as zero + * and increase as internal state is dirtied. + * + * Drivers should attempt to return estimates so that initial_bytes + + * dirty_bytes matches the amount of data an immediate transition to STOP_COPY + * will require to be streamed. + * + * Drivers have alot of flexibility in when and what they transfer during the + * PRE_COPY phase, and how they report this from VFIO_DEVICE_MIG_PRECOPY. + * + * During pre-copy the migration data FD has a temporary "end of stream" that is + * reached when both initial_bytes and dirty_byte are zero. For instance, this + * may indicate that the device is idle and not currently dirtying any internal + * state. When read() is done on this temporary end of stream the kernel driver + * should return ENOMSG from read(). Userspace can wait for more data (which may + * never come) by using poll. + * + * Once in STOP_COPY the migration data FD has a permanent end of stream + * signaled in the usual way by read() always returning 0 and poll always + * returning readable. ENOMSG may not be returned in STOP_COPY. Support + * for this ioctl is optional. + * + * Return: 0 on success, -1 and errno set on failure. + */ +struct vfio_device_mig_precopy { + __u32 argsz; + __u32 flags; + __aligned_u64 initial_bytes; + __aligned_u64 dirty_bytes; }; +#define VFIO_DEVICE_MIG_PRECOPY _IO(VFIO_TYPE, VFIO_BASE + 21) + /* -------- API for Type1 VFIO IOMMU -------- */ /**