From patchwork Wed Mar 22 19:10:31 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13184510
From: Shannon Nelson
Subject: [PATCH v3 virtio 1/8] virtio: allow caller to override device id and DMA mask
Date: Wed, 22 Mar 2023 12:10:31 -0700
Message-ID: <20230322191038.44037-2-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org

To allow a bit of flexibility with various virtio-based devices, allow
the caller to specify a different device id and DMA mask.  This adds
fields to struct virtio_pci_modern_device to specify an override device
id check and a DMA mask.

Signed-off-by: Shannon Nelson
---
 drivers/virtio/virtio_pci_modern_dev.c | 36 +++++++++++++++++---------
 include/linux/virtio_pci_modern.h      |  6 +++++
 2 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index 869cb46bef96..6ad1bb9ae8fa 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -221,18 +221,25 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev)
 
 	check_offsets();
 
-	/* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */
-	if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f)
-		return -ENODEV;
-
-	if (pci_dev->device < 0x1040) {
-		/* Transitional devices: use the PCI subsystem device id as
-		 * virtio device id, same as legacy driver always did.
-		 */
-		mdev->id.device = pci_dev->subsystem_device;
+	if (mdev->device_id_check_override) {
+		err = mdev->device_id_check_override(pci_dev);
+		if (err)
+			return err;
+		mdev->id.device = pci_dev->device;
 	} else {
-		/* Modern devices: simply use PCI device id, but start from 0x1040. */
-		mdev->id.device = pci_dev->device - 0x1040;
+		/* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */
+		if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f)
+			return -ENODEV;
+
+		if (pci_dev->device < 0x1040) {
+			/* Transitional devices: use the PCI subsystem device id as
+			 * virtio device id, same as legacy driver always did.
+			 */
+			mdev->id.device = pci_dev->subsystem_device;
+		} else {
+			/* Modern devices: simply use PCI device id, but start from 0x1040. */
+			mdev->id.device = pci_dev->device - 0x1040;
+		}
 	}
 
 	mdev->id.vendor = pci_dev->subsystem_vendor;
@@ -260,7 +267,12 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev)
 		return -EINVAL;
 	}
 
-	err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (mdev->dma_mask_override)
+		err = dma_set_mask_and_coherent(&pci_dev->dev,
+						mdev->dma_mask_override);
+	else
+		err = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(64));
 	if (err)
 		err = dma_set_mask_and_coherent(&pci_dev->dev,
 						DMA_BIT_MASK(32));

diff --git a/include/linux/virtio_pci_modern.h b/include/linux/virtio_pci_modern.h
index c4eeb79b0139..84765bbd8dc5 100644
--- a/include/linux/virtio_pci_modern.h
+++ b/include/linux/virtio_pci_modern.h
@@ -38,6 +38,12 @@ struct virtio_pci_modern_device {
 	int modern_bars;
 
 	struct virtio_device_id id;
+
+	/* alt. check for vendor virtio device, return 0 or -ERRNO */
+	int (*device_id_check_override)(struct pci_dev *pdev);
+
+	/* alt. mask for devices with limited DMA space */
+	u64 dma_mask_override;
 };
 
 /*
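The id-resolution policy that this patch makes overridable can be modeled in plain C.  This is a userspace sketch only; `struct fake_pci_ids`, `dev_id_check_t`, and `resolve_virtio_dev_id()` are illustrative names, not the kernel API, and -1 stands in for the kernel's error returns:

```c
#include <assert.h>

/* Illustrative stand-in for the PCI ids vp_modern_probe() looks at. */
struct fake_pci_ids {
	unsigned short device;           /* PCI device id */
	unsigned short subsystem_device; /* PCI subsystem device id */
};

typedef int (*dev_id_check_t)(const struct fake_pci_ids *ids);

/* Returns the virtio device id, or -1 where the kernel code would
 * return -ENODEV or the override hook's error code. */
static int resolve_virtio_dev_id(const struct fake_pci_ids *ids,
				 dev_id_check_t override)
{
	if (override) {
		/* Vendor driver validates the id itself; the raw PCI
		 * device id is then used as the virtio device id. */
		if (override(ids))
			return -1;
		return ids->device;
	}

	/* Standard virtio-pci range: 0x1000..0x107f only. */
	if (ids->device < 0x1000 || ids->device > 0x107f)
		return -1;

	if (ids->device < 0x1040)
		return ids->subsystem_device;	/* transitional device */

	return ids->device - 0x1040;		/* modern device */
}
```

With no override hook, a modern virtio-net function at 0x1041 resolves to virtio device id 1, while a vendor device outside the 0x1000..0x107f range is rejected; with a hook, the vendor driver takes over both decisions.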
From patchwork Wed Mar 22 19:10:32 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13184512
From: Shannon Nelson
Subject: [PATCH v3 virtio 2/8] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
Date: Wed, 22 Mar 2023 12:10:32 -0700
Message-ID: <20230322191038.44037-3-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org

This is the initial auxiliary driver framework for a new vDPA device
driver, an auxiliary_bus client of the pds_core driver.  The pds_core
driver supplies the PCI services for the VF device and for accessing
the adminq in the PF device.

This patch adds the very basics of registering for the auxiliary
device and setting up debugfs entries.

Signed-off-by: Shannon Nelson
Acked-by: Jason Wang
---
 drivers/vdpa/Makefile        |  1 +
 drivers/vdpa/pds/Makefile    |  8 ++++
 drivers/vdpa/pds/aux_drv.c   | 84 ++++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/aux_drv.h   | 15 +++++++
 drivers/vdpa/pds/debugfs.c   | 29 +++++++++++++
 drivers/vdpa/pds/debugfs.h   | 18 ++++++++
 include/linux/pds/pds_vdpa.h | 10 +++++
 7 files changed, 165 insertions(+)
 create mode 100644 drivers/vdpa/pds/Makefile
 create mode 100644 drivers/vdpa/pds/aux_drv.c
 create mode 100644 drivers/vdpa/pds/aux_drv.h
 create mode 100644 drivers/vdpa/pds/debugfs.c
 create mode 100644 drivers/vdpa/pds/debugfs.h
 create mode 100644 include/linux/pds/pds_vdpa.h

diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile
index 59396ff2a318..8f53c6f3cca7 100644
--- a/drivers/vdpa/Makefile
+++ b/drivers/vdpa/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_MLX5_VDPA) += mlx5/
 obj-$(CONFIG_VP_VDPA)     += virtio_pci/
 obj-$(CONFIG_ALIBABA_ENI_VDPA) += alibaba/
 obj-$(CONFIG_SNET_VDPA) += solidrun/
+obj-$(CONFIG_PDS_VDPA) += pds/

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
new file mode 100644
index 000000000000..a9cd2f450ae1
--- /dev/null
+++ b/drivers/vdpa/pds/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright(c) 2023 Advanced Micro Devices, Inc
+
+obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
+
+pds_vdpa-y := aux_drv.o
+
+pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o

diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
new file mode 100644
index 000000000000..39c03f067b77
--- /dev/null
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include "aux_drv.h"
+#include "debugfs.h"
+
+static const struct auxiliary_device_id pds_vdpa_id_table[] = {
+	{ .name = PDS_VDPA_DEV_NAME, },
+	{},
+};
+
+static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
+			  const struct auxiliary_device_id *id)
+
+{
+	struct pds_auxiliary_dev *padev =
+		container_of(aux_dev, struct pds_auxiliary_dev, aux_dev);
+	struct pds_vdpa_aux *vdpa_aux;
+
+	vdpa_aux = kzalloc(sizeof(*vdpa_aux), GFP_KERNEL);
+	if (!vdpa_aux)
+		return -ENOMEM;
+
+	vdpa_aux->padev = padev;
+	auxiliary_set_drvdata(aux_dev, vdpa_aux);
+
+	return 0;
+}
+
+static void pds_vdpa_remove(struct auxiliary_device *aux_dev)
+{
+	struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev);
+	struct device *dev = &aux_dev->dev;
+
+	kfree(vdpa_aux);
+	auxiliary_set_drvdata(aux_dev, NULL);
+
+	dev_info(dev, "Removed\n");
+}
+
+static struct auxiliary_driver pds_vdpa_driver = {
+	.name = PDS_DEV_TYPE_VDPA_STR,
+	.probe = pds_vdpa_probe,
+	.remove = pds_vdpa_remove,
+	.id_table = pds_vdpa_id_table,
+};
+
+static void __exit pds_vdpa_cleanup(void)
+{
+	auxiliary_driver_unregister(&pds_vdpa_driver);
+
+	pds_vdpa_debugfs_destroy();
+}
+module_exit(pds_vdpa_cleanup);
+
+static int __init pds_vdpa_init(void)
+{
+	int err;
+
+	pds_vdpa_debugfs_create();
+
+	err = auxiliary_driver_register(&pds_vdpa_driver);
+	if (err) {
+		pr_err("%s: aux driver register failed: %pe\n",
+		       PDS_VDPA_DRV_NAME, ERR_PTR(err));
+		pds_vdpa_debugfs_destroy();
+	}
+
+	return err;
+}
+module_init(pds_vdpa_init);
+
+MODULE_DESCRIPTION(PDS_VDPA_DRV_DESCRIPTION);
+MODULE_AUTHOR("Advanced Micro Devices, Inc");
+MODULE_LICENSE("GPL");

diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h
new file mode 100644
index 000000000000..14e465944dfd
--- /dev/null
+++ b/drivers/vdpa/pds/aux_drv.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _AUX_DRV_H_
+#define _AUX_DRV_H_
+
+#define PDS_VDPA_DRV_DESCRIPTION "AMD/Pensando vDPA VF Device Driver"
+#define PDS_VDPA_DRV_NAME "pds_vdpa"
+
+struct pds_vdpa_aux {
+	struct pds_auxiliary_dev *padev;
+
+	struct dentry *dentry;
+};
+#endif /* _AUX_DRV_H_ */

diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
new file mode 100644
index 000000000000..12e844f96ccc
--- /dev/null
+++ b/drivers/vdpa/pds/debugfs.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#include
+
+#include
+#include
+#include
+#include
+
+#include "aux_drv.h"
+#include "debugfs.h"
+
+#ifdef CONFIG_DEBUG_FS
+
+static struct dentry *dbfs_dir;
+
+void pds_vdpa_debugfs_create(void)
+{
+	dbfs_dir = debugfs_create_dir(PDS_VDPA_DRV_NAME, NULL);
+}
+
+void pds_vdpa_debugfs_destroy(void)
+{
+	debugfs_remove_recursive(dbfs_dir);
+	dbfs_dir = NULL;
+}
+
+#endif /* CONFIG_DEBUG_FS */

diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h
new file mode 100644
index 000000000000..fff078a869e5
--- /dev/null
+++ b/drivers/vdpa/pds/debugfs.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VDPA_DEBUGFS_H_
+#define _PDS_VDPA_DEBUGFS_H_
+
+#include
+
+#ifdef CONFIG_DEBUG_FS
+
+void pds_vdpa_debugfs_create(void);
+void pds_vdpa_debugfs_destroy(void);
+#else
+static inline void pds_vdpa_debugfs_create(void) { }
+static inline void pds_vdpa_debugfs_destroy(void) { }
+#endif
+
+#endif /* _PDS_VDPA_DEBUGFS_H_ */

diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h
new file mode 100644
index 000000000000..d3414536985d
--- /dev/null
+++ b/include/linux/pds/pds_vdpa.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Advanced Micro Devices, Inc */
+
+#ifndef _PDS_VDPA_H_
+#define _PDS_VDPA_H_
+
+#define PDS_DEV_TYPE_VDPA_STR "vDPA"
+#define PDS_VDPA_DEV_NAME PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR
+
+#endif /* _PDS_VDPA_H_ */
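The init/exit ordering in pds_vdpa_init() and pds_vdpa_cleanup() above (create debugfs first, register the auxiliary driver, and tear debugfs back down if registration fails) can be sketched in a small userspace model.  All names here with the `fake_` prefix are illustrative stand-ins, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Tracks whether the toy "debugfs" directory currently exists. */
static bool dbfs_dir_exists;

static void fake_debugfs_create(void)  { dbfs_dir_exists = true; }
static void fake_debugfs_destroy(void) { dbfs_dir_exists = false; }

/* Stand-ins for the two possible auxiliary_driver_register() outcomes. */
static int fake_register_ok(void)   { return 0; }
static int fake_register_fail(void) { return -1; }

/* Mirrors pds_vdpa_init(): debugfs is set up before registration and
 * unwound on registration failure, so no stale directory survives. */
static int fake_module_init(int (*register_fn)(void))
{
	int err;

	fake_debugfs_create();

	err = register_fn();
	if (err)
		fake_debugfs_destroy();	/* unwind, as the patch does */

	return err;
}
```

Running the model with a failing registration leaves no debugfs directory behind, while a successful one keeps it for the module's lifetime, matching the patch's error path.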
From patchwork Wed Mar 22 19:10:33 2023
X-Patchwork-Submitter: "Nelson, Shannon"
X-Patchwork-Id: 13184515
From: Shannon Nelson
Subject: [PATCH v3 virtio 3/8] pds_vdpa: get vdpa management info
Date: Wed, 22 Mar 2023 12:10:33 -0700
Message-ID: <20230322191038.44037-4-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org

Find the vDPA management information from the DSC in order to advertise
it to the vdpa subsystem.

Signed-off-by: Shannon Nelson
Acked-by: Jason Wang
---
 drivers/vdpa/pds/Makefile    |   3 +-
 drivers/vdpa/pds/aux_drv.c   |  17 ++++++
 drivers/vdpa/pds/aux_drv.h   |   7 +++
 drivers/vdpa/pds/debugfs.c   |   2 +
 drivers/vdpa/pds/vdpa_dev.c  | 114 +++++++++++++++++++++++++++++++++++
 drivers/vdpa/pds/vdpa_dev.h  |  15 +++++
 include/linux/pds/pds_vdpa.h |  90 +++++++++++++++++++++++++++
 7 files changed, 247 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/pds/vdpa_dev.c
 create mode 100644 drivers/vdpa/pds/vdpa_dev.h

diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile
index a9cd2f450ae1..13b50394ec64 100644
--- a/drivers/vdpa/pds/Makefile
+++ b/drivers/vdpa/pds/Makefile
@@ -3,6 +3,7 @@
 
 obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o
 
-pds_vdpa-y := aux_drv.o
+pds_vdpa-y := aux_drv.o \
+	      vdpa_dev.o
 
 pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o

diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c
index 39c03f067b77..881acd869a9d 100644
--- a/drivers/vdpa/pds/aux_drv.c
+++ b/drivers/vdpa/pds/aux_drv.c
@@ -3,6 +3,7 @@
 
 #include
 #include
+#include
 #include
 #include
@@ -12,6 +13,7 @@
 
 #include "aux_drv.h"
 #include "debugfs.h"
+#include "vdpa_dev.h"
 
 static const struct auxiliary_device_id pds_vdpa_id_table[] = {
 	{ .name = PDS_VDPA_DEV_NAME, },
@@ -25,15 +27,28 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
 	struct pds_auxiliary_dev *padev =
 		container_of(aux_dev, struct pds_auxiliary_dev, aux_dev);
 	struct pds_vdpa_aux *vdpa_aux;
+	int err;
 
 	vdpa_aux = kzalloc(sizeof(*vdpa_aux), GFP_KERNEL);
 	if (!vdpa_aux)
 		return -ENOMEM;
 
 	vdpa_aux->padev = padev;
+	vdpa_aux->vf_id = pci_iov_vf_id(padev->vf_pdev);
 	auxiliary_set_drvdata(aux_dev, vdpa_aux);
 
+	/* Get device ident info and set up the vdpa_mgmt_dev */
+	err = pds_vdpa_get_mgmt_info(vdpa_aux);
+ if (err) + goto err_free_mem; + return 0; + +err_free_mem: + kfree(vdpa_aux); + auxiliary_set_drvdata(aux_dev, NULL); + + return err; } static void pds_vdpa_remove(struct auxiliary_device *aux_dev) @@ -41,6 +56,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + pci_free_irq_vectors(vdpa_aux->padev->vf_pdev); + kfree(vdpa_aux); auxiliary_set_drvdata(aux_dev, NULL); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index 14e465944dfd..94ba7abcaa43 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -10,6 +10,13 @@ struct pds_vdpa_aux { struct pds_auxiliary_dev *padev; + struct vdpa_mgmt_dev vdpa_mdev; + + struct pds_vdpa_ident ident; + + int vf_id; struct dentry *dentry; + + int nintrs; }; #endif /* _AUX_DRV_H_ */ diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c index 12e844f96ccc..f4275fe667c3 100644 --- a/drivers/vdpa/pds/debugfs.c +++ b/drivers/vdpa/pds/debugfs.c @@ -2,10 +2,12 @@ /* Copyright(c) 2023 Advanced Micro Devices, Inc */ #include +#include #include #include #include +#include #include #include "aux_drv.h" diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c new file mode 100644 index 000000000000..6345b3fa2440 --- /dev/null +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -0,0 +1,114 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "vdpa_dev.h" +#include "aux_drv.h" + +static struct virtio_device_id pds_vdpa_id_table[] = { + {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID}, + {0}, +}; + +static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, + const struct vdpa_dev_set_config *add_config) +{ + return -EOPNOTSUPP; +} + +static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, + struct vdpa_device *vdpa_dev) +{ +} + 
+static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = { + .dev_add = pds_vdpa_dev_add, + .dev_del = pds_vdpa_dev_del +}; + +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux) +{ + struct pds_vdpa_ident_cmd ident_cmd = { + .opcode = PDS_VDPA_CMD_IDENT, + .vf_id = cpu_to_le16(vdpa_aux->vf_id), + }; + struct pds_vdpa_comp ident_comp = {0}; + struct vdpa_mgmt_dev *mgmt; + struct device *pf_dev; + struct pci_dev *pdev; + dma_addr_t ident_pa; + struct device *dev; + u16 max_vqs; + int err; + + dev = &vdpa_aux->padev->aux_dev.dev; + pdev = vdpa_aux->padev->vf_pdev; + mgmt = &vdpa_aux->vdpa_mdev; + + /* Get resource info through the PF's adminq. It is a block of info, + * so we need to map some memory for PF to make available to the + * firmware for writing the data. + */ + pf_dev = &vdpa_aux->padev->pf_pdev->dev; + ident_pa = dma_map_single(pf_dev, &vdpa_aux->ident, + sizeof(vdpa_aux->ident), DMA_FROM_DEVICE); + if (dma_mapping_error(pf_dev, ident_pa)) { + dev_err(dev, "Failed to map ident space\n"); + return -ENOMEM; + } + + ident_cmd.ident_pa = cpu_to_le64(ident_pa); + ident_cmd.len = cpu_to_le32(sizeof(vdpa_aux->ident)); + err = vdpa_aux->padev->ops->adminq_cmd(vdpa_aux->padev, + (union pds_core_adminq_cmd *)&ident_cmd, + sizeof(ident_cmd), + (union pds_core_adminq_comp *)&ident_comp, + 0); + dma_unmap_single(pf_dev, ident_pa, + sizeof(vdpa_aux->ident), DMA_FROM_DEVICE); + if (err) { + dev_err(dev, "Failed to ident hw, status %d: %pe\n", + ident_comp.status, ERR_PTR(err)); + return err; + } + + max_vqs = le16_to_cpu(vdpa_aux->ident.max_vqs); + mgmt->max_supported_vqs = min_t(u16, PDS_VDPA_MAX_QUEUES, max_vqs); + if (max_vqs > PDS_VDPA_MAX_QUEUES) + dev_info(dev, "FYI - Device supports more vqs (%d) than driver (%d)\n", + max_vqs, PDS_VDPA_MAX_QUEUES); + + mgmt->ops = &pds_vdpa_mgmt_dev_ops; + mgmt->id_table = pds_vdpa_id_table; + mgmt->device = dev; + mgmt->supported_features = le64_to_cpu(vdpa_aux->ident.hw_features); + mgmt->config_attr_mask = 
BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR); + mgmt->config_attr_mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP); + + /* Set up interrupts now that we know how many we might want; + * each vq gets one, then add another for a control queue if supported + */ + vdpa_aux->nintrs = mgmt->max_supported_vqs; + if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) + vdpa_aux->nintrs++; + + err = pci_alloc_irq_vectors(pdev, vdpa_aux->nintrs, vdpa_aux->nintrs, + PCI_IRQ_MSIX); + if (err < 0) { + dev_err(dev, "Couldn't get %d msix vectors: %pe\n", + vdpa_aux->nintrs, ERR_PTR(err)); + return err; + } + vdpa_aux->nintrs = err; + + return 0; +} diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h new file mode 100644 index 000000000000..97fab833a0aa --- /dev/null +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _VDPA_DEV_H_ +#define _VDPA_DEV_H_ + +#define PDS_VDPA_MAX_QUEUES 65 + +struct pds_vdpa_device { + struct vdpa_device vdpa_dev; + struct pds_vdpa_aux *vdpa_aux; +}; + +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); +#endif /* _VDPA_DEV_H_ */ diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h index d3414536985d..c1d6a3fe2d61 100644 --- a/include/linux/pds/pds_vdpa.h +++ b/include/linux/pds/pds_vdpa.h @@ -7,4 +7,94 @@ #define PDS_DEV_TYPE_VDPA_STR "vDPA" #define PDS_VDPA_DEV_NAME PDS_CORE_DRV_NAME "." 
PDS_DEV_TYPE_VDPA_STR +/* + * enum pds_vdpa_cmd_opcode - vDPA Device commands + */ +enum pds_vdpa_cmd_opcode { + PDS_VDPA_CMD_INIT = 48, + PDS_VDPA_CMD_IDENT = 49, + PDS_VDPA_CMD_RESET = 51, + PDS_VDPA_CMD_VQ_RESET = 52, + PDS_VDPA_CMD_VQ_INIT = 53, + PDS_VDPA_CMD_STATUS_UPDATE = 54, + PDS_VDPA_CMD_SET_FEATURES = 55, + PDS_VDPA_CMD_SET_ATTR = 56, + PDS_VDPA_CMD_VQ_SET_STATE = 57, + PDS_VDPA_CMD_VQ_GET_STATE = 58, +}; + +/** + * struct pds_vdpa_cmd - generic command + * @opcode: Opcode + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + */ +struct pds_vdpa_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; +}; + +/** + * struct pds_vdpa_comp - generic command completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word boundary padding + * @color: Color bit + */ +struct pds_vdpa_comp { + u8 status; + u8 rsvd[14]; + u8 color; +}; + +/** + * struct pds_vdpa_init_cmd - INIT command + * @opcode: Opcode PDS_VDPA_CMD_INIT + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + */ +struct pds_vdpa_init_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; +}; + +/** + * struct pds_vdpa_ident - vDPA identification data + * @hw_features: vDPA features supported by device + * @max_vqs: max queues available (2 queues for a single queuepair) + * @max_qlen: log(2) of maximum number of descriptors + * @min_qlen: log(2) of minimum number of descriptors + * + * This struct is used in a DMA block that is set up for the PDS_VDPA_CMD_IDENT + * transaction. Set up the DMA block and send the address in the IDENT cmd + * data, the DSC will write the ident information, then we can remove the DMA + * block after reading the answer. If the completion status is 0, then there + * is valid information, else there was an error and the data should be considered invalid. 
+ */ +struct pds_vdpa_ident { + __le64 hw_features; + __le16 max_vqs; + __le16 max_qlen; + __le16 min_qlen; +}; + +/** + * struct pds_vdpa_ident_cmd - IDENT command + * @opcode: Opcode PDS_VDPA_CMD_IDENT + * @rsvd: Word boundary padding + * @vf_id: VF id + * @len: length of ident info DMA space + * @ident_pa: address for DMA of ident info (struct pds_vdpa_ident) + * only used for this transaction, then forgotten by DSC + */ +struct pds_vdpa_ident_cmd { + u8 opcode; + u8 rsvd; + __le16 vf_id; + __le32 len; + __le64 ident_pa; +}; #endif /* _PDS_VDPA_H_ */ From patchwork Wed Mar 22 19:10:34 2023 X-Patchwork-Submitter: "Nelson, Shannon" X-Patchwork-Id: 13184513
From: Shannon Nelson Subject: [PATCH v3 virtio 4/8] pds_vdpa: virtio bar setup for vdpa Date: Wed, 22 Mar 2023 12:10:34 -0700 Message-ID: <20230322191038.44037-5-shannon.nelson@amd.com> In-Reply-To: <20230322191038.44037-1-shannon.nelson@amd.com> References: <20230322191038.44037-1-shannon.nelson@amd.com>
Prep and use the "modern" virtio bar utilities to get our virtio config space ready. Signed-off-by: Shannon Nelson --- drivers/vdpa/pds/aux_drv.c | 25 +++++++++++++++++++++++++ drivers/vdpa/pds/aux_drv.h | 3 +++ 2 files changed, 28 insertions(+) diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c index 881acd869a9d..8f3ae3326885 100644 --- a/drivers/vdpa/pds/aux_drv.c +++ b/drivers/vdpa/pds/aux_drv.c @@ -4,6 +4,7 @@ #include #include #include +#include #include #include @@ -20,12 +21,22 @@ static const struct auxiliary_device_id pds_vdpa_id_table[] = { {}, }; +static int pds_vdpa_device_id_check(struct pci_dev *pdev) +{ + if (pdev->device != PCI_DEVICE_ID_PENSANDO_VDPA_VF || + pdev->vendor != PCI_VENDOR_ID_PENSANDO) + return -ENODEV; + + return 0; +} + static int pds_vdpa_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *id) { struct pds_auxiliary_dev *padev = container_of(aux_dev, struct pds_auxiliary_dev, aux_dev); + struct device *dev = &aux_dev->dev; struct pds_vdpa_aux *vdpa_aux; int err; @@ -42,8 +53,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev, if (err) goto err_free_mem; + /* Find the virtio configuration */ + vdpa_aux->vd_mdev.pci_dev = padev->vf_pdev; + vdpa_aux->vd_mdev.device_id_check_override = pds_vdpa_device_id_check; + vdpa_aux->vd_mdev.dma_mask_override = DMA_BIT_MASK(PDS_CORE_ADDR_LEN); + err = vp_modern_probe(&vdpa_aux->vd_mdev); + if (err) { + dev_err(dev, "Unable to probe for virtio configuration: %pe\n", + ERR_PTR(err)); + goto err_free_mgmt_info; + } + return 0; +err_free_mgmt_info: + pci_free_irq_vectors(padev->vf_pdev); err_free_mem: kfree(vdpa_aux); auxiliary_set_drvdata(aux_dev, NULL); @@ -56,6 +80,7 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = 
&aux_dev->dev; + vp_modern_remove(&vdpa_aux->vd_mdev); pci_free_irq_vectors(vdpa_aux->padev->vf_pdev); kfree(vdpa_aux); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index 94ba7abcaa43..8f5140401573 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -4,6 +4,8 @@ #ifndef _AUX_DRV_H_ #define _AUX_DRV_H_ +#include + #define PDS_VDPA_DRV_DESCRIPTION "AMD/Pensando vDPA VF Device Driver" #define PDS_VDPA_DRV_NAME "pds_vdpa" @@ -16,6 +18,7 @@ struct pds_vdpa_aux { int vf_id; struct dentry *dentry; + struct virtio_pci_modern_device vd_mdev; int nintrs; }; From patchwork Wed Mar 22 19:10:35 2023 X-Patchwork-Submitter: "Nelson, Shannon" X-Patchwork-Id: 13184514
From: Shannon Nelson Subject: [PATCH v3 virtio 5/8] pds_vdpa: add vdpa config client commands Date: Wed, 22 Mar 2023 12:10:35 -0700 Message-ID: <20230322191038.44037-6-shannon.nelson@amd.com> In-Reply-To: <20230322191038.44037-1-shannon.nelson@amd.com> References: <20230322191038.44037-1-shannon.nelson@amd.com>
These are the adminq commands that will be needed for setting up and using the vDPA device. There are a number of commands defined in the FW's API, but by making use of the FW's virtio BAR we only need a few of these commands for vDPA support. Signed-off-by: Shannon Nelson Acked-by: Jason Wang --- drivers/vdpa/pds/Makefile | 1 + drivers/vdpa/pds/cmds.c | 178 +++++++++++++++++++++++++++++++++++ drivers/vdpa/pds/cmds.h | 16 ++++ drivers/vdpa/pds/vdpa_dev.h | 33 ++++++- include/linux/pds/pds_vdpa.h | 175 ++++++++++++++++++++++++++++++++++ 5 files changed, 402 insertions(+), 1 deletion(-) create mode 100644 drivers/vdpa/pds/cmds.c create mode 100644 drivers/vdpa/pds/cmds.h diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile index 13b50394ec64..2e22418e3ab3 100644 --- a/drivers/vdpa/pds/Makefile +++ b/drivers/vdpa/pds/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o pds_vdpa-y := aux_drv.o \ + cmds.o \ vdpa_dev.o pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c new file mode 100644 index 000000000000..b847d708e4cc --- /dev/null +++ b/drivers/vdpa/pds/cmds.c @@ -0,0 +1,178 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include +#include + +#include +#include +#include +#include +#include + +#include "vdpa_dev.h" +#include "aux_drv.h" +#include "cmds.h" + +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_init_cmd init_cmd = { + .opcode = PDS_VDPA_CMD_INIT, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + }; + struct pds_vdpa_comp init_comp = {0}; + int err; + + /* Initialize the vdpa/virtio device */ + err = padev->ops->adminq_cmd(padev, + 
(union pds_core_adminq_cmd *)&init_cmd, + sizeof(init_cmd), + (union pds_core_adminq_comp *)&init_comp, + 0); + if (err) + dev_dbg(dev, "Failed to init hw, status %d: %pe\n", + init_comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_cmd cmd = { + .opcode = PDS_VDPA_CMD_RESET, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_dbg(dev, "Failed to reset hw, status %d: %pe\n", + comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_setattr_cmd cmd = { + .opcode = PDS_VDPA_CMD_SET_ATTR, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .attr = PDS_VDPA_ATTR_MAC, + }; + struct pds_vdpa_comp comp = {0}; + int err; + + ether_addr_copy(cmd.mac, mac); + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_dbg(dev, "Failed to set mac address %pM, status %d: %pe\n", + mac, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_setattr_cmd cmd = { + .opcode = PDS_VDPA_CMD_SET_ATTR, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS, + .max_vq_pairs = cpu_to_le16(max_vqp), + }; + struct pds_vdpa_comp comp = 
{0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_dbg(dev, "Failed to set max vq pairs %u, status %d: %pe\n", + max_vqp, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid, + struct pds_vdpa_vq_info *vq_info) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_init_comp comp = {0}; + struct pds_vdpa_vq_init_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_INIT, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + .len = cpu_to_le16(ilog2(vq_info->q_len)), + .desc_addr = cpu_to_le64(vq_info->desc_addr), + .avail_addr = cpu_to_le64(vq_info->avail_addr), + .used_addr = cpu_to_le64(vq_info->used_addr), + .intr_index = cpu_to_le16(qid), + }; + int err; + + dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n", + __func__, qid, ilog2(vq_info->q_len), + vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr); + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) { + dev_dbg(dev, "Failed to init vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + return err; + } + + return 0; +} + +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_reset_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_RESET, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + 
dev_dbg(dev, "Failed to reset vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h new file mode 100644 index 000000000000..72e19f4efde6 --- /dev/null +++ b/drivers/vdpa/pds/cmds.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _VDPA_CMDS_H_ +#define _VDPA_CMDS_H_ + +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv); + +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv); +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac); +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp); +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid, + struct pds_vdpa_vq_info *vq_info); +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid); +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features); +#endif /* _VDPA_CMDS_H_ */ diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h index 97fab833a0aa..a21596f438c1 100644 --- a/drivers/vdpa/pds/vdpa_dev.h +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -4,11 +4,42 @@ #ifndef _VDPA_DEV_H_ #define _VDPA_DEV_H_ -#define PDS_VDPA_MAX_QUEUES 65 +#include +#include + +struct pds_vdpa_vq_info { + bool ready; + u64 desc_addr; + u64 avail_addr; + u64 used_addr; + u32 q_len; + u16 qid; + int irq; + char irq_name[32]; + + void __iomem *notify; + dma_addr_t notify_pa; + + u64 doorbell; + u16 avail_idx; + u16 used_idx; + struct vdpa_callback event_cb; + struct pds_vdpa_device *pdsv; +}; + +#define PDS_VDPA_MAX_QUEUES 65 +#define PDS_VDPA_MAX_QLEN 32768 struct pds_vdpa_device { struct vdpa_device vdpa_dev; struct pds_vdpa_aux *vdpa_aux; + + struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES]; + u64 req_features; /* features requested by vdpa */ + u64 actual_features; /* features negotiated and in use */ + u8 vdpa_index; /* rsvd for future subdevice use */ + u8 num_vqs; /* num vqs in use */ + struct 
vdpa_callback config_cb; }; int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); diff --git a/include/linux/pds/pds_vdpa.h b/include/linux/pds/pds_vdpa.h index c1d6a3fe2d61..785909a6daf6 100644 --- a/include/linux/pds/pds_vdpa.h +++ b/include/linux/pds/pds_vdpa.h @@ -97,4 +97,179 @@ struct pds_vdpa_ident_cmd { __le32 len; __le64 ident_pa; }; + +/** + * struct pds_vdpa_status_cmd - STATUS_UPDATE command + * @opcode: Opcode PDS_VDPA_CMD_STATUS_UPDATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @status: new status bits + */ +struct pds_vdpa_status_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + u8 status; +}; + +/** + * enum pds_vdpa_attr - List of VDPA device attributes + * @PDS_VDPA_ATTR_MAC: MAC address + * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs + */ +enum pds_vdpa_attr { + PDS_VDPA_ATTR_MAC = 1, + PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2, +}; + +/** + * struct pds_vdpa_setattr_cmd - SET_ATTR command + * @opcode: Opcode PDS_VDPA_CMD_SET_ATTR + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @attr: attribute to be changed (enum pds_vdpa_attr) + * @pad: Word boundary padding + * @mac: new mac address to be assigned as vdpa device address + * @max_vq_pairs: new limit of virtqueue pairs + */ +struct pds_vdpa_setattr_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + u8 attr; + u8 pad[3]; + union { + u8 mac[6]; + __le16 max_vq_pairs; + } __packed; +}; + +/** + * struct pds_vdpa_vq_init_cmd - queue init command + * @opcode: Opcode PDS_VDPA_CMD_VQ_INIT + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq) + * @len: log(2) of max descriptor count + * @desc_addr: DMA address of descriptor area + * @avail_addr: DMA address of available descriptors (aka driver area) + * @used_addr: DMA address of used descriptors (aka device area) + * @intr_index: interrupt index + */ +struct pds_vdpa_vq_init_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 
qid; + __le16 len; + __le64 desc_addr; + __le64 avail_addr; + __le64 used_addr; + __le16 intr_index; +}; + +/** + * struct pds_vdpa_vq_init_comp - queue init completion + * @status: Status of the command (enum pds_core_status_code) + * @hw_qtype: HW queue type, used in doorbell selection + * @hw_qindex: HW queue index, used in doorbell selection + * @rsvd: Word boundary padding + * @color: Color bit + */ +struct pds_vdpa_vq_init_comp { + u8 status; + u8 hw_qtype; + __le16 hw_qindex; + u8 rsvd[11]; + u8 color; +}; + +/** + * struct pds_vdpa_vq_reset_cmd - queue reset command + * @opcode: Opcode PDS_VDPA_CMD_VQ_RESET + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + */ +struct pds_vdpa_vq_reset_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; +}; + +/** + * struct pds_vdpa_set_features_cmd - set hw features + * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @rsvd: Word boundary padding + * @features: Feature bit mask + */ +struct pds_vdpa_set_features_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le32 rsvd; + __le64 features; +}; + +/** + * struct pds_vdpa_vq_set_state_cmd - set vq state + * @opcode: Opcode PDS_VDPA_CMD_VQ_SET_STATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + * @avail: Device avail index. + * @used: Device used index. + * + * If the virtqueue uses packed descriptor format, then the avail and used + * index must have a wrap count. The bits should be arranged like the upper + * 16 bits in the device available notification data: 15 bit index, 1 bit wrap. 
+ */ +struct pds_vdpa_vq_set_state_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; + __le16 avail; + __le16 used; +}; + +/** + * struct pds_vdpa_vq_get_state_cmd - get vq state + * @opcode: Opcode PDS_VDPA_CMD_VQ_GET_STATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + */ +struct pds_vdpa_vq_get_state_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; +}; + +/** + * struct pds_vdpa_vq_get_state_comp - get vq state completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd0: Word boundary padding + * @avail: Device avail index. + * @used: Device used index. + * @rsvd: Word boundary padding + * @color: Color bit + * + * If the virtqueue uses packed descriptor format, then the avail and used + * index will have a wrap count. The bits will be arranged like the "next" + * part of device available notification data: 15 bit index, 1 bit wrap. + */ +struct pds_vdpa_vq_get_state_comp { + u8 status; + u8 rsvd0; + __le16 avail; + __le16 used; + u8 rsvd[9]; + u8 color; +}; + #endif /* _PDS_VDPA_H_ */ From patchwork Wed Mar 22 19:10:36 2023 X-Patchwork-Submitter: "Nelson, Shannon" X-Patchwork-Id: 13184517
From: Shannon Nelson To: , , , , , , , CC: Subject: [PATCH v3 virtio 6/8] pds_vdpa: add support for vdpa and vdpamgmt interfaces Date: Wed, 22 Mar 2023 12:10:36 -0700 Message-ID: <20230322191038.44037-7-shannon.nelson@amd.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230322191038.44037-1-shannon.nelson@amd.com> References: <20230322191038.44037-1-shannon.nelson@amd.com>
X-Mailing-List: netdev@vger.kernel.org This is the vDPA device support, where we advertise that we can support the virtio queues and deal with the configuration work through the pds_core's adminq. Signed-off-by: Shannon Nelson --- drivers/vdpa/pds/aux_drv.c | 15 + drivers/vdpa/pds/aux_drv.h | 1 + drivers/vdpa/pds/debugfs.c | 260 +++++++++++++++++ drivers/vdpa/pds/debugfs.h | 10 + drivers/vdpa/pds/vdpa_dev.c | 560 +++++++++++++++++++++++++++++++++++- 5 files changed, 845 insertions(+), 1 deletion(-) diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c index 8f3ae3326885..e54f0371c60e 100644 --- a/drivers/vdpa/pds/aux_drv.c +++ b/drivers/vdpa/pds/aux_drv.c @@ -64,8 +64,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev, goto err_free_mgmt_info; } + /* Let vdpa know that we can provide devices */ + err = vdpa_mgmtdev_register(&vdpa_aux->vdpa_mdev); + if (err) { + dev_err(dev, "%s: Failed to initialize vdpa_mgmt interface: %pe\n", + __func__, ERR_PTR(err)); + goto err_free_virtio; + } + + pds_vdpa_debugfs_add_pcidev(vdpa_aux); + pds_vdpa_debugfs_add_ident(vdpa_aux); + return 0; +err_free_virtio: + vp_modern_remove(&vdpa_aux->vd_mdev); err_free_mgmt_info: pci_free_irq_vectors(padev->vf_pdev); err_free_mem: @@ -80,9 +93,11 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + vdpa_mgmtdev_unregister(&vdpa_aux->vdpa_mdev); vp_modern_remove(&vdpa_aux->vd_mdev); pci_free_irq_vectors(vdpa_aux->padev->vf_pdev); + pds_vdpa_debugfs_del_vdpadev(vdpa_aux); kfree(vdpa_aux);
auxiliary_set_drvdata(aux_dev, NULL); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index 8f5140401573..1993a5e1806f 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -13,6 +13,7 @@ struct pds_vdpa_aux { struct pds_auxiliary_dev *padev; struct vdpa_mgmt_dev vdpa_mdev; + struct pds_vdpa_device *pdsv; struct pds_vdpa_ident ident; diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c index f4275fe667c3..96aa42fa5b3f 100644 --- a/drivers/vdpa/pds/debugfs.c +++ b/drivers/vdpa/pds/debugfs.c @@ -11,6 +11,7 @@ #include #include "aux_drv.h" +#include "vdpa_dev.h" #include "debugfs.h" #ifdef CONFIG_DEBUG_FS @@ -28,4 +29,263 @@ void pds_vdpa_debugfs_destroy(void) dbfs_dir = NULL; } +#define PRINT_SBIT_NAME(__seq, __f, __name) \ + do { \ + if ((__f) & (__name)) \ + seq_printf(__seq, " %s", &#__name[16]); \ + } while (0) + +static void print_status_bits(struct seq_file *seq, u8 status) +{ + seq_puts(seq, "status:"); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_ACKNOWLEDGE); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER_OK); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FEATURES_OK); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_NEEDS_RESET); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FAILED); + seq_puts(seq, "\n"); +} + +static void print_feature_bits_all(struct seq_file *seq, u64 features) +{ + int i; + + seq_puts(seq, "features:"); + + for (i = 0; i < (sizeof(u64) * 8); i++) { + u64 mask = BIT_ULL(i); + + switch (features & mask) { + case BIT_ULL(VIRTIO_NET_F_CSUM): + seq_puts(seq, " VIRTIO_NET_F_CSUM"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_CSUM): + seq_puts(seq, " VIRTIO_NET_F_GUEST_CSUM"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS): + seq_puts(seq, " VIRTIO_NET_F_CTRL_GUEST_OFFLOADS"); + break; + case BIT_ULL(VIRTIO_NET_F_MTU): + seq_puts(seq, " VIRTIO_NET_F_MTU"); + break; + case BIT_ULL(VIRTIO_NET_F_MAC): + 
seq_puts(seq, " VIRTIO_NET_F_MAC"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_TSO4): + seq_puts(seq, " VIRTIO_NET_F_GUEST_TSO4"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_TSO6): + seq_puts(seq, " VIRTIO_NET_F_GUEST_TSO6"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_ECN): + seq_puts(seq, " VIRTIO_NET_F_GUEST_ECN"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_UFO): + seq_puts(seq, " VIRTIO_NET_F_GUEST_UFO"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_TSO4): + seq_puts(seq, " VIRTIO_NET_F_HOST_TSO4"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_TSO6): + seq_puts(seq, " VIRTIO_NET_F_HOST_TSO6"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_ECN): + seq_puts(seq, " VIRTIO_NET_F_HOST_ECN"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_UFO): + seq_puts(seq, " VIRTIO_NET_F_HOST_UFO"); + break; + case BIT_ULL(VIRTIO_NET_F_MRG_RXBUF): + seq_puts(seq, " VIRTIO_NET_F_MRG_RXBUF"); + break; + case BIT_ULL(VIRTIO_NET_F_STATUS): + seq_puts(seq, " VIRTIO_NET_F_STATUS"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_VQ): + seq_puts(seq, " VIRTIO_NET_F_CTRL_VQ"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_RX): + seq_puts(seq, " VIRTIO_NET_F_CTRL_RX"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_VLAN): + seq_puts(seq, " VIRTIO_NET_F_CTRL_VLAN"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_RX_EXTRA): + seq_puts(seq, " VIRTIO_NET_F_CTRL_RX_EXTRA"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_ANNOUNCE): + seq_puts(seq, " VIRTIO_NET_F_GUEST_ANNOUNCE"); + break; + case BIT_ULL(VIRTIO_NET_F_MQ): + seq_puts(seq, " VIRTIO_NET_F_MQ"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR): + seq_puts(seq, " VIRTIO_NET_F_CTRL_MAC_ADDR"); + break; + case BIT_ULL(VIRTIO_NET_F_HASH_REPORT): + seq_puts(seq, " VIRTIO_NET_F_HASH_REPORT"); + break; + case BIT_ULL(VIRTIO_NET_F_RSS): + seq_puts(seq, " VIRTIO_NET_F_RSS"); + break; + case BIT_ULL(VIRTIO_NET_F_RSC_EXT): + seq_puts(seq, " VIRTIO_NET_F_RSC_EXT"); + break; + case BIT_ULL(VIRTIO_NET_F_STANDBY): + seq_puts(seq, " VIRTIO_NET_F_STANDBY"); + break; + case 
BIT_ULL(VIRTIO_NET_F_SPEED_DUPLEX): + seq_puts(seq, " VIRTIO_NET_F_SPEED_DUPLEX"); + break; + case BIT_ULL(VIRTIO_F_NOTIFY_ON_EMPTY): + seq_puts(seq, " VIRTIO_F_NOTIFY_ON_EMPTY"); + break; + case BIT_ULL(VIRTIO_F_ANY_LAYOUT): + seq_puts(seq, " VIRTIO_F_ANY_LAYOUT"); + break; + case BIT_ULL(VIRTIO_F_VERSION_1): + seq_puts(seq, " VIRTIO_F_VERSION_1"); + break; + case BIT_ULL(VIRTIO_F_ACCESS_PLATFORM): + seq_puts(seq, " VIRTIO_F_ACCESS_PLATFORM"); + break; + case BIT_ULL(VIRTIO_F_RING_PACKED): + seq_puts(seq, " VIRTIO_F_RING_PACKED"); + break; + case BIT_ULL(VIRTIO_F_ORDER_PLATFORM): + seq_puts(seq, " VIRTIO_F_ORDER_PLATFORM"); + break; + case BIT_ULL(VIRTIO_F_SR_IOV): + seq_puts(seq, " VIRTIO_F_SR_IOV"); + break; + case 0: + break; + default: + seq_printf(seq, " bit_%d", i); + break; + } + } + + seq_puts(seq, "\n"); +} + +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) +{ + vdpa_aux->dentry = debugfs_create_dir(pci_name(vdpa_aux->padev->vf_pdev), dbfs_dir); +} + +static int identity_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_aux *vdpa_aux = seq->private; + struct vdpa_mgmt_dev *mgmt; + + seq_printf(seq, "aux_dev: %s\n", + dev_name(&vdpa_aux->padev->aux_dev.dev)); + + mgmt = &vdpa_aux->vdpa_mdev; + seq_printf(seq, "max_vqs: %d\n", mgmt->max_supported_vqs); + seq_printf(seq, "config_attr_mask: %#llx\n", mgmt->config_attr_mask); + seq_printf(seq, "supported_features: %#llx\n", mgmt->supported_features); + print_feature_bits_all(seq, mgmt->supported_features); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(identity); + +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) +{ + debugfs_create_file("identity", 0400, vdpa_aux->dentry, + vdpa_aux, &identity_fops); +} + +static int config_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_device *pdsv = seq->private; + struct virtio_net_config vc; + u8 status; + + memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device, + sizeof(struct virtio_net_config)); + + seq_printf(seq, "mac: %pM\n", 
vc.mac); + seq_printf(seq, "max_virtqueue_pairs: %d\n", + __virtio16_to_cpu(true, vc.max_virtqueue_pairs)); + seq_printf(seq, "mtu: %d\n", __virtio16_to_cpu(true, vc.mtu)); + seq_printf(seq, "speed: %d\n", le32_to_cpu(vc.speed)); + seq_printf(seq, "duplex: %d\n", vc.duplex); + seq_printf(seq, "rss_max_key_size: %d\n", vc.rss_max_key_size); + seq_printf(seq, "rss_max_indirection_table_length: %d\n", + le16_to_cpu(vc.rss_max_indirection_table_length)); + seq_printf(seq, "supported_hash_types: %#x\n", + le32_to_cpu(vc.supported_hash_types)); + seq_printf(seq, "vn_status: %#x\n", + __virtio16_to_cpu(true, vc.status)); + + status = vp_modern_get_status(&pdsv->vdpa_aux->vd_mdev); + seq_printf(seq, "dev_status: %#x\n", status); + print_status_bits(seq, status); + + seq_printf(seq, "req_features: %#llx\n", pdsv->req_features); + print_feature_bits_all(seq, pdsv->req_features); + seq_printf(seq, "actual_features: %#llx\n", pdsv->actual_features); + print_feature_bits_all(seq, pdsv->actual_features); + seq_printf(seq, "vdpa_index: %d\n", pdsv->vdpa_index); + seq_printf(seq, "num_vqs: %d\n", pdsv->num_vqs); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(config); + +static int vq_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_vq_info *vq = seq->private; + + seq_printf(seq, "ready: %d\n", vq->ready); + seq_printf(seq, "desc_addr: %#llx\n", vq->desc_addr); + seq_printf(seq, "avail_addr: %#llx\n", vq->avail_addr); + seq_printf(seq, "used_addr: %#llx\n", vq->used_addr); + seq_printf(seq, "q_len: %d\n", vq->q_len); + seq_printf(seq, "qid: %d\n", vq->qid); + + seq_printf(seq, "doorbell: %#llx\n", vq->doorbell); + seq_printf(seq, "avail_idx: %d\n", vq->avail_idx); + seq_printf(seq, "used_idx: %d\n", vq->used_idx); + seq_printf(seq, "irq: %d\n", vq->irq); + seq_printf(seq, "irq-name: %s\n", vq->irq_name); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(vq); + +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + int i; + + debugfs_create_file("config", 0400, 
vdpa_aux->dentry, vdpa_aux->pdsv, &config_fops); + + for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) { + char name[8]; + + snprintf(name, sizeof(name), "vq%02d", i); + debugfs_create_file(name, 0400, vdpa_aux->dentry, + &vdpa_aux->pdsv->vqs[i], &vq_fops); + } +} + +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + debugfs_remove_recursive(vdpa_aux->dentry); + vdpa_aux->dentry = NULL; +} + +void pds_vdpa_debugfs_reset_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + /* we don't keep track of the entries, so remove it all + * then rebuild the basics + */ + pds_vdpa_debugfs_del_vdpadev(vdpa_aux); + pds_vdpa_debugfs_add_pcidev(vdpa_aux); + pds_vdpa_debugfs_add_ident(vdpa_aux); +} #endif /* CONFIG_DEBUG_FS */ diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h index fff078a869e5..6545cefb16c2 100644 --- a/drivers/vdpa/pds/debugfs.h +++ b/drivers/vdpa/pds/debugfs.h @@ -10,9 +10,19 @@ void pds_vdpa_debugfs_create(void); void pds_vdpa_debugfs_destroy(void); +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_reset_vdpadev(struct pds_vdpa_aux *vdpa_aux); #else static inline void pds_vdpa_debugfs_create(void) { } static inline void pds_vdpa_debugfs_destroy(void) { } +static inline void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) { } +static inline void pds_vdpa_debugfs_reset_vdpadev(struct pds_vdpa_aux *vdpa_aux) { } #endif #endif /* _PDS_VDPA_DEBUGFS_H_ */ diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c index 
6345b3fa2440..6b6675ac4219 100644 --- a/drivers/vdpa/pds/vdpa_dev.c +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -4,6 +4,7 @@ #include #include #include +#include #include #include @@ -13,7 +14,433 @@ #include "vdpa_dev.h" #include "aux_drv.h" +#include "cmds.h" +#include "debugfs.h" +static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev) +{ + return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev); +} + +static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid, + u64 desc_addr, u64 driver_addr, u64 device_addr) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].desc_addr = desc_addr; + pdsv->vqs[qid].avail_addr = driver_addr; + pdsv->vqs[qid].used_addr = device_addr; + + return 0; +} + +static void pds_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].q_len = num; +} + +static void pds_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + iowrite16(qid, pdsv->vqs[qid].notify); +} + +static void pds_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid, + struct vdpa_callback *cb) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].event_cb = *cb; +} + +static irqreturn_t pds_vdpa_isr(int irq, void *data) +{ + struct pds_vdpa_vq_info *vq; + + vq = data; + if (vq->event_cb.callback) + vq->event_cb.callback(vq->event_cb.private); + + return IRQ_HANDLED; +} + +static void pds_vdpa_release_irq(struct pds_vdpa_device *pdsv, int qid) +{ + if (pdsv->vqs[qid].irq == VIRTIO_MSI_NO_VECTOR) + return; + + free_irq(pdsv->vqs[qid].irq, &pdsv->vqs[qid]); + pdsv->vqs[qid].irq = VIRTIO_MSI_NO_VECTOR; +} + +static void pds_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid, bool ready) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pci_dev *pdev = pdsv->vdpa_aux->padev->vf_pdev; + struct device *dev = 
&pdsv->vdpa_dev.dev; + int irq; + int err; + + dev_dbg(dev, "%s: qid %d ready %d => %d\n", + __func__, qid, pdsv->vqs[qid].ready, ready); + if (ready == pdsv->vqs[qid].ready) + return; + + if (ready) { + irq = pci_irq_vector(pdev, qid); + snprintf(pdsv->vqs[qid].irq_name, sizeof(pdsv->vqs[qid].irq_name), + "vdpa-%s-%d", dev_name(dev), qid); + + err = request_irq(irq, pds_vdpa_isr, 0, + pdsv->vqs[qid].irq_name, &pdsv->vqs[qid]); + if (err) { + dev_err(dev, "%s: no irq for qid %d: %pe\n", + __func__, qid, ERR_PTR(err)); + return; + } + pdsv->vqs[qid].irq = irq; + + /* Pass vq setup info to DSC using adminq to gather up and + * send all info at once so FW can do its full set up in + * one easy operation + */ + err = pds_vdpa_cmd_init_vq(pdsv, qid, &pdsv->vqs[qid]); + if (err) { + dev_err(dev, "Failed to init vq %d: %pe\n", + qid, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, qid); + ready = false; + } + } else { + err = pds_vdpa_cmd_reset_vq(pdsv, qid); + if (err) + dev_err(dev, "%s: reset_vq failed qid %d: %pe\n", + __func__, qid, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, qid); + } + + pdsv->vqs[qid].ready = ready; +} + +static bool pds_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->vqs[qid].ready; +} + +static int pds_vdpa_set_vq_state(struct vdpa_device *vdpa_dev, u16 qid, + const struct vdpa_vq_state *state) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_set_state_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_SET_STATE, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + }; + struct pds_vdpa_comp comp = {0}; + int err; + + dev_dbg(dev, "%s: qid %d avail %#x\n", + __func__, qid, state->packed.last_avail_idx); + + /* VIRTIO_F_RING_PACKED is a bit number, not a mask */ + if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { + cmd.avail =
cpu_to_le16(state->packed.last_avail_idx | + (state->packed.last_avail_counter << 15)); + cmd.used = cpu_to_le16(state->packed.last_used_idx | + (state->packed.last_used_counter << 15)); + } else { + cmd.avail = cpu_to_le16(state->split.avail_index); + /* state->split does not provide a used_index: + * the vq will be set to "empty" here, and the vq will read + * the current used index the next time the vq is kicked. + */ + cmd.used = cpu_to_le16(state->split.avail_index); + } + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) + dev_err(dev, "Failed to set vq state qid %u, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} + +static int pds_vdpa_get_vq_state(struct vdpa_device *vdpa_dev, u16 qid, + struct vdpa_vq_state *state) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + struct pds_vdpa_vq_get_state_cmd cmd = { + .opcode = PDS_VDPA_CMD_VQ_GET_STATE, + .vdpa_index = pdsv->vdpa_index, + .vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .qid = cpu_to_le16(qid), + }; + struct pds_vdpa_vq_get_state_comp comp = {0}; + int err; + + dev_dbg(dev, "%s: qid %d\n", __func__, qid); + + err = padev->ops->adminq_cmd(padev, + (union pds_core_adminq_cmd *)&cmd, + sizeof(cmd), + (union pds_core_adminq_comp *)&comp, + 0); + if (err) { + dev_err(dev, "Failed to get vq state qid %u, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + return err; + } + + /* VIRTIO_F_RING_PACKED is a bit number, not a mask */ + if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { + state->packed.last_avail_idx = le16_to_cpu(comp.avail) & 0x7fff; + state->packed.last_avail_counter = le16_to_cpu(comp.avail) >> 15; + } else { + state->split.avail_index = le16_to_cpu(comp.avail); + /* state->split does not provide a used_index.
*/ + } + + return err; +} + +static struct vdpa_notification_area +pds_vdpa_get_vq_notification(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct virtio_pci_modern_device *vd_mdev; + struct vdpa_notification_area area; + + area.addr = pdsv->vqs[qid].notify_pa; + + vd_mdev = &pdsv->vdpa_aux->vd_mdev; + if (!vd_mdev->notify_offset_multiplier) + area.size = PDS_PAGE_SIZE; + else + area.size = vd_mdev->notify_offset_multiplier; + + return area; +} + +static int pds_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->vqs[qid].irq; +} + +static u32 pds_vdpa_get_vq_align(struct vdpa_device *vdpa_dev) +{ + return PDS_PAGE_SIZE; +} + +static u32 pds_vdpa_get_vq_group(struct vdpa_device *vdpa_dev, u16 idx) +{ + return 0; +} + +static u64 pds_vdpa_get_device_features(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return le64_to_cpu(pdsv->vdpa_aux->ident.hw_features); +} + +static int pds_vdpa_set_driver_features(struct vdpa_device *vdpa_dev, u64 features) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct device *dev = &pdsv->vdpa_dev.dev; + u64 nego_features; + u64 missing; + + if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)) && features) { + dev_err(dev, "VIRTIO_F_ACCESS_PLATFORM is not negotiated\n"); + return -EOPNOTSUPP; + } + + pdsv->req_features = features; + + /* Check for valid feature bits */ + nego_features = features & le64_to_cpu(pdsv->vdpa_aux->ident.hw_features); + missing = pdsv->req_features & ~nego_features; + if (missing) { + dev_err(dev, "Can't support all requested features in %#llx, missing %#llx features\n", + pdsv->req_features, missing); + return -EOPNOTSUPP; + } + + dev_dbg(dev, "%s: %#llx => %#llx\n", + __func__, pdsv->actual_features, nego_features); + + if (pdsv->actual_features == nego_features) + return 0; + + 
vp_modern_set_features(&pdsv->vdpa_aux->vd_mdev, nego_features); + pdsv->actual_features = nego_features; + + return 0; +} + +static u64 pds_vdpa_get_driver_features(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->actual_features; +} + +static void pds_vdpa_set_config_cb(struct vdpa_device *vdpa_dev, + struct vdpa_callback *cb) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->config_cb.callback = cb->callback; + pdsv->config_cb.private = cb->private; +} + +static u16 pds_vdpa_get_vq_num_max(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + /* qemu has assert() that vq_num_max <= VIRTQUEUE_MAX_SIZE (1024) */ + return min_t(u16, 1024, BIT(le16_to_cpu(pdsv->vdpa_aux->ident.max_qlen))); +} + +static u32 pds_vdpa_get_device_id(struct vdpa_device *vdpa_dev) +{ + return VIRTIO_ID_NET; +} + +static u32 pds_vdpa_get_vendor_id(struct vdpa_device *vdpa_dev) +{ + return PCI_VENDOR_ID_PENSANDO; +} + +static u8 pds_vdpa_get_status(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return vp_modern_get_status(&pdsv->vdpa_aux->vd_mdev); +} + +static void pds_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + vp_modern_set_status(&pdsv->vdpa_aux->vd_mdev, status); + + /* Note: still working with FW on the need for this reset cmd */ + if (status == 0) + pds_vdpa_cmd_reset(pdsv); +} + +static int pds_vdpa_reset(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct device *dev; + int err = 0; + u8 status; + int i; + + dev = &pdsv->vdpa_aux->padev->aux_dev.dev; + status = pds_vdpa_get_status(vdpa_dev); + + if (status == 0) + return 0; + + if (status & VIRTIO_CONFIG_S_DRIVER_OK) { + /* Reset the vqs */ + for (i = 0; i < pdsv->num_vqs && !err; i++) { + err = pds_vdpa_cmd_reset_vq(pdsv, i); + if (err) + 
dev_err(dev, "%s: reset_vq failed qid %d: %pe\n", + __func__, i, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, i); + memset(&pdsv->vqs[i], 0, sizeof(pdsv->vqs[0])); + pdsv->vqs[i].ready = false; + } + } + + pds_vdpa_set_status(vdpa_dev, 0); + + return 0; +} + +static size_t pds_vdpa_get_config_size(struct vdpa_device *vdpa_dev) +{ + return sizeof(struct virtio_net_config); +} + +static void pds_vdpa_get_config(struct vdpa_device *vdpa_dev, + unsigned int offset, + void *buf, unsigned int len) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + void __iomem *device; + + if (offset + len > sizeof(struct virtio_net_config)) { + WARN(true, "%s: bad read, offset %d len %d\n", __func__, offset, len); + return; + } + + device = pdsv->vdpa_aux->vd_mdev.device; + memcpy_fromio(buf, device + offset, len); +} + +static void pds_vdpa_set_config(struct vdpa_device *vdpa_dev, + unsigned int offset, const void *buf, + unsigned int len) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + void __iomem *device; + + if (offset + len > sizeof(struct virtio_net_config)) { + WARN(true, "%s: bad write, offset %d len %d\n", __func__, offset, len); + return; + } + + device = pdsv->vdpa_aux->vd_mdev.device; + memcpy_toio(device + offset, buf, len); +} + +static const struct vdpa_config_ops pds_vdpa_ops = { + .set_vq_address = pds_vdpa_set_vq_address, + .set_vq_num = pds_vdpa_set_vq_num, + .kick_vq = pds_vdpa_kick_vq, + .set_vq_cb = pds_vdpa_set_vq_cb, + .set_vq_ready = pds_vdpa_set_vq_ready, + .get_vq_ready = pds_vdpa_get_vq_ready, + .set_vq_state = pds_vdpa_set_vq_state, + .get_vq_state = pds_vdpa_get_vq_state, + .get_vq_notification = pds_vdpa_get_vq_notification, + .get_vq_irq = pds_vdpa_get_vq_irq, + .get_vq_align = pds_vdpa_get_vq_align, + .get_vq_group = pds_vdpa_get_vq_group, + + .get_device_features = pds_vdpa_get_device_features, + .set_driver_features = pds_vdpa_set_driver_features, + .get_driver_features = pds_vdpa_get_driver_features, + .set_config_cb =
pds_vdpa_set_config_cb, + .get_vq_num_max = pds_vdpa_get_vq_num_max, + .get_device_id = pds_vdpa_get_device_id, + .get_vendor_id = pds_vdpa_get_vendor_id, + .get_status = pds_vdpa_get_status, + .set_status = pds_vdpa_set_status, + .reset = pds_vdpa_reset, + .get_config_size = pds_vdpa_get_config_size, + .get_config = pds_vdpa_get_config, + .set_config = pds_vdpa_set_config, +}; static struct virtio_device_id pds_vdpa_id_table[] = { {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID}, {0}, @@ -22,12 +449,143 @@ static struct virtio_device_id pds_vdpa_id_table[] = { static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, const struct vdpa_dev_set_config *add_config) { - return -EOPNOTSUPP; + struct pds_vdpa_aux *vdpa_aux; + struct pds_vdpa_device *pdsv; + struct vdpa_mgmt_dev *mgmt; + u16 fw_max_vqs, vq_pairs; + struct device *dma_dev; + struct pci_dev *pdev; + struct device *dev; + u8 mac[ETH_ALEN]; + int err; + int i; + + vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev); + dev = &vdpa_aux->padev->aux_dev.dev; + mgmt = &vdpa_aux->vdpa_mdev; + + if (vdpa_aux->pdsv) { + dev_warn(dev, "Multiple vDPA devices on a VF is not supported.\n"); + return -EOPNOTSUPP; + } + + pdsv = vdpa_alloc_device(struct pds_vdpa_device, vdpa_dev, + dev, &pds_vdpa_ops, 1, 1, name, false); + if (IS_ERR(pdsv)) { + dev_err(dev, "Failed to allocate vDPA structure: %pe\n", pdsv); + return PTR_ERR(pdsv); + } + + vdpa_aux->pdsv = pdsv; + vdpa_aux->padev->priv = pdsv; + pdsv->vdpa_aux = vdpa_aux; + + pdev = vdpa_aux->padev->vf_pdev; + dma_dev = &pdev->dev; + pdsv->vdpa_dev.dma_dev = dma_dev; + + err = pds_vdpa_cmd_reset(pdsv); + if (err) { + dev_err(dev, "Failed to reset hw: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + + err = pds_vdpa_init_hw(pdsv); + if (err) { + dev_err(dev, "Failed to init hw: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + + fw_max_vqs = le16_to_cpu(pdsv->vdpa_aux->ident.max_vqs); + vq_pairs = fw_max_vqs / 2; + + /* Make sure we have the queues being requested */ 
+ if (add_config->mask & (1 << VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) + vq_pairs = add_config->net.max_vq_pairs; + + pdsv->num_vqs = 2 * vq_pairs; + if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) + pdsv->num_vqs++; + + if (pdsv->num_vqs > fw_max_vqs) { + dev_err(dev, "%s: queue count requested %u greater than max %u\n", + __func__, pdsv->num_vqs, fw_max_vqs); + err = -ENOSPC; + goto err_unmap; + } + + if (pdsv->num_vqs != fw_max_vqs) { + err = pds_vdpa_cmd_set_max_vq_pairs(pdsv, vq_pairs); + if (err) { + dev_err(dev, "Failed to set max_vq_pairs: %pe\n", + ERR_PTR(err)); + goto err_unmap; + } + } + + /* Set a mac, either from the user config if provided + * or set a random mac if default is 00:..:00 + */ + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR)) { + ether_addr_copy(mac, add_config->net.mac); + pds_vdpa_cmd_set_mac(pdsv, mac); + } else { + struct virtio_net_config __iomem *vc; + + vc = pdsv->vdpa_aux->vd_mdev.device; + memcpy_fromio(mac, vc->mac, sizeof(mac)); + if (is_zero_ether_addr(mac)) { + eth_random_addr(mac); + dev_info(dev, "setting random mac %pM\n", mac); + pds_vdpa_cmd_set_mac(pdsv, mac); + } + } + + for (i = 0; i < pdsv->num_vqs; i++) { + pdsv->vqs[i].qid = i; + pdsv->vqs[i].pdsv = pdsv; + pdsv->vqs[i].irq = VIRTIO_MSI_NO_VECTOR; + pdsv->vqs[i].notify = vp_modern_map_vq_notify(&pdsv->vdpa_aux->vd_mdev, + i, &pdsv->vqs[i].notify_pa); + } + + pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev; + + /* We use the _vdpa_register_device() call rather than the + * vdpa_register_device() to avoid a deadlock because our + * dev_add() is called with the vdpa_dev_lock already set + * by vdpa_nl_cmd_dev_add_set_doit() + */ + err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs); + if (err) { + dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + + pds_vdpa_debugfs_add_vdpadev(vdpa_aux); + + return 0; + +err_unmap: + put_device(&pdsv->vdpa_dev.dev); + vdpa_aux->pdsv = NULL; + return err; } static 
void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *vdpa_dev) { + struct pds_vdpa_aux *vdpa_aux; + + vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev); + _vdpa_unregister_device(vdpa_dev); + + pds_vdpa_cmd_reset(vdpa_aux->pdsv); + pds_vdpa_debugfs_reset_vdpadev(vdpa_aux); + + vdpa_aux->pdsv = NULL; + + dev_info(&vdpa_aux->padev->aux_dev.dev, "Removed vdpa device\n"); } static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = { From patchwork Wed Mar 22 19:10:37 2023 X-Patchwork-Submitter: "Nelson, Shannon" X-Patchwork-Id: 13184516
From: Shannon Nelson To: , , , , , , , CC: Subject: [PATCH v3 virtio 7/8] pds_vdpa: subscribe to the pds_core events Date: Wed, 22 Mar 2023 12:10:37 -0700 Message-ID: <20230322191038.44037-8-shannon.nelson@amd.com> In-Reply-To: <20230322191038.44037-1-shannon.nelson@amd.com> References: <20230322191038.44037-1-shannon.nelson@amd.com>
Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Register for the pds_core's notification events, primarily to find out when the FW has been reset so we can pass this on back up the chain. Signed-off-by: Shannon Nelson --- drivers/vdpa/pds/vdpa_dev.c | 68 ++++++++++++++++++++++++++++++++++++- drivers/vdpa/pds/vdpa_dev.h | 1 + 2 files changed, 68 insertions(+), 1 deletion(-) diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c index 6b6675ac4219..c0cd000bac06 100644 --- a/drivers/vdpa/pds/vdpa_dev.c +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -22,6 +22,61 @@ static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev) return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev); } +static int pds_vdpa_notify_handler(struct notifier_block *nb, + unsigned long ecode, + void *data) +{ + struct pds_vdpa_device *pdsv = container_of(nb, struct pds_vdpa_device, nb); + struct device *dev = &pdsv->vdpa_aux->padev->aux_dev.dev; + + dev_dbg(dev, "%s: event code %lu\n", __func__, ecode); + + /* Give the upper layers a hint that something interesting + * may have happened. It seems that the only thing this + * triggers in the virtio-net drivers above us is a check + * of link status. + * + * We don't set the NEEDS_RESET flag for EVENT_RESET + * because we're likely going through a recovery or + * fw_update and will be back up and running soon.
+ */ + if (ecode == PDS_EVENT_RESET || ecode == PDS_EVENT_LINK_CHANGE) { + if (pdsv->config_cb.callback) + pdsv->config_cb.callback(pdsv->config_cb.private); + } + + return 0; +} + +static int pds_vdpa_register_event_handler(struct pds_vdpa_device *pdsv) +{ + struct device *dev = &pdsv->vdpa_aux->padev->aux_dev.dev; + struct notifier_block *nb = &pdsv->nb; + int err; + + if (!nb->notifier_call) { + nb->notifier_call = pds_vdpa_notify_handler; + err = pdsc_register_notify(nb); + if (err) { + nb->notifier_call = NULL; + dev_err(dev, "failed to register pds event handler: %pe\n", + ERR_PTR(err)); + return -EINVAL; + } + dev_dbg(dev, "pds event handler registered\n"); + } + + return 0; +} + +static void pds_vdpa_unregister_event_handler(struct pds_vdpa_device *pdsv) +{ + if (pdsv->nb.notifier_call) { + pdsc_unregister_notify(&pdsv->nb); + pdsv->nb.notifier_call = NULL; + } +} + static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid, u64 desc_addr, u64 driver_addr, u64 device_addr) { @@ -551,6 +606,12 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev; + err = pds_vdpa_register_event_handler(pdsv); + if (err) { + dev_err(dev, "Failed to register for PDS events: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + /* We use the _vdpa_register_device() call rather than the * vdpa_register_device() to avoid a deadlock because our * dev_add() is called with the vdpa_dev_lock already set @@ -559,13 +620,15 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs); if (err) { dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err)); - goto err_unmap; + goto err_unevent; } pds_vdpa_debugfs_add_vdpadev(vdpa_aux); return 0; +err_unevent: + pds_vdpa_unregister_event_handler(pdsv); err_unmap: put_device(&pdsv->vdpa_dev.dev); vdpa_aux->pdsv = NULL; @@ -575,8 +638,11 @@ static int pds_vdpa_dev_add(struct
vdpa_mgmt_dev *mdev, const char *name, static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *vdpa_dev) { + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); struct pds_vdpa_aux *vdpa_aux; + pds_vdpa_unregister_event_handler(pdsv); + vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev); _vdpa_unregister_device(vdpa_dev); diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h index a21596f438c1..1650a2b08845 100644 --- a/drivers/vdpa/pds/vdpa_dev.h +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -40,6 +40,7 @@ struct pds_vdpa_device { u8 vdpa_index; /* rsvd for future subdevice use */ u8 num_vqs; /* num vqs in use */ struct vdpa_callback config_cb; + struct notifier_block nb; }; int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); From patchwork Wed Mar 22 19:10:38 2023 X-Patchwork-Submitter: "Nelson, Shannon" X-Patchwork-Id: 13184518 X-Patchwork-Delegate: kuba@kernel.org
From: Shannon Nelson To: , , , , , , , CC: Subject: [PATCH v3 virtio 8/8] pds_vdpa: pds_vdps.rst and Kconfig Date: Wed, 22 Mar 2023 12:10:38 -0700 Message-ID: <20230322191038.44037-9-shannon.nelson@amd.com> In-Reply-To: <20230322191038.44037-1-shannon.nelson@amd.com> References: <20230322191038.44037-1-shannon.nelson@amd.com>
Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Add the documentation and Kconfig entry for pds_vdpa driver. Signed-off-by: Shannon Nelson --- .../device_drivers/ethernet/amd/pds_vdpa.rst | 84 +++++++++++++++++++ .../device_drivers/ethernet/index.rst | 1 + MAINTAINERS | 4 + drivers/vdpa/Kconfig | 8 ++ 4 files changed, 97 insertions(+) create mode 100644 Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst diff --git a/Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst new file mode 100644 index 000000000000..d41f6dd66e3e --- /dev/null +++ b/Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst @@ -0,0 +1,84 @@ +.. SPDX-License-Identifier: GPL-2.0+ +.. note: can be edited and viewed with /usr/bin/formiko-vim + +========================================================== +PCI vDPA driver for the AMD/Pensando(R) DSC adapter family +========================================================== + +AMD/Pensando vDPA VF Device Driver +Copyright(c) 2023 Advanced Micro Devices, Inc + +Overview +======== + +The ``pds_vdpa`` driver is an auxiliary bus driver that supplies +a vDPA device for use by the virtio network stack. It is used with +the Pensando Virtual Function devices that offer vDPA and virtio queue +services. It depends on the ``pds_core`` driver and hardware for the PF +and VF PCI handling as well as for device configuration services. + +Using the device +================ + +The ``pds_vdpa`` device is enabled via multiple configuration steps and +depends on the ``pds_core`` driver to create and enable SR-IOV Virtual +Function devices. + +Shown below are the steps to bind the driver to a VF and also to the +associated auxiliary device created by the ``pds_core`` driver. + +..
code-block:: bash + + #!/bin/bash + + modprobe pds_core + modprobe vdpa + modprobe pds_vdpa + + PF_BDF=`grep -H "vDPA.*1" /sys/kernel/debug/pds_core/*/viftypes | head -1 | awk -F / '{print $6}'` + + # Enable vDPA VF auxiliary device(s) in the PF + devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime + + # Create a VF for vDPA use + echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs + + # Find the vDPA services/devices available + PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1` + + # Create a vDPA device for use in virtio network configurations + vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55 + + # Set up an ethernet interface on the vdpa device + modprobe virtio_vdpa + + + +Enabling the driver +=================== + +The driver is enabled via the standard kernel configuration system, +using the make command:: + + make oldconfig/menuconfig/etc. + +The driver is located in the menu structure at: + + -> Device Drivers + -> vDPA drivers + -> vDPA driver for AMD/Pensando DSC devices + +Support +======= + +For general Linux networking support, please use the netdev mailing +list, which is monitored by Pensando personnel:: + + netdev@vger.kernel.org + +For more specific support needs, please use the Pensando driver support +email:: + + drivers@pensando.io diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst index eaaf284e69e6..88dd38c7eb6d 100644 --- a/Documentation/networking/device_drivers/ethernet/index.rst +++ b/Documentation/networking/device_drivers/ethernet/index.rst @@ -14,6 +14,7 @@ Contents: 3com/vortex amazon/ena amd/pds_core + amd/pds_vdpa altera/altera_tse aquantia/atlantic chelsio/cxgb diff --git a/MAINTAINERS b/MAINTAINERS index 95b5f25a2c06..2af133861068 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -22108,6 +22108,10 @@ SNET DPU
VIRTIO DATA PATH ACCELERATOR R: Alvaro Karsz F: drivers/vdpa/solidrun/ +PDS DSC VIRTIO DATA PATH ACCELERATOR +R: Shannon Nelson +F: drivers/vdpa/pds/ + VIRTIO BALLOON M: "Michael S. Tsirkin" M: David Hildenbrand diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig index cd6ad92f3f05..c910cb119c1b 100644 --- a/drivers/vdpa/Kconfig +++ b/drivers/vdpa/Kconfig @@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA This driver includes a HW monitor device that reads health values from the DPU. +config PDS_VDPA + tristate "vDPA driver for AMD/Pensando DSC devices" + depends on PDS_CORE + help + VDPA network driver for AMD/Pensando's PDS Core devices. + With this driver, the VirtIO dataplane can be + offloaded to an AMD/Pensando DSC device. + endif # VDPA