From patchwork Fri Oct 25 15:01:51 2024
X-Patchwork-Submitter: Szymon Durawa
X-Patchwork-Id: 13850750
From: Szymon Durawa
To: helgaas@kernel.org
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci@vger.kernel.org, Nirmal Patel, Mariusz Tkaczyk
Subject: [RFC PATCH v1 1/3] PCI: vmd: Clean up vmd_enable_domain function
Date: Fri, 25 Oct 2024 17:01:51 +0200
Message-Id: <20241025150153.983306-2-szymon.durawa@linux.intel.com>
In-Reply-To: <20241025150153.983306-1-szymon.durawa@linux.intel.com>
References: <20241025150153.983306-1-szymon.durawa@linux.intel.com>

vmd_enable_domain() is too long and needs to be shortened to make it more readable. This clean-up is prework for enabling an additional VMD bus range. 
It doesn't change functional behavior of vmd_enable_domain(). Suggested-by: Nirmal Patel Reviewed-by: Mariusz Tkaczyk Signed-off-by: Szymon Durawa --- drivers/pci/controller/vmd.c | 262 +++++++++++++++++++++-------------- 1 file changed, 161 insertions(+), 101 deletions(-) mode change 100644 => 100755 drivers/pci/controller/vmd.c diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c old mode 100644 new mode 100755 index 264a180403a0..7cce7354b5c2 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -34,6 +34,18 @@ #define MB2_SHADOW_OFFSET 0x2000 #define MB2_SHADOW_SIZE 16 +enum vmd_resource { + VMD_RES_CFGBAR = 0, + VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */ + VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */ + VMD_RES_COUNT +}; + +enum vmd_rootbus { + VMD_BUS_0 = 0, + VMD_BUS_COUNT +}; + enum vmd_features { /* * Device may contain registers which hint the physical location of the @@ -132,10 +144,10 @@ struct vmd_dev { struct vmd_irq_list *irqs; struct pci_sysdata sysdata; - struct resource resources[3]; + struct resource resources[VMD_RES_COUNT]; struct irq_domain *irq_domain; - struct pci_bus *bus; - u8 busn_start; + struct pci_bus *bus[VMD_BUS_COUNT]; + u8 busn_start[VMD_BUS_COUNT]; u8 first_vec; char *name; int instance; @@ -367,7 +379,7 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd) static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, unsigned int devfn, int reg, int len) { - unsigned int busnr_ecam = bus->number - vmd->busn_start; + unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0]; u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg); if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR])) @@ -505,7 +517,7 @@ static inline void vmd_acpi_end(void) { } static void vmd_domain_reset(struct vmd_dev *vmd) { - u16 bus, max_buses = resource_size(&vmd->resources[0]); + u16 bus, max_buses = resource_size(&vmd->resources[VMD_RES_CFGBAR]); u8 dev, functions, fn, hdr_type; 
char __iomem *base; @@ -553,8 +565,8 @@ static void vmd_domain_reset(struct vmd_dev *vmd) static void vmd_attach_resources(struct vmd_dev *vmd) { - vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1]; - vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[2]; + vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[VMD_RES_MBAR_1]; + vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[VMD_RES_MBAR_2]; } static void vmd_detach_resources(struct vmd_dev *vmd) @@ -644,13 +656,13 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd) switch (BUS_RESTRICT_CFG(reg)) { case 0: - vmd->busn_start = 0; + vmd->busn_start[VMD_BUS_0] = 0; break; case 1: - vmd->busn_start = 128; + vmd->busn_start[VMD_BUS_0] = 128; break; case 2: - vmd->busn_start = 224; + vmd->busn_start[VMD_BUS_0] = 224; break; default: pci_err(dev, "Unknown Bus Offset Setting (%d)\n", @@ -767,17 +779,126 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata) return 0; } -static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) +static void vmd_configure_cfgbar(struct vmd_dev *vmd) { - struct pci_sysdata *sd = &vmd->sysdata; - struct resource *res; + struct resource *res = &vmd->dev->resource[VMD_CFGBAR]; + + vmd->resources[VMD_RES_CFGBAR] = (struct resource){ + .name = "VMD CFGBAR", + .start = vmd->busn_start[VMD_BUS_0], + .end = vmd->busn_start[VMD_BUS_0] + + (resource_size(res) >> 20) - 1, + .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, + }; +} + +/** + * vmd_configure_membar - Configure VMD MemBAR register, which points + * to MMIO address assigned by the OS or BIOS. 
+ * @vmd: the VMD device + * @resource_number: resource buffer number to be filled in + * @membar_number: number of the MemBAR + * @start_offset: 4K aligned offset applied to start of VMD’s MEMBAR MMIO space + * @end_offset: 4K aligned offset applied to end of VMD’s MEMBAR MMIO space + * @parent: resource assigned as a parent, may be NULL + * + * Function fills resource buffer inside the VMD structure. + */ +static void vmd_configure_membar(struct vmd_dev *vmd, + enum vmd_resource resource_number, + u8 membar_number, resource_size_t start_offset, + resource_size_t end_offset, + struct resource *parent) +{ + struct resource *res_parent; u32 upper_bits; unsigned long flags; + char name[16]; + + struct resource *res = &vmd->dev->resource[membar_number]; + + upper_bits = upper_32_bits(res->end); + flags = res->flags & ~IORESOURCE_SIZEALIGN; + if (!upper_bits) + flags &= ~IORESOURCE_MEM_64; + + snprintf(name, sizeof(name), "VMD MEMBAR%d", membar_number/2); + + res_parent = parent; + if (!res_parent) + res_parent = res; + + vmd->resources[resource_number] = (struct resource){ + .name = name, + .start = res->start + start_offset, + .end = res->end - end_offset, + .flags = flags, + .parent = res_parent, + }; +} + +static void vmd_configure_membar1_membar2(struct vmd_dev *vmd, + resource_size_t mbar2_ofs) +{ + vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0, NULL); + vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2, + mbar2_ofs, 0, NULL); +} + +static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features) +{ + struct pci_bus *child; + struct pci_dev *dev; + int ret; + + vmd_acpi_begin(); + + pci_scan_child_bus(bus); + vmd_domain_reset(vmd_from_bus(bus)); + + /* + * When Intel VMD is enabled, the OS does not discover the Root Ports + * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies + * a reset to the parent of the PCI device supplied as argument. 
This + * is why we pass a child device, so the reset can be triggered at + * the Intel bridge level and propagated to all the children in the + * hierarchy. + */ + list_for_each_entry(child, &bus->children, node) { + if (!list_empty(&child->devices)) { + dev = list_first_entry(&child->devices, struct pci_dev, + bus_list); + ret = pci_reset_bus(dev); + if (ret) + pci_warn(dev, "can't reset device: %d\n", ret); + + break; + } + } + + pci_assign_unassigned_bus_resources(bus); + + pci_walk_bus(bus, vmd_pm_enable_quirk, &features); + + /* + * VMD root buses are virtual and don't return true on pci_is_pcie() + * and will fail pcie_bus_configure_settings() early. It can instead be + * run on each of the real root ports. + */ + list_for_each_entry(child, &bus->children, node) + pcie_bus_configure_settings(child); + + pci_bus_add_devices(bus); + + vmd_acpi_end(); +} + +static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) +{ + struct pci_sysdata *sd = &vmd->sysdata; LIST_HEAD(resources); resource_size_t offset[2] = {0}; resource_size_t membar2_offset = 0x2000; - struct pci_bus *child; - struct pci_dev *dev; int ret; /* @@ -807,13 +928,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) return ret; } - res = &vmd->dev->resource[VMD_CFGBAR]; - vmd->resources[0] = (struct resource) { - .name = "VMD CFGBAR", - .start = vmd->busn_start, - .end = vmd->busn_start + (resource_size(res) >> 20) - 1, - .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, - }; + vmd_configure_cfgbar(vmd); /* * If the window is below 4GB, clear IORESOURCE_MEM_64 so we can @@ -827,36 +942,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) * * The only way we could use a 64-bit non-prefetchable MEMBAR is * if its address is <4GB so that we can convert it to a 32-bit - * resource. To be visible to the host OS, all VMD endpoints must + * resource. 
To be visible to the host OS, all VMD endpoints must * be initially configured by platform BIOS, which includes setting - * up these resources. We can assume the device is configured + * up these resources. We can assume the device is configured * according to the platform needs. */ - res = &vmd->dev->resource[VMD_MEMBAR1]; - upper_bits = upper_32_bits(res->end); - flags = res->flags & ~IORESOURCE_SIZEALIGN; - if (!upper_bits) - flags &= ~IORESOURCE_MEM_64; - vmd->resources[1] = (struct resource) { - .name = "VMD MEMBAR1", - .start = res->start, - .end = res->end, - .flags = flags, - .parent = res, - }; - - res = &vmd->dev->resource[VMD_MEMBAR2]; - upper_bits = upper_32_bits(res->end); - flags = res->flags & ~IORESOURCE_SIZEALIGN; - if (!upper_bits) - flags &= ~IORESOURCE_MEM_64; - vmd->resources[2] = (struct resource) { - .name = "VMD MEMBAR2", - .start = res->start + membar2_offset, - .end = res->end, - .flags = flags, - .parent = res, - }; + vmd_configure_membar1_membar2(vmd, membar2_offset); sd->vmd_dev = vmd->dev; sd->domain = vmd_find_free_domain(); @@ -892,70 +983,39 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) vmd_set_msi_remapping(vmd, false); } - pci_add_resource(&resources, &vmd->resources[0]); - pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]); - pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]); + pci_add_resource(&resources, &vmd->resources[VMD_RES_CFGBAR]); + pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_1], + offset[0]); + pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2], + offset[1]); - vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start, - &vmd_ops, sd, &resources); - if (!vmd->bus) { + vmd->bus[VMD_BUS_0] = pci_create_root_bus(&vmd->dev->dev, + vmd->busn_start[VMD_BUS_0], + &vmd_ops, sd, &resources); + if (!vmd->bus[VMD_BUS_0]) { pci_free_resource_list(&resources); vmd_remove_irq_domain(vmd); return -ENODEV; } - 
vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus), - to_pci_host_bridge(vmd->bus->bridge)); + vmd_copy_host_bridge_flags( + pci_find_host_bridge(vmd->dev->bus), + to_pci_host_bridge(vmd->bus[VMD_BUS_0]->bridge)); vmd_attach_resources(vmd); if (vmd->irq_domain) - dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain); + dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev, + vmd->irq_domain); else - dev_set_msi_domain(&vmd->bus->dev, + dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev, dev_get_msi_domain(&vmd->dev->dev)); - WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj, - "domain"), "Can't create symlink to domain\n"); - - vmd_acpi_begin(); + WARN(sysfs_create_link(&vmd->dev->dev.kobj, + &vmd->bus[VMD_BUS_0]->dev.kobj, "domain"), + "Can't create symlink to domain\n"); - pci_scan_child_bus(vmd->bus); - vmd_domain_reset(vmd); + vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features); - /* When Intel VMD is enabled, the OS does not discover the Root Ports - * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies - * a reset to the parent of the PCI device supplied as argument. This - * is why we pass a child device, so the reset can be triggered at - * the Intel bridge level and propagated to all the children in the - * hierarchy. - */ - list_for_each_entry(child, &vmd->bus->children, node) { - if (!list_empty(&child->devices)) { - dev = list_first_entry(&child->devices, - struct pci_dev, bus_list); - ret = pci_reset_bus(dev); - if (ret) - pci_warn(dev, "can't reset device: %d\n", ret); - - break; - } - } - - pci_assign_unassigned_bus_resources(vmd->bus); - - pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features); - - /* - * VMD root buses are virtual and don't return true on pci_is_pcie() - * and will fail pcie_bus_configure_settings() early. It can instead be - * run on each of the real root ports. 
- */ - list_for_each_entry(child, &vmd->bus->children, node) - pcie_bus_configure_settings(child); - - pci_bus_add_devices(vmd->bus); - - vmd_acpi_end(); return 0; } @@ -1031,9 +1091,9 @@ static void vmd_remove(struct pci_dev *dev) { struct vmd_dev *vmd = pci_get_drvdata(dev); - pci_stop_root_bus(vmd->bus); + pci_stop_root_bus(vmd->bus[VMD_BUS_0]); sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); - pci_remove_root_bus(vmd->bus); + pci_remove_root_bus(vmd->bus[VMD_BUS_0]); vmd_cleanup_srcu(vmd); vmd_detach_resources(vmd); vmd_remove_irq_domain(vmd);

From patchwork Fri Oct 25 15:01:52 2024
X-Patchwork-Submitter: Szymon Durawa
X-Patchwork-Id: 13850751
From: Szymon Durawa
To: helgaas@kernel.org
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci@vger.kernel.org, Nirmal Patel, Mariusz Tkaczyk
Subject: [RFC PATCH v1 2/3] PCI: vmd: Add VMD PCH rootbus support
Date: Fri, 
25 Oct 2024 17:01:52 +0200
Message-Id: <20241025150153.983306-3-szymon.durawa@linux.intel.com>
In-Reply-To: <20241025150153.983306-1-szymon.durawa@linux.intel.com>
References: <20241025150153.983306-1-szymon.durawa@linux.intel.com>

Starting with Intel Arrow Lake, the VMD enhancement introduces a separate rootbus for the PCH. This means that all three MMIO BARs exposed by VMD are now shared between the CPU IOC and the PCH. Add PCH bus enumeration and MMIO management for devices with VMD enhancement support.

Suggested-by: Nirmal Patel Reviewed-by: Mariusz Tkaczyk Signed-off-by: Szymon Durawa --- drivers/pci/controller/vmd.c | 176 +++++++++++++++++++++++++++++++++-- 1 file changed, 167 insertions(+), 9 deletions(-) diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index 7cce7354b5c2..842b70a21325 100755 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -31,6 +31,15 @@ #define PCI_REG_VMLOCK 0x70 #define MB2_SHADOW_EN(vmlock) (vmlock & 0x2) +#define VMD_PRIMARY_PCH_BUS 0x80 +#define VMD_BUSRANGE0 0xC8 +#define VMD_BUSRANGE1 0xCC +#define VMD_MEMBAR1_OFFSET 0xD0 +#define VMD_MEMBAR2_OFFSET1 0xD8 +#define VMD_MEMBAR2_OFFSET2 0xDC +#define VMD_BUS_END(busr) ((busr >> 8) & 0xff) +#define VMD_BUS_START(busr) (busr & 0x00ff) + #define MB2_SHADOW_OFFSET 0x2000 #define MB2_SHADOW_SIZE 16 @@ -38,11 +47,15 @@ enum vmd_resource { VMD_RES_CFGBAR = 0, VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */ VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */ + VMD_RES_PCH_CFGBAR, + VMD_RES_PCH_MBAR_1, /*VMD Resource PCH MemBAR 1 */ + VMD_RES_PCH_MBAR_2, /*VMD Resource PCH MemBAR 2 */ VMD_RES_COUNT }; enum vmd_rootbus { VMD_BUS_0 = 0, + VMD_BUS_1, VMD_BUS_COUNT }; @@ -86,6 +99,12 @@ enum vmd_features { * proper power management of the SoC. 
*/ VMD_FEAT_BIOS_PM_QUIRK = (1 << 5), + + /* + * Starting from Intel Arrow Lake, VMD devices have their VMD rootports + * connected to CPU IOC and PCH rootbuses. + */ + VMD_FEAT_HAS_PCH_ROOTBUS = (1 << 6) }; #define VMD_BIOS_PM_QUIRK_LTR 0x1003 /* 3145728 ns */ @@ -93,7 +112,8 @@ enum vmd_features { #define VMD_FEATS_CLIENT (VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | \ VMD_FEAT_HAS_BUS_RESTRICTIONS | \ VMD_FEAT_OFFSET_FIRST_VECTOR | \ - VMD_FEAT_BIOS_PM_QUIRK) + VMD_FEAT_BIOS_PM_QUIRK | \ + VMD_FEAT_HAS_PCH_ROOTBUS) static DEFINE_IDA(vmd_instance_ida); @@ -376,6 +396,11 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd) } } +static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd) +{ + return vmd->busn_start[VMD_BUS_1] != 0; +} + static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, unsigned int devfn, int reg, int len) { @@ -521,6 +546,11 @@ static void vmd_domain_reset(struct vmd_dev *vmd) u8 dev, functions, fn, hdr_type; char __iomem *base; + if (vmd_has_pch_rootbus(vmd)) { + max_buses += resource_size(&vmd->resources[VMD_RES_PCH_CFGBAR]); + max_buses += 2; + } + for (bus = 0; bus < max_buses; bus++) { for (dev = 0; dev < 32; dev++) { base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus, @@ -645,7 +675,7 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint, return 0; } -static int vmd_get_bus_number_start(struct vmd_dev *vmd) +static int vmd_get_bus_number_start(struct vmd_dev *vmd, unsigned long features) { struct pci_dev *dev = vmd->dev; u16 reg; @@ -664,6 +694,18 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd) case 2: vmd->busn_start[VMD_BUS_0] = 224; break; + case 3: + if (!(features & VMD_FEAT_HAS_PCH_ROOTBUS)) { + pci_err(dev, "VMD Bus Restriction detected type %d, but PCH Rootbus is not supported, aborting.\n", + BUS_RESTRICT_CFG(reg)); + return -ENODEV; + } + + /* IOC start bus */ + vmd->busn_start[VMD_BUS_0] = 224; + /* PCH start bus */ + vmd->busn_start[VMD_BUS_1] = 225; + break; default: pci_err(dev, "Unknown 
Bus Offset Setting (%d)\n", BUS_RESTRICT_CFG(reg)); @@ -790,6 +832,30 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd) (resource_size(res) >> 20) - 1, .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, }; + + if (vmd_has_pch_rootbus(vmd)) { + u16 ioc_range = 0; + u16 pch_range = 0; + + pci_read_config_word(vmd->dev, VMD_BUSRANGE0, &ioc_range); + pci_read_config_word(vmd->dev, VMD_BUSRANGE1, &pch_range); + + /* + * Resize CPU IOC CFGBAR range to make space for PCH owned + * devices by adjusting range end with value stored in + * VMD_BUSRANGE0 register. + */ + vmd->resources[VMD_RES_CFGBAR].start = VMD_BUS_START(ioc_range); + vmd->resources[VMD_RES_CFGBAR].end = VMD_BUS_END(ioc_range); + + vmd->resources[VMD_RES_PCH_CFGBAR] = (struct resource){ + .name = "VMD CFGBAR PCH", + .start = VMD_BUS_START(pch_range), + .end = VMD_BUS_END(pch_range), + .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, + .parent = &vmd->resources[VMD_RES_CFGBAR], + }; + } } /** @@ -822,7 +888,8 @@ static void vmd_configure_membar(struct vmd_dev *vmd, if (!upper_bits) flags &= ~IORESOURCE_MEM_64; - snprintf(name, sizeof(name), "VMD MEMBAR%d", membar_number/2); + snprintf(name, sizeof(name), "VMD MEMBAR%d %s", membar_number / 2, + resource_number > VMD_RES_MBAR_2 ? 
"PCH" : ""); res_parent = parent; if (!res_parent) @@ -840,9 +907,43 @@ static void vmd_configure_membar(struct vmd_dev *vmd, static void vmd_configure_membar1_membar2(struct vmd_dev *vmd, resource_size_t mbar2_ofs) { - vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0, NULL); - vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2, - mbar2_ofs, 0, NULL); + if (vmd_has_pch_rootbus(vmd)) { + u32 pch_mbar1_ofs = 0; + u64 pch_mbar2_ofs = 0; + u32 reg; + + pci_read_config_dword(vmd->dev, VMD_MEMBAR1_OFFSET, + &pch_mbar1_ofs); + + pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET1, ®); + pch_mbar2_ofs = reg; + + pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET2, ®); + pch_mbar2_ofs |= (u64)reg << 32; + + /* + * Resize CPU IOC MEMBAR1 and MEMBAR2 ranges to make space + * for PCH owned devices by adjusting range end with values + * stored in VMD_MEMBAR1_OFFSET and VMD_MEMBAR2_OFFSET registers + */ + vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, + pch_mbar1_ofs, NULL); + vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2, + mbar2_ofs, pch_mbar2_ofs - mbar2_ofs, + NULL); + + vmd_configure_membar(vmd, VMD_RES_PCH_MBAR_1, VMD_MEMBAR1, + pch_mbar1_ofs, 0, + &vmd->resources[VMD_RES_MBAR_1]); + vmd_configure_membar(vmd, VMD_RES_PCH_MBAR_2, VMD_MEMBAR2, + mbar2_ofs + pch_mbar2_ofs, 0, + &vmd->resources[VMD_RES_MBAR_2]); + } else { + vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0, + NULL); + vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2, + mbar2_ofs, 0, NULL); + } } static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features) @@ -854,7 +955,9 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features) vmd_acpi_begin(); pci_scan_child_bus(bus); - vmd_domain_reset(vmd_from_bus(bus)); + + if (bus->primary == 0) + vmd_domain_reset(vmd_from_bus(bus)); /* * When Intel VMD is enabled, the OS does not discover the Root Ports @@ -893,6 +996,47 @@ static void vmd_bus_enumeration(struct pci_bus *bus, 
unsigned long features) vmd_acpi_end(); } +static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd, + resource_size_t *offset) +{ + LIST_HEAD(resources_pch); + + pci_add_resource(&resources_pch, &vmd->resources[VMD_RES_PCH_CFGBAR]); + pci_add_resource_offset(&resources_pch, + &vmd->resources[VMD_RES_PCH_MBAR_1], offset[0]); + pci_add_resource_offset(&resources_pch, + &vmd->resources[VMD_RES_PCH_MBAR_2], offset[1]); + + vmd->bus[VMD_BUS_1] = pci_create_root_bus(&vmd->dev->dev, + vmd->busn_start[VMD_BUS_1], + &vmd_ops, sd, &resources_pch); + + if (!vmd->bus[VMD_BUS_1]) { + pci_free_resource_list(&resources_pch); + pci_stop_root_bus(vmd->bus[VMD_BUS_1]); + pci_remove_root_bus(vmd->bus[VMD_BUS_1]); + return -ENODEV; + } + + /* + * primary bus is not set by pci_create_root_bus(), it is updated here + */ + vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS; + + vmd_copy_host_bridge_flags( + pci_find_host_bridge(vmd->dev->bus), + to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge)); + + if (vmd->irq_domain) + dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev, + vmd->irq_domain); + else + dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev, + dev_get_msi_domain(&vmd->dev->dev)); + + return 0; +} + static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) { struct pci_sysdata *sd = &vmd->sysdata; @@ -923,7 +1067,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) * limits the bus range to between 0-127, 128-255, or 224-255 */ if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) { - ret = vmd_get_bus_number_start(vmd); + ret = vmd_get_bus_number_start(vmd, features); if (ret) return ret; } @@ -1016,6 +1160,16 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features); + if (vmd_has_pch_rootbus(vmd)) { + ret = vmd_create_pch_bus(vmd, sd, offset); + if (ret) { + pci_err(vmd->dev, "Can't create PCH bus: %d\n", ret); + return ret; + } + + 
vmd_bus_enumeration(vmd->bus[VMD_BUS_1], features); + } + return 0; } @@ -1094,6 +1248,10 @@ static void vmd_remove(struct pci_dev *dev) pci_stop_root_bus(vmd->bus[VMD_BUS_0]); sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); pci_remove_root_bus(vmd->bus[VMD_BUS_0]); + if (vmd_has_pch_rootbus(vmd)) { + pci_stop_root_bus(vmd->bus[VMD_BUS_1]); + pci_remove_root_bus(vmd->bus[VMD_BUS_1]); + } vmd_cleanup_srcu(vmd); vmd_detach_resources(vmd); vmd_remove_irq_domain(vmd); @@ -1179,4 +1337,4 @@ module_pci_driver(vmd_drv); MODULE_AUTHOR("Intel Corporation"); MODULE_DESCRIPTION("Volume Management Device driver"); MODULE_LICENSE("GPL v2"); -MODULE_VERSION("0.6"); +MODULE_VERSION("0.7");

From patchwork Fri Oct 25 15:01:53 2024
X-Patchwork-Submitter: Szymon Durawa
X-Patchwork-Id: 13850752
From: Szymon Durawa
To: helgaas@kernel.org
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, 
linux-pci@vger.kernel.org, Nirmal Patel, Mariusz Tkaczyk
Subject: [RFC PATCH v1 3/3] PCI: vmd: Add WA for VMD PCH rootbus support
Date: Fri, 25 Oct 2024 17:01:53 +0200
Message-Id: <20241025150153.983306-4-szymon.durawa@linux.intel.com>
In-Reply-To: <20241025150153.983306-1-szymon.durawa@linux.intel.com>
References: <20241025150153.983306-1-szymon.durawa@linux.intel.com>

The VMD PCH rootbus has primary bus number 0x80, but pci_scan_bridge_extend() cannot accept that, because a root bus primary is expected to be "hard-wired to 0", so it marks the setup as broken. To avoid this, the PCH bus number has to be set to the same value as the PCH primary number.

Suggested-by: Nirmal Patel Reviewed-by: Mariusz Tkaczyk Signed-off-by: Szymon Durawa --- drivers/pci/controller/vmd.c | 26 ++++++++++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-) diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c index 842b70a21325..bb47e0a76c89 100755 --- a/drivers/pci/controller/vmd.c +++ b/drivers/pci/controller/vmd.c @@ -404,8 +404,22 @@ static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd) static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, unsigned int devfn, int reg, int len) { - unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0]; - u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg); + unsigned char bus_number; + unsigned int busnr_ecam; + u32 offset; + + /* + * VMD WA: for PCH rootbus, bus number is set to VMD_PRIMARY_PCH_BUS + * (see comment in vmd_create_pch_bus()) but original value is 0xE1 + * which is stored in vmd->busn_start[VMD_BUS_1]. 
+ */ + if (vmd_has_pch_rootbus(vmd) && bus->number == VMD_PRIMARY_PCH_BUS) + bus_number = vmd->busn_start[VMD_BUS_1]; + else + bus_number = bus->number; + + busnr_ecam = bus_number - vmd->busn_start[VMD_BUS_0]; + offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg); if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR])) return NULL; @@ -1023,6 +1037,14 @@ static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd, */ vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS; + /* This is a workaround for pci_scan_bridge_extend() code. + * It assigns setup as broken when primary != bus->number and + * for PCH rootbus primary is not "hard-wired to 0". + * To avoid this, vmd->bus[VMD_BUS_1]->number and + * vmd->bus[VMD_BUS_1]->primary are updated to the same value. + */ + vmd->bus[VMD_BUS_1]->number = VMD_PRIMARY_PCH_BUS; + vmd_copy_host_bridge_flags( pci_find_host_bridge(vmd->dev->bus), to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge));
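The bus-number remap that patch 3 adds to vmd_cfg_addr() can be sketched as a small standalone program. This is not driver code: vmd_effective_bus() and vmd_ecam_offset() are hypothetical helper names invented for illustration, and only the arithmetic mirrors the patch. The ECAM packing follows the standard layout the kernel uses in PCIE_ECAM_OFFSET() (4 KiB of config space per function, devfn above it, bus above that).

```c
#include <stdint.h>

/* Constant as defined in the series. */
#define VMD_PRIMARY_PCH_BUS 0x80

/* Standard PCIe ECAM packing: reg | devfn << 12 | bus << 20. */
#define PCIE_ECAM_OFFSET(bus, devfn, reg) \
	(((uint32_t)(bus) << 20) | ((uint32_t)(devfn) << 12) | (uint32_t)(reg))

/*
 * Hypothetical stand-in for the WA in vmd_cfg_addr(): the PCH rootbus is
 * renumbered to VMD_PRIMARY_PCH_BUS (0x80) to satisfy
 * pci_scan_bridge_extend(), but ECAM decoding in CFGBAR still needs the
 * original bus number (0xE1 in the series), which the driver keeps in
 * vmd->busn_start[VMD_BUS_1].
 */
static uint8_t vmd_effective_bus(uint8_t bus_number, int has_pch_rootbus,
				 uint8_t pch_busn_start)
{
	if (has_pch_rootbus && bus_number == VMD_PRIMARY_PCH_BUS)
		return pch_busn_start;
	return bus_number;
}

/* ECAM offset relative to CFGBAR, mirroring vmd_cfg_addr()'s arithmetic. */
static uint32_t vmd_ecam_offset(uint8_t bus_number, uint8_t ioc_busn_start,
				int has_pch_rootbus, uint8_t pch_busn_start,
				unsigned int devfn, int reg)
{
	uint8_t bus = vmd_effective_bus(bus_number, has_pch_rootbus,
					pch_busn_start);

	/* Bus number is made relative to the start of the VMD bus range. */
	return PCIE_ECAM_OFFSET((uint8_t)(bus - ioc_busn_start), devfn, reg);
}
```

With BUS_RESTRICT_CFG type 3 from patch 2 (IOC start bus 224 = 0xE0, PCH start bus 225 = 0xE1), a config access on the renumbered bus 0x80 decodes as ECAM bus 1 inside CFGBAR, i.e. one 1 MiB bus stride above the IOC root bus.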