Message ID:   8D943B239464E04C9184454D8B3F5CFF0ABF18EC@ORSMSX101.amr.corp.intel.com (mailing list archive)
State:        New, archived
Delegated to: Bjorn Helgaas
On Fri, Jan 15, 2016 at 11:48:11AM -0800, Derrick, Jonathan wrote:
> Hi folks,
>
> The VMD driver can be built as a module, so the following symbols need
> to be exported:
> pci_msi_create_irq_domain
> msi_desc_to_pci_dev

Hi Jon,

Your tree is a little old. It's added in commit a4289dc2ec3a5:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=a4289dc2ec3a5821076a78ee9678909b4eff297e

But since you mention it, this is not in the pci repo yet either, but
it's a clean cherry-pick for us to test.

--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Fri, Jan 15, 2016 at 07:54:08PM +0000, Keith Busch wrote:
> On Fri, Jan 15, 2016 at 11:48:11AM -0800, Derrick, Jonathan wrote:
> > Hi folks,
> >
> > The VMD driver can be built as a module, so the following symbols need
> > to be exported:
> > pci_msi_create_irq_domain
> > msi_desc_to_pci_dev
>
> Hi Jon,
>
> Your tree is a little old. It's added in commit a4289dc2ec3a5:
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=a4289dc2ec3a5821076a78ee9678909b4eff297e
>
> But since you mention it, this is not in the pci repo yet either, but
> it's a clean cherry-pick for us to test.

Aha, good catch
On Fri, Jan 15, 2016 at 07:48:11PM +0000, Derrick, Jonathan wrote:
> Additionally, what are your thoughts of moving the menuconfig option
> from the root-tier to 'Processor Type and Features'?
I don't know how Intel is going to market this feature, but from a
developer point of view, the code looks more like the PCI host bridge
drivers in drivers/pci/host than it does like a processor feature.
Most of those drivers are specific to a particular architecture or
SoC.
Bjorn
On Fri, Jan 15, 2016 at 04:06:31PM -0600, Bjorn Helgaas wrote:
> On Fri, Jan 15, 2016 at 07:48:11PM +0000, Derrick, Jonathan wrote:
> > Additionally, what are your thoughts of moving the menuconfig option
> > from the root-tier to 'Processor Type and Features'?
>
> I don't know how Intel is going to market this feature, but from a
> developer point of view, the code looks more like the PCI host bridge
> drivers in drivers/pci/host than it does like a processor feature.
> Most of those drivers are specific to a particular architecture or
> SoC.

We had tighter dependencies on x86 in earlier revisions of this driver.
It probably now looks more sensible to be in drivers/pci/host instead of
arch specific.

Do you want us to make this change and resend the series? Or can we
provide patches for in-tree development? We'll also add the requested
code comments to explain more about the device.

BTW, we completed tests with your pci/host-vmd branch, and that was
successful.
As this seems to require special drivers to bind to it, and Intel people
refuse to even publicly tell what the code does, I'd like to NAK this
code until we get an explanation and use cases for it.
On Tue, Jan 19, 2016 at 08:02:20AM -0800, Christoph Hellwig wrote:
> As this seems to require special drivers to bind to it, and Intel
> people refuse to even publicly tell what the code does I'd like
> to NAK this code until we get an explanation and use cases for it.

We haven't opened the h/w specification, but we've been pretty open with
what it provides, how the code works, and our intended use case. The
device provides additional PCI domains for people who need more than the
256 busses a single domain provides.

What information may I provide to satisfy your use case concerns? Are
you wanting to know what devices we have in mind that require additional
domains?
On Tue, Jan 19, 2016 at 04:36:36PM +0000, Keith Busch wrote:
> On Tue, Jan 19, 2016 at 08:02:20AM -0800, Christoph Hellwig wrote:
> > As this seems to require special drivers to bind to it, and Intel
> > people refuse to even publicly tell what the code does I'd like
> > to NAK this code until we get an explanation and use cases for it.
>
> We haven't opened the h/w specification, but we've been pretty open with
> what it provides, how the code works, and our intended use case. The
> device provides additional pci domains for people who need more than
> the 256 busses a single domain provides.
>
> What information may I provide to satisfy your use case concerns? Are
> you wanting to know what devices we have in mind that require additional
> domains?

VMD is simply a convenient way to create a new PCIe host bridge that
happens to sit on the existing PCIe root bus. It changes how I/O is
routed (i.e. BDF translation), but not its contents.

We've actually gone through some effort in the code to *avoid* special
drivers by implementing the existing host bridge abstractions. The cases
where existing drivers wouldn't work are due to limitations, not
arbitrary filters. (For example, it doesn't know how to route legacy IO
ports or INTx.)
Hi Christoph,

On Tue, Jan 19, 2016 at 08:02:20AM -0800, Christoph Hellwig wrote:
> As this seems to require special drivers to bind to it, and Intel
> people refuse to even publicly tell what the code does I'd like
> to NAK this code until we get an explanation and use cases for it.

I saw responses from Keith and Bryan, and I hope they answer your
questions. As far as I can tell, the VMD driver is grossly similar to
other host bridge drivers we've already merged, and I don't think we
have public specs for all of them.

Unless you have further concerns, I'm going to ask Linus to pull this
tomorrow, along with the rest of the PCI changes for v4.5.

Bjorn
On Wed, Jan 20, 2016 at 02:43:08PM -0600, Bjorn Helgaas wrote:
> I saw responses from Keith and Bryan, and I hope they answer your
> questions. As far as I can tell, the VMD driver is grossly similar to
> other host bridge drivers we've already merged, and I don't think we
> have public specs for all of them.
>
> Unless you have further concerns, I'm going to ask Linus to pull this
> tomorrow, along with the rest of the PCI changes for v4.5.

I still think it's a bad idea to merge something odd like this without a
good explanation or showing what devices can actually sit under it.

But you're the maintainer in the end..
On Tue, Jan 26, 2016 at 08:46:09AM -0800, Christoph Hellwig wrote:
> On Wed, Jan 20, 2016 at 02:43:08PM -0600, Bjorn Helgaas wrote:
> > I saw responses from Keith and Bryan, and I hope they answer your
> > questions. As far as I can tell, the VMD driver is grossly similar to
> > other host bridge drivers we've already merged, and I don't think we
> > have public specs for all of them.
> >
> > Unless you have further concerns, I'm going to ask Linus to pull this
> > tomorrow, along with the rest of the PCI changes for v4.5.
>
> I still think it's a bad idea to merge something odd like this without
> a good explanation or showing what devices can actually sit under it.
>
> But you're the maintainer in the end..

Any PCIe devices and bridges should work with existing upstream drivers.
The only exceptions would be anything dependent on INTx or IO ports.
diff --git a/MAINTAINERS b/MAINTAINERS
index 76369d4..ce47e08 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8216,7 +8216,7 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/pci/host-generic-pci.txt
 F:	drivers/pci/host/pci-host-generic.c
 
-PCI DRIVER FOR VMD
+PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
 M:	Keith Busch <keith.busch@intel.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9a5ab69..3e6aca8 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2670,13 +2670,13 @@ config VMD
 	tristate "Volume Management Device Driver"
 	default N
 	---help---
-	  Adds support for the Intel Volume Manage Device (VMD). VMD is a
+	  Adds support for the Intel Volume Management Device (VMD). VMD is a
 	  secondary PCI host bridge that allows PCI Express root ports,
 	  and devices attached to them, to be removed from the default
 	  PCI domain and placed within the VMD domain. This provides
-	  additional bus resources than are otherwise possible with a
+	  more bus resources than are otherwise possible with a
 	  single domain. If you know your system provides one of these and
-	  have devices attached to it, say Y; if you are not sure, say N.
+	  has devices attached to it, say Y; if you are not sure, say N.
 
 source "net/Kconfig"
diff --git a/arch/x86/include/asm/device.h b/arch/x86/include/asm/device.h
index 3b23897..684ed6c 100644
--- a/arch/x86/include/asm/device.h
+++ b/arch/x86/include/asm/device.h
@@ -16,8 +16,8 @@ struct dma_domain {
 	struct dma_map_ops *dma_ops;
 	int domain_nr;
 };
-extern void add_dma_domain(struct dma_domain *domain);
-extern void del_dma_domain(struct dma_domain *domain);
+void add_dma_domain(struct dma_domain *domain);
+void del_dma_domain(struct dma_domain *domain);
 #endif
 
 struct pdev_archdata {
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index 106fd13..2879efc 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -642,8 +642,8 @@ unsigned int pcibios_assign_all_busses(void)
 }
 
 #if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
-LIST_HEAD(dma_domain_list);
-DEFINE_SPINLOCK(dma_domain_list_lock);
+static LIST_HEAD(dma_domain_list);
+static DEFINE_SPINLOCK(dma_domain_list_lock);
 
 void add_dma_domain(struct dma_domain *domain)
 {
diff --git a/arch/x86/pci/vmd.c b/arch/x86/pci/vmd.c
index 56ef447..d57e480 100644
--- a/arch/x86/pci/vmd.c
+++ b/arch/x86/pci/vmd.c
@@ -27,20 +27,24 @@
 #include <asm/msi.h>
 #include <asm/msidef.h>