| Message ID | 1455069384-22323-1-git-send-email-dhdang@apm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Delegated to: | Bjorn Helgaas |
On 10/02/16 01:56, Duc Dang wrote:
> This patch makes pci-xgene-msi driver ACPI-aware and provides
> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode.
>
> Signed-off-by: Duc Dang <dhdang@apm.com>

Another nail in the ACPI "standardization" coffin, I suppose... Oh well.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

M.
On Wed, Feb 10, 2016 at 9:08 AM, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On 10/02/16 01:56, Duc Dang wrote:
>> This patch makes pci-xgene-msi driver ACPI-aware and provides
>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode.
>>
>> Signed-off-by: Duc Dang <dhdang@apm.com>
>
> Another nail in the ACPI "standardization" coffin, I suppose... Oh well.
>
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>

Thanks, Marc.

Regards,
Duc Dang.

> M.
>
> --
> Jazz is not dead. It just smells funny...
On Tue, Feb 9, 2016 at 5:56 PM, Duc Dang <dhdang@apm.com> wrote: > This patch makes pci-xgene-msi driver ACPI-aware and provides > MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. Hi Bjorn, Are you planning to take this patch into your tree? Regards, Duc Dang. > > Signed-off-by: Duc Dang <dhdang@apm.com> > --- > drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > 1 file changed, 32 insertions(+), 3 deletions(-) > > diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > index a6456b5..466aa93 100644 > --- a/drivers/pci/host/pci-xgene-msi.c > +++ b/drivers/pci/host/pci-xgene-msi.c > @@ -24,6 +24,7 @@ > #include <linux/pci.h> > #include <linux/platform_device.h> > #include <linux/of_pci.h> > +#include <linux/acpi.h> > > #define MSI_IR0 0x000000 > #define MSI_INT0 0x800000 > @@ -39,7 +40,7 @@ struct xgene_msi_group { > }; > > struct xgene_msi { > - struct device_node *node; > + struct fwnode_handle *fwnode; > struct irq_domain *inner_domain; > struct irq_domain *msi_domain; > u64 msi_addr; > @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > .free = xgene_irq_domain_free, > }; > > +#ifdef CONFIG_ACPI > +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > +{ > + return xgene_msi_ctrl.fwnode; > +} > +#endif > + > static int xgene_allocate_domains(struct xgene_msi *msi) > { > msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > if (!msi->inner_domain) > return -ENOMEM; > > - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > &xgene_msi_domain_info, > msi->inner_domain); > > @@ -265,6 +273,9 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > return -ENOMEM; > } > > +#ifdef CONFIG_ACPI > + pci_msi_register_fwnode_provider(&xgene_msi_get_fwnode); > +#endif > return 0; > } > > @@ -473,6 +484,13 @@ static const struct of_device_id xgene_msi_match_table[] = { > {}, > }; > > +#ifdef CONFIG_ACPI > +static const struct acpi_device_id xgene_msi_acpi_ids[] = { > + {"APMC0D0E", 0}, > + { }, > +}; > +#endif > + > static int xgene_msi_probe(struct platform_device *pdev) > { > struct resource *res; > @@ -494,7 +512,17 @@ static int xgene_msi_probe(struct platform_device *pdev) > goto error; > } > xgene_msi->msi_addr = res->start; > - xgene_msi->node = pdev->dev.of_node; > + > + xgene_msi->fwnode = of_node_to_fwnode(pdev->dev.of_node); > + if (!xgene_msi->fwnode) { > + xgene_msi->fwnode = irq_domain_alloc_fwnode(NULL); > + if (!xgene_msi->fwnode) { > + dev_err(&pdev->dev, "Failed to create fwnode\n"); > + rc = ENOMEM; > + goto error; > + } > + } > + > xgene_msi->num_cpus = num_possible_cpus(); > > rc = xgene_msi_init_allocator(xgene_msi); > @@ -571,6 +599,7 @@ static struct platform_driver xgene_msi_driver = { > .driver = { > .name = "xgene-msi", > .of_match_table = xgene_msi_match_table, > + .acpi_match_table = ACPI_PTR(xgene_msi_acpi_ids), > }, > .probe = xgene_msi_probe, > .remove = xgene_msi_remove, > -- > 1.9.1 > -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On Sat, Feb 20, 2016 at 1:47 PM, Duc Dang <dhdang@apm.com> wrote: > On Tue, Feb 9, 2016 at 5:56 PM, Duc Dang <dhdang@apm.com> wrote: >> This patch makes pci-xgene-msi driver ACPI-aware and provides >> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > > Hi Bjorn, > > Are you planning to take this patch into your tree? Sorry, I haven't had a chance to look at this yet. I had a sick daughter last week, so I'm even farther behind than usual. When I get to it, I'll be asking questions like: - Does this correspond to some analogous x86 code? - Is there something unique about arm64 or X-Gene that means it needs special code? I don't really understand the arm64/ACPI strategy yet, so I'll be looking first for generic solutions that work across x86/ia64/arm64, and justifications for the inevitable exceptions. >> Signed-off-by: Duc Dang <dhdang@apm.com> >> --- >> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- >> 1 file changed, 32 insertions(+), 3 deletions(-) >> >> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c >> index a6456b5..466aa93 100644 >> --- a/drivers/pci/host/pci-xgene-msi.c >> +++ b/drivers/pci/host/pci-xgene-msi.c >> @@ -24,6 +24,7 @@ >> #include <linux/pci.h> >> #include <linux/platform_device.h> >> #include <linux/of_pci.h> >> +#include <linux/acpi.h> >> >> #define MSI_IR0 0x000000 >> #define MSI_INT0 0x800000 >> @@ -39,7 +40,7 @@ struct xgene_msi_group { >> }; >> >> struct xgene_msi { >> - struct device_node *node; >> + struct fwnode_handle *fwnode; >> struct irq_domain *inner_domain; >> struct irq_domain *msi_domain; >> u64 msi_addr; >> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { >> .free = xgene_irq_domain_free, >> }; >> >> +#ifdef CONFIG_ACPI >> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) >> +{ >> + return xgene_msi_ctrl.fwnode; >> +} >> +#endif >> + >> static int xgene_allocate_domains(struct xgene_msi *msi) >> { >> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, >> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) >> if (!msi->inner_domain) >> return -ENOMEM; >> >> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), >> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, >> &xgene_msi_domain_info, >> msi->inner_domain); >> >> @@ -265,6 +273,9 @@ static int xgene_allocate_domains(struct xgene_msi *msi) >> return -ENOMEM; >> } >> >> +#ifdef CONFIG_ACPI >> + pci_msi_register_fwnode_provider(&xgene_msi_get_fwnode); >> +#endif >> return 0; >> } >> >> @@ -473,6 +484,13 @@ static const struct of_device_id xgene_msi_match_table[] = { >> {}, >> }; >> >> +#ifdef CONFIG_ACPI >> +static const struct acpi_device_id xgene_msi_acpi_ids[] = { >> + {"APMC0D0E", 0}, >> + { }, >> +}; >> +#endif >> + >> static int xgene_msi_probe(struct platform_device *pdev) >> { >> struct resource *res; >> @@ -494,7 +512,17 @@ static int xgene_msi_probe(struct platform_device *pdev) >> goto error; >> } >> xgene_msi->msi_addr = res->start; >> - xgene_msi->node = pdev->dev.of_node; >> + >> + xgene_msi->fwnode = of_node_to_fwnode(pdev->dev.of_node); >> + if (!xgene_msi->fwnode) { >> + xgene_msi->fwnode = irq_domain_alloc_fwnode(NULL); >> + if (!xgene_msi->fwnode) { >> + dev_err(&pdev->dev, "Failed to create fwnode\n"); >> + rc = ENOMEM; >> + goto error; >> + } >> + } >> + >> xgene_msi->num_cpus = num_possible_cpus(); >> >> rc = xgene_msi_init_allocator(xgene_msi); >> @@ -571,6 +599,7 @@ static struct platform_driver 
xgene_msi_driver = { >> .driver = { >> .name = "xgene-msi", >> .of_match_table = xgene_msi_match_table, >> + .acpi_match_table = ACPI_PTR(xgene_msi_acpi_ids), >> }, >> .probe = xgene_msi_probe, >> .remove = xgene_msi_remove, >> -- >> 1.9.1 >> -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: > This patch makes pci-xgene-msi driver ACPI-aware and provides > MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > > Signed-off-by: Duc Dang <dhdang@apm.com> > --- > drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > 1 file changed, 32 insertions(+), 3 deletions(-) > > diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > index a6456b5..466aa93 100644 > --- a/drivers/pci/host/pci-xgene-msi.c > +++ b/drivers/pci/host/pci-xgene-msi.c > @@ -24,6 +24,7 @@ > #include <linux/pci.h> > #include <linux/platform_device.h> > #include <linux/of_pci.h> > +#include <linux/acpi.h> > > #define MSI_IR0 0x000000 > #define MSI_INT0 0x800000 > @@ -39,7 +40,7 @@ struct xgene_msi_group { > }; > > struct xgene_msi { > - struct device_node *node; > + struct fwnode_handle *fwnode; > struct irq_domain *inner_domain; > struct irq_domain *msi_domain; > u64 msi_addr; > @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > .free = xgene_irq_domain_free, > }; > > +#ifdef CONFIG_ACPI > +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > +{ > + return xgene_msi_ctrl.fwnode; > +} > +#endif > + > static int xgene_allocate_domains(struct xgene_msi *msi) > { > msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > if (!msi->inner_domain) > return -ENOMEM; > > - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > &xgene_msi_domain_info, > msi->inner_domain); This doesn't work for me (ACPI probing on Mustang) unless I change this to be pci_msi_create_default_irq_domain(). The problem seems to be that the MSI probe happens after the PCIe RC is probed so there is no MSI domain at the time the PCIe root is initialized by ACPI. > > @@ -265,6 +273,9 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > return -ENOMEM; > } > > +#ifdef CONFIG_ACPI > + pci_msi_register_fwnode_provider(&xgene_msi_get_fwnode); > +#endif > return 0; > } > > @@ -473,6 +484,13 @@ static const struct of_device_id xgene_msi_match_table[] = { > {}, > }; > > +#ifdef CONFIG_ACPI > +static const struct acpi_device_id xgene_msi_acpi_ids[] = { > + {"APMC0D0E", 0}, > + { }, > +}; > +#endif > + > static int xgene_msi_probe(struct platform_device *pdev) > { > struct resource *res; > @@ -494,7 +512,17 @@ static int xgene_msi_probe(struct platform_device *pdev) > goto error; > } > xgene_msi->msi_addr = res->start; > - xgene_msi->node = pdev->dev.of_node; > + > + xgene_msi->fwnode = of_node_to_fwnode(pdev->dev.of_node); > + if (!xgene_msi->fwnode) { > + xgene_msi->fwnode = irq_domain_alloc_fwnode(NULL); > + if (!xgene_msi->fwnode) { > + dev_err(&pdev->dev, "Failed to create fwnode\n"); > + rc = ENOMEM; > + goto error; > + } > + } > + > xgene_msi->num_cpus = num_possible_cpus(); > > rc = xgene_msi_init_allocator(xgene_msi); > @@ -571,6 +599,7 @@ static struct platform_driver xgene_msi_driver = { > .driver = { > .name = "xgene-msi", > .of_match_table = xgene_msi_match_table, > + .acpi_match_table = ACPI_PTR(xgene_msi_acpi_ids), > }, > .probe = xgene_msi_probe, > .remove = xgene_msi_remove, -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
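For readers following the thread: the workaround Mark describes amounts to swapping the domain-creation call in xgene_allocate_domains(). A minimal sketch, assuming the pci_msi_create_default_irq_domain() helper of this kernel generation takes the same arguments as pci_msi_create_irq_domain() and registers the result as the system-wide fallback returned by arch_get_pci_msi_domain():

```c
/*
 * Sketch of the change Mark tried (not the submitted patch): register the
 * X-Gene MSI domain as the system-wide default so that host bridges
 * enumerated before this driver probes can still find it through the
 * arch_get_pci_msi_domain() fallback.
 */
static int xgene_allocate_domains(struct xgene_msi *msi)
{
	msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC,
						  &msi_domain_ops, msi);
	if (!msi->inner_domain)
		return -ENOMEM;

	msi->msi_domain =
		pci_msi_create_default_irq_domain(msi->fwnode,
						  &xgene_msi_domain_info,
						  msi->inner_domain);
	if (!msi->msi_domain) {
		irq_domain_remove(msi->inner_domain);
		return -ENOMEM;
	}

	return 0;
}
```

As Marc points out below, a default domain only behaves sensibly when there is exactly one MSI controller in the system, which is why this approach was rejected for arm64.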
On 24/02/16 16:09, Mark Salter wrote: > On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: >> This patch makes pci-xgene-msi driver ACPI-aware and provides >> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. >> >> Signed-off-by: Duc Dang <dhdang@apm.com> >> --- >> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- >> 1 file changed, 32 insertions(+), 3 deletions(-) >> >> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c >> index a6456b5..466aa93 100644 >> --- a/drivers/pci/host/pci-xgene-msi.c >> +++ b/drivers/pci/host/pci-xgene-msi.c >> @@ -24,6 +24,7 @@ >> #include <linux/pci.h> >> #include <linux/platform_device.h> >> #include <linux/of_pci.h> >> +#include <linux/acpi.h> >> >> #define MSI_IR0 0x000000 >> #define MSI_INT0 0x800000 >> @@ -39,7 +40,7 @@ struct xgene_msi_group { >> }; >> >> struct xgene_msi { >> - struct device_node *node; >> + struct fwnode_handle *fwnode; >> struct irq_domain *inner_domain; >> struct irq_domain *msi_domain; >> u64 msi_addr; >> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { >> .free = xgene_irq_domain_free, >> }; >> >> +#ifdef CONFIG_ACPI >> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) >> +{ >> + return xgene_msi_ctrl.fwnode; >> +} >> +#endif >> + >> static int xgene_allocate_domains(struct xgene_msi *msi) >> { >> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, >> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) >> if (!msi->inner_domain) >> return -ENOMEM; >> >> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), >> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, >> &xgene_msi_domain_info, >> msi->inner_domain); > > This doesn't work for me (ACPI probing on Mustang) unless I change this > to be pci_msi_create_default_irq_domain(). The problem seems to be that > the MSI probe happens after the PCIe RC is probed so there is no MSI domain > at the time the PCIe root is initialized by ACPI. pci_msi_create_default_irq_domain is the wrong thing do use, specially if you have multiple MSI controllers in the system. I certainly wouldn't want to see it being used on arm64. This is the usual dependency hell. You try moving the probing earlier, but that may break something else in the process. M.
On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: > On 24/02/16 16:09, Mark Salter wrote: >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: >>> This patch makes pci-xgene-msi driver ACPI-aware and provides >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. >>> >>> Signed-off-by: Duc Dang <dhdang@apm.com> >>> --- >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- >>> 1 file changed, 32 insertions(+), 3 deletions(-) >>> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c >>> index a6456b5..466aa93 100644 >>> --- a/drivers/pci/host/pci-xgene-msi.c >>> +++ b/drivers/pci/host/pci-xgene-msi.c >>> @@ -24,6 +24,7 @@ >>> #include <linux/pci.h> >>> #include <linux/platform_device.h> >>> #include <linux/of_pci.h> >>> +#include <linux/acpi.h> >>> >>> #define MSI_IR0 0x000000 >>> #define MSI_INT0 0x800000 >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { >>> }; >>> >>> struct xgene_msi { >>> - struct device_node *node; >>> + struct fwnode_handle *fwnode; >>> struct irq_domain *inner_domain; >>> struct irq_domain *msi_domain; >>> u64 msi_addr; >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { >>> .free = xgene_irq_domain_free, >>> }; >>> >>> +#ifdef CONFIG_ACPI >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) >>> +{ >>> + return xgene_msi_ctrl.fwnode; >>> +} >>> +#endif >>> + >>> static int xgene_allocate_domains(struct xgene_msi *msi) >>> { >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) >>> if (!msi->inner_domain) >>> return -ENOMEM; >>> >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, >>> &xgene_msi_domain_info, >>> msi->inner_domain); >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this >> to be pci_msi_create_default_irq_domain(). The problem seems to be that >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain >> at the time the PCIe root is initialized by ACPI. > > pci_msi_create_default_irq_domain is the wrong thing do use, specially > if you have multiple MSI controllers in the system. I certainly wouldn't > want to see it being used on arm64. > > This is the usual dependency hell. You try moving the probing earlier, > but that may break something else in the process. Hi Mark and Marc, I have modified Tianocore firmware to have MSI node declared before PCIe node in Dsdt table. With this modification, the MSI driver will be loaded before PCIe driver and MSI domain is available at the time PCIe root is initialized. So we will need a UEFI firmware update to work with this patch. Regards, Duc Dang. > > M. > -- > Jazz is not dead. It just smells funny... -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote: > On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: > > On 24/02/16 16:09, Mark Salter wrote: > >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: > >>> This patch makes pci-xgene-msi driver ACPI-aware and provides > >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > >>> > >>> Signed-off-by: Duc Dang <dhdang@apm.com> > >>> --- > >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > >>> 1 file changed, 32 insertions(+), 3 deletions(-) > >>> > >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > >>> index a6456b5..466aa93 100644 > >>> --- a/drivers/pci/host/pci-xgene-msi.c > >>> +++ b/drivers/pci/host/pci-xgene-msi.c > >>> @@ -24,6 +24,7 @@ > >>> #include <linux/pci.h> > >>> #include <linux/platform_device.h> > >>> #include <linux/of_pci.h> > >>> +#include <linux/acpi.h> > >>> > >>> #define MSI_IR0 0x000000 > >>> #define MSI_INT0 0x800000 > >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { > >>> }; > >>> > >>> struct xgene_msi { > >>> - struct device_node *node; > >>> + struct fwnode_handle *fwnode; > >>> struct irq_domain *inner_domain; > >>> struct irq_domain *msi_domain; > >>> u64 msi_addr; > >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > >>> .free = xgene_irq_domain_free, > >>> }; > >>> > >>> +#ifdef CONFIG_ACPI > >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > >>> +{ > >>> + return xgene_msi_ctrl.fwnode; > >>> +} > >>> +#endif > >>> + > >>> static int xgene_allocate_domains(struct xgene_msi *msi) > >>> { > >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > >>> if (!msi->inner_domain) > >>> return -ENOMEM; > >>> > >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > >>> &xgene_msi_domain_info, > >>> msi->inner_domain); > >> > >> This doesn't work for me (ACPI probing on Mustang) unless I change this > >> to be pci_msi_create_default_irq_domain(). The problem seems to be that > >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain > >> at the time the PCIe root is initialized by ACPI. > > > > pci_msi_create_default_irq_domain is the wrong thing do use, specially > > if you have multiple MSI controllers in the system. I certainly wouldn't > > want to see it being used on arm64. > > > > This is the usual dependency hell. You try moving the probing earlier, > > but that may break something else in the process. > > Hi Mark and Marc, > > I have modified Tianocore firmware to have MSI node declared before > PCIe node in Dsdt table. With this modification, the MSI driver will > be loaded before PCIe driver and MSI domain is available at the time > PCIe root is initialized. I am totally against this. We should not hack ACPI tables to make the kernel work (on top of that with unwritten ordering rules that may well require changes as kernel evolves), we should have a standard set of bindings that people use to describe HW in the DSDT and the kernel(s) has to cope with that. If there is a dependency problem in the description we may solve it at bindings level, but I absolutely do not want to rely on DSDT nodes ordering for things to work, that's fragile, no shortcuts please. 
Thanks,
Lorenzo
On Mon, Feb 22, 2016 at 8:03 AM, Bjorn Helgaas <bhelgaas@google.com> wrote: > > On Sat, Feb 20, 2016 at 1:47 PM, Duc Dang <dhdang@apm.com> wrote: > > On Tue, Feb 9, 2016 at 5:56 PM, Duc Dang <dhdang@apm.com> wrote: > >> This patch makes pci-xgene-msi driver ACPI-aware and provides > >> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > > > > Hi Bjorn, > > > > Are you planning to take this patch into your tree? > > Sorry, I haven't had a chance to look at this yet. I had a sick > daughter last week, so I'm even farther behind than usual. > Hi Bjorn, I hope your daughter is doing well. > When I get to it, I'll be asking questions like: > > - Does this correspond to some analogous x86 code? I am not aware of any x86 code similar to this. > - Is there something unique about arm64 or X-Gene that means it > needs special code? The MSI controller on X-Gene v1 is a separate IP block. pci-xgene-msi is a separate X-Gene v1 specific driver that provides MSI capability for X-Gene v1 PCIe controllers. We already have device-tree version of this driver. So the purpose of this patch is to make the driver can be discovered and loaded when booting with ACPI mode. Regards, Duc Dang. > > I don't really understand the arm64/ACPI strategy yet, so I'll be > looking first for generic solutions that work across x86/ia64/arm64, > and justifications for the inevitable exceptions. > > >> Signed-off-by: Duc Dang <dhdang@apm.com> > >> --- > >> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > >> 1 file changed, 32 insertions(+), 3 deletions(-) > >> > >> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > >> index a6456b5..466aa93 100644 > >> --- a/drivers/pci/host/pci-xgene-msi.c > >> +++ b/drivers/pci/host/pci-xgene-msi.c > >> @@ -24,6 +24,7 @@ > >> #include <linux/pci.h> > >> #include <linux/platform_device.h> > >> #include <linux/of_pci.h> > >> +#include <linux/acpi.h> > >> > >> #define MSI_IR0 0x000000 > >> #define MSI_INT0 0x800000 > >> @@ -39,7 +40,7 @@ struct xgene_msi_group { > >> }; > >> > >> struct xgene_msi { > >> - struct device_node *node; > >> + struct fwnode_handle *fwnode; > >> struct irq_domain *inner_domain; > >> struct irq_domain *msi_domain; > >> u64 msi_addr; > >> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > >> .free = xgene_irq_domain_free, > >> }; > >> > >> +#ifdef CONFIG_ACPI > >> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > >> +{ > >> + return xgene_msi_ctrl.fwnode; > >> +} > >> +#endif > >> + > >> static int xgene_allocate_domains(struct xgene_msi *msi) > >> { > >> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > >> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > >> if (!msi->inner_domain) > >> return -ENOMEM; > >> > >> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > >> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > >> &xgene_msi_domain_info, > >> msi->inner_domain); > >> > >> @@ -265,6 +273,9 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > >> return -ENOMEM; > >> } > >> > >> +#ifdef CONFIG_ACPI > >> + pci_msi_register_fwnode_provider(&xgene_msi_get_fwnode); > >> +#endif > >> return 0; > >> } > >> > >> @@ -473,6 +484,13 @@ static const struct of_device_id xgene_msi_match_table[] = { > >> {}, > >> }; > >> > >> +#ifdef CONFIG_ACPI > >> +static const struct acpi_device_id xgene_msi_acpi_ids[] = { > >> + {"APMC0D0E", 0}, > >> + { }, > >> +}; > >> +#endif > >> + > >> 
static int xgene_msi_probe(struct platform_device *pdev) > >> { > >> struct resource *res; > >> @@ -494,7 +512,17 @@ static int xgene_msi_probe(struct platform_device *pdev) > >> goto error; > >> } > >> xgene_msi->msi_addr = res->start; > >> - xgene_msi->node = pdev->dev.of_node; > >> + > >> + xgene_msi->fwnode = of_node_to_fwnode(pdev->dev.of_node); > >> + if (!xgene_msi->fwnode) { > >> + xgene_msi->fwnode = irq_domain_alloc_fwnode(NULL); > >> + if (!xgene_msi->fwnode) { > >> + dev_err(&pdev->dev, "Failed to create fwnode\n"); > >> + rc = ENOMEM; > >> + goto error; > >> + } > >> + } > >> + > >> xgene_msi->num_cpus = num_possible_cpus(); > >> > >> rc = xgene_msi_init_allocator(xgene_msi); > >> @@ -571,6 +599,7 @@ static struct platform_driver xgene_msi_driver = { > >> .driver = { > >> .name = "xgene-msi", > >> .of_match_table = xgene_msi_match_table, > >> + .acpi_match_table = ACPI_PTR(xgene_msi_acpi_ids), > >> }, > >> .probe = xgene_msi_probe, > >> .remove = xgene_msi_remove, > >> -- > >> 1.9.1 > >> -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On Thu, Feb 25, 2016 at 9:38 AM, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote: > On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote: >> On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: >> > On 24/02/16 16:09, Mark Salter wrote: >> >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: >> >>> This patch makes pci-xgene-msi driver ACPI-aware and provides >> >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. >> >>> >> >>> Signed-off-by: Duc Dang <dhdang@apm.com> >> >>> --- >> >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- >> >>> 1 file changed, 32 insertions(+), 3 deletions(-) >> >>> >> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c >> >>> index a6456b5..466aa93 100644 >> >>> --- a/drivers/pci/host/pci-xgene-msi.c >> >>> +++ b/drivers/pci/host/pci-xgene-msi.c >> >>> @@ -24,6 +24,7 @@ >> >>> #include <linux/pci.h> >> >>> #include <linux/platform_device.h> >> >>> #include <linux/of_pci.h> >> >>> +#include <linux/acpi.h> >> >>> >> >>> #define MSI_IR0 0x000000 >> >>> #define MSI_INT0 0x800000 >> >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { >> >>> }; >> >>> >> >>> struct xgene_msi { >> >>> - struct device_node *node; >> >>> + struct fwnode_handle *fwnode; >> >>> struct irq_domain *inner_domain; >> >>> struct irq_domain *msi_domain; >> >>> u64 msi_addr; >> >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { >> >>> .free = xgene_irq_domain_free, >> >>> }; >> >>> >> >>> +#ifdef CONFIG_ACPI >> >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) >> >>> +{ >> >>> + return xgene_msi_ctrl.fwnode; >> >>> +} >> >>> +#endif >> >>> + >> >>> static int xgene_allocate_domains(struct xgene_msi *msi) >> >>> { >> >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, >> >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) >> >>> if (!msi->inner_domain) >> >>> return -ENOMEM; >> >>> >> >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), >> >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, >> >>> &xgene_msi_domain_info, >> >>> msi->inner_domain); >> >> >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this >> >> to be pci_msi_create_default_irq_domain(). The problem seems to be that >> >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain >> >> at the time the PCIe root is initialized by ACPI. >> > >> > pci_msi_create_default_irq_domain is the wrong thing do use, specially >> > if you have multiple MSI controllers in the system. I certainly wouldn't >> > want to see it being used on arm64. >> > >> > This is the usual dependency hell. You try moving the probing earlier, >> > but that may break something else in the process. >> >> Hi Mark and Marc, >> >> I have modified Tianocore firmware to have MSI node declared before >> PCIe node in Dsdt table. With this modification, the MSI driver will >> be loaded before PCIe driver and MSI domain is available at the time >> PCIe root is initialized. > > I am totally against this. We should not hack ACPI tables to make > the kernel work (on top of that with unwritten ordering rules that may > well require changes as kernel evolves), we should have a standard set > of bindings that people use to describe HW in the DSDT and the kernel(s) > has to cope with that. 
If there is a dependency problem in the description > we may solve it at bindings level, but I absolutely do not want to rely > on DSDT nodes ordering for things to work, that's fragile, no shortcuts > please. Hi Lorenzo, Bjorn, (Refresh this thread on this merge cycle) I tried to use _DEP method to describe the dependency of PCIe node to MSI node but it turns out that PCIe driver does not check for the dependency (only EC (ec.c) and I2C core (i2c-core.c) use acpi_walk_dep_device_list to check for dependency before loading dependent drivers) Do you have other suggestion/example about how to ensure strict ordering on driver loading in ACPI boot mode without changing the order of ACPI nodes in DSDT? Regards, Duc Dang. > > Thanks, > Lorenzo -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
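For context, the _DEP plumbing Duc refers to works as a supplier/consumer handshake. A rough sketch of the pattern used by ec.c and i2c-core at the time follows; the consumer_probe/supplier_signal_ready names are illustrative only, while acpi_walk_dep_device_list(), ACPI_COMPANION() and the dep_unmet counter are existing kernel interfaces of that era:

```c
#include <linux/acpi.h>
#include <linux/platform_device.h>

/* Consumer side: defer probing until every _DEP supplier has signalled. */
static int consumer_probe(struct platform_device *pdev)
{
	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);

	if (adev && adev->dep_unmet)
		return -EPROBE_DEFER;	/* e.g. the MSI block is not ready yet */

	/* ...normal probe... */
	return 0;
}

/* Supplier side: once operational, release every device that _DEPs on us. */
static void supplier_signal_ready(struct acpi_device *adev)
{
	acpi_walk_dep_device_list(adev->handle);
}
```

Duc's point is that the ACPI PCI root scan performs neither half of this handshake, so a _DEP from the PNP0A03 node to the MSI device node would simply be ignored.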
Hi Duc, On Wed, May 25, 2016 at 04:13:35PM -0700, Duc Dang wrote: > On Thu, Feb 25, 2016 at 9:38 AM, Lorenzo Pieralisi > <lorenzo.pieralisi@arm.com> wrote: > > On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote: > >> On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: > >> > On 24/02/16 16:09, Mark Salter wrote: > >> >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: > >> >>> This patch makes pci-xgene-msi driver ACPI-aware and provides > >> >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > >> >>> > >> >>> Signed-off-by: Duc Dang <dhdang@apm.com> > >> >>> --- > >> >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > >> >>> 1 file changed, 32 insertions(+), 3 deletions(-) > >> >>> > >> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > >> >>> index a6456b5..466aa93 100644 > >> >>> --- a/drivers/pci/host/pci-xgene-msi.c > >> >>> +++ b/drivers/pci/host/pci-xgene-msi.c > >> >>> @@ -24,6 +24,7 @@ > >> >>> #include <linux/pci.h> > >> >>> #include <linux/platform_device.h> > >> >>> #include <linux/of_pci.h> > >> >>> +#include <linux/acpi.h> > >> >>> > >> >>> #define MSI_IR0 0x000000 > >> >>> #define MSI_INT0 0x800000 > >> >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { > >> >>> }; > >> >>> > >> >>> struct xgene_msi { > >> >>> - struct device_node *node; > >> >>> + struct fwnode_handle *fwnode; > >> >>> struct irq_domain *inner_domain; > >> >>> struct irq_domain *msi_domain; > >> >>> u64 msi_addr; > >> >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > >> >>> .free = xgene_irq_domain_free, > >> >>> }; > >> >>> > >> >>> +#ifdef CONFIG_ACPI > >> >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > >> >>> +{ > >> >>> + return xgene_msi_ctrl.fwnode; > >> >>> +} > >> >>> +#endif > >> >>> + > >> >>> static int xgene_allocate_domains(struct xgene_msi *msi) > >> >>> { > >> >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > >> >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > >> >>> if (!msi->inner_domain) > >> >>> return -ENOMEM; > >> >>> > >> >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > >> >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > >> >>> &xgene_msi_domain_info, > >> >>> msi->inner_domain); > >> >> > >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this > >> >> to be pci_msi_create_default_irq_domain(). The problem seems to be that > >> >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain > >> >> at the time the PCIe root is initialized by ACPI. > >> > > >> > pci_msi_create_default_irq_domain is the wrong thing do use, specially > >> > if you have multiple MSI controllers in the system. I certainly wouldn't > >> > want to see it being used on arm64. > >> > > >> > This is the usual dependency hell. You try moving the probing earlier, > >> > but that may break something else in the process. > >> > >> Hi Mark and Marc, > >> > >> I have modified Tianocore firmware to have MSI node declared before > >> PCIe node in Dsdt table. With this modification, the MSI driver will > >> be loaded before PCIe driver and MSI domain is available at the time > >> PCIe root is initialized. > > > > I am totally against this. 
We should not hack ACPI tables to make > > the kernel work (on top of that with unwritten ordering rules that may > > well require changes as kernel evolves), we should have a standard set > > of bindings that people use to describe HW in the DSDT and the kernel(s) > > has to cope with that. If there is a dependency problem in the description > > we may solve it at bindings level, but I absolutely do not want to rely > > on DSDT nodes ordering for things to work, that's fragile, no shortcuts > > please. > > Hi Lorenzo, Bjorn, > > (Refresh this thread on this merge cycle) > > I tried to use _DEP method to describe the dependency of PCIe node to > MSI node but it turns out that PCIe driver does not check for the > dependency (only EC (ec.c) and I2C core (i2c-core.c) use > acpi_walk_dep_device_list to check for dependency before loading > dependent drivers) > > Do you have other suggestion/example about how to ensure strict > ordering on driver loading in ACPI boot mode without changing the > order of ACPI nodes in DSDT? Before doing anything else, I would like to ask you to report the code paths leading to failures (what fails actually ?) please, and I want to understand what "this does not work" means. In particular: - Why using pci_msi_create_default_irq_domain() works - Why, if you change the DSDT nodes ordering, things work (please add some debugging to MSI allocation functions to detect the code paths leading to correct MSI allocation) - What's the difference wrt the DT probe path When we have a detailed view of MSI allocation code paths and failures we can see what we can do to fix it in ACPI. Thanks, Lorenzo -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Lorenzo, On Thu, May 26, 2016 at 5:34 AM, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote: > Hi Duc, > > On Wed, May 25, 2016 at 04:13:35PM -0700, Duc Dang wrote: >> On Thu, Feb 25, 2016 at 9:38 AM, Lorenzo Pieralisi >> <lorenzo.pieralisi@arm.com> wrote: >> > On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote: >> >> On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: >> >> > On 24/02/16 16:09, Mark Salter wrote: >> >> >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: >> >> >>> This patch makes pci-xgene-msi driver ACPI-aware and provides >> >> >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. >> >> >>> >> >> >>> Signed-off-by: Duc Dang <dhdang@apm.com> >> >> >>> --- >> >> >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- >> >> >>> 1 file changed, 32 insertions(+), 3 deletions(-) >> >> >>> >> >> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c >> >> >>> index a6456b5..466aa93 100644 >> >> >>> --- a/drivers/pci/host/pci-xgene-msi.c >> >> >>> +++ b/drivers/pci/host/pci-xgene-msi.c >> >> >>> @@ -24,6 +24,7 @@ >> >> >>> #include <linux/pci.h> >> >> >>> #include <linux/platform_device.h> >> >> >>> #include <linux/of_pci.h> >> >> >>> +#include <linux/acpi.h> >> >> >>> >> >> >>> #define MSI_IR0 0x000000 >> >> >>> #define MSI_INT0 0x800000 >> >> >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { >> >> >>> }; >> >> >>> >> >> >>> struct xgene_msi { >> >> >>> - struct device_node *node; >> >> >>> + struct fwnode_handle *fwnode; >> >> >>> struct irq_domain *inner_domain; >> >> >>> struct irq_domain *msi_domain; >> >> >>> u64 msi_addr; >> >> >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { >> >> >>> .free = xgene_irq_domain_free, >> >> >>> }; >> >> >>> >> >> >>> +#ifdef CONFIG_ACPI >> >> >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) >> >> >>> +{ >> >> >>> + return xgene_msi_ctrl.fwnode; >> >> >>> +} >> >> >>> +#endif >> >> >>> + >> >> >>> static int xgene_allocate_domains(struct xgene_msi *msi) >> >> >>> { >> >> >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, >> >> >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) >> >> >>> if (!msi->inner_domain) >> >> >>> return -ENOMEM; >> >> >>> >> >> >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), >> >> >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, >> >> >>> &xgene_msi_domain_info, >> >> >>> msi->inner_domain); >> >> >> >> >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this >> >> >> to be pci_msi_create_default_irq_domain(). The problem seems to be that >> >> >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain >> >> >> at the time the PCIe root is initialized by ACPI. >> >> > >> >> > pci_msi_create_default_irq_domain is the wrong thing do use, specially >> >> > if you have multiple MSI controllers in the system. I certainly wouldn't >> >> > want to see it being used on arm64. >> >> > >> >> > This is the usual dependency hell. You try moving the probing earlier, >> >> > but that may break something else in the process. >> >> >> >> Hi Mark and Marc, >> >> >> >> I have modified Tianocore firmware to have MSI node declared before >> >> PCIe node in Dsdt table. With this modification, the MSI driver will >> >> be loaded before PCIe driver and MSI domain is available at the time >> >> PCIe root is initialized. >> > >> > I am totally against this. 
We should not hack ACPI tables to make >> > the kernel work (on top of that with unwritten ordering rules that may >> > well require changes as kernel evolves), we should have a standard set >> > of bindings that people use to describe HW in the DSDT and the kernel(s) >> > has to cope with that. If there is a dependency problem in the description >> > we may solve it at bindings level, but I absolutely do not want to rely >> > on DSDT nodes ordering for things to work, that's fragile, no shortcuts >> > please. >> >> Hi Lorenzo, Bjorn, >> >> (Refresh this thread on this merge cycle) >> >> I tried to use _DEP method to describe the dependency of PCIe node to >> MSI node but it turns out that PCIe driver does not check for the >> dependency (only EC (ec.c) and I2C core (i2c-core.c) use >> acpi_walk_dep_device_list to check for dependency before loading >> dependent drivers) >> >> Do you have other suggestion/example about how to ensure strict >> ordering on driver loading in ACPI boot mode without changing the >> order of ACPI nodes in DSDT? > > Before doing anything else, I would like to ask you to report the code > paths leading to failures (what fails actually ?) please, and I want > to understand what "this does not work" means. > > In particular: > > - Why using pci_msi_create_default_irq_domain() works > - Why, if you change the DSDT nodes ordering, things work (please add > some debugging to MSI allocation functions to detect the code paths > leading to correct MSI allocation) > - What's the difference wrt the DT probe path The MSI allocation happens with following code path (I use PMC pm80xx SAS driver as an example (drivers/scsi/pm8001/pm8001_init.c)): pci_enable_msix_exact pci_enable_msix_range pci_enable_msix msix_capbility_init pci_msi_setup_msi_irqs pci_msi_get_domain dev_get_msi_domain arch_get_pci_msi_domain return pci_msi_default_domain; In my failing case: pci_msi_get_domain return NULL, so MSI allocation function (pci_enable_msix_exact) fails. The expected MSI domain is registered when X-Gene MSI driver is initialized and is attached to each PCI bus by calling pci_set_bus_msi_domain during PCIe bus-scan: pci_scan_root_bus pci_scan_root_bus_msi pci_scan_child_bus pci_scan_bridge pci_add_new_bus pci_alloc_child_bus pci_set_bus_msi_domain pci_host_bridge_msi_domain pci_host_bridge_of_msi_domain pci_host_bridge_acpi_msi_domain + When booting with DT: X-Gene MSI driver is initialized early (subsys_initcall level); X-Gene PCIe driver is initialized later at device_initcall level (using module_platform_driver). Because of that when PCIe enumeration happens, the X-Gene MSI domain was already registered and available. So when the driver of PCIe Endpoint devices gets loaded, the call to pci_enable_msix_exact (to allocate MSI) succeeds. + When booting with ACPI: acpi_scan_init is called at subsys_initcall level (the same level with X-Gene MSI driver): acpi_init acpi_bus_init (GIC is initialized here) acpi_scan_init acpi_pci_root_init acpi_pci_link_init ... acpi_bus_scan acpi_bus_check_add ... Currently, in DSDT table, X-Gene MSI node is defined right after X-Gene PCIe nodes. So PCIe driver initialization and PCIe enumeration happens before MSI driver is initialized. During PCIe bus scan, the pci_set_bus_msi_domain gets a NULL irq_domain value from pci_host_bridge_msi_domain (as MSI driver is not initialized), so there is no MSI support for PCIe bus. 
+ Using pci_msi_create_default_irq_domain() helps MSI allocation succeed as pci_msi_get_domain (in the code path of pci_enable_msix_exact above) can fall back to get a valid domain from arch_get_pci_msi_domain (please note that PCIe endpoint driver gets called later (at device_initcall level (using module_init)), so at the time PCIe endpoint driver probe function executes, the MSI driver already register the global MSI domain with pci_msi_create_default_irq_domain) + If changing DSDT node ordering, to define MSI node before PCIe nodes, then X-Gene MSI driver will be loaded before PCIe host driver. At the time PCIe enumeration happens, X-Gene MSI driver already registered a valid MSI domain (by calling pci_msi_register_fwnode_provider to register the fwnode provider function), so all the detected PCIe buses (and PCIe endpoint devices) will have a valid MSI domain attached to them (as a result of function pci_set_bus_msi_domain) Regards, Duc Dang. > > When we have a detailed view of MSI allocation code paths and failures > we can see what we can do to fix it in ACPI. > > Thanks, > Lorenzo -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
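To make the failing lookup concrete, the chain Duc lists boils down to something like the following; this is a simplified restatement of the drivers/pci/msi.c logic of that era as described above, not a verbatim copy:

```c
/* Simplified restatement of the lookup chain described above. */
static struct irq_domain *pci_msi_get_domain(struct pci_dev *dev)
{
	struct irq_domain *domain;

	/*
	 * Set at enumeration time by pci_set_bus_msi_domain(), via
	 * pci_host_bridge_acpi_msi_domain() and the registered fwnode
	 * provider.  NULL if the MSI driver probed after the bus scan.
	 */
	domain = dev_get_msi_domain(&dev->dev);
	if (domain)
		return domain;

	/*
	 * Fallback: the global default domain, which only exists if
	 * someone called pci_msi_create_default_irq_domain().
	 */
	return arch_get_pci_msi_domain(dev);
}
```

In the ACPI boot being described, both lookups come back empty, so msix_capability_init() has no domain to allocate from and pci_enable_msix_exact() fails in the endpoint driver.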
[+ Rafael] On Thu, May 26, 2016 at 01:49:23PM -0700, Duc Dang wrote: > Hi Lorenzo, > > On Thu, May 26, 2016 at 5:34 AM, Lorenzo Pieralisi > <lorenzo.pieralisi@arm.com> wrote: > > Hi Duc, > > > > On Wed, May 25, 2016 at 04:13:35PM -0700, Duc Dang wrote: > >> On Thu, Feb 25, 2016 at 9:38 AM, Lorenzo Pieralisi > >> <lorenzo.pieralisi@arm.com> wrote: > >> > On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote: > >> >> On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: > >> >> > On 24/02/16 16:09, Mark Salter wrote: > >> >> >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: > >> >> >>> This patch makes pci-xgene-msi driver ACPI-aware and provides > >> >> >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > >> >> >>> > >> >> >>> Signed-off-by: Duc Dang <dhdang@apm.com> > >> >> >>> --- > >> >> >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > >> >> >>> 1 file changed, 32 insertions(+), 3 deletions(-) > >> >> >>> > >> >> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > >> >> >>> index a6456b5..466aa93 100644 > >> >> >>> --- a/drivers/pci/host/pci-xgene-msi.c > >> >> >>> +++ b/drivers/pci/host/pci-xgene-msi.c > >> >> >>> @@ -24,6 +24,7 @@ > >> >> >>> #include <linux/pci.h> > >> >> >>> #include <linux/platform_device.h> > >> >> >>> #include <linux/of_pci.h> > >> >> >>> +#include <linux/acpi.h> > >> >> >>> > >> >> >>> #define MSI_IR0 0x000000 > >> >> >>> #define MSI_INT0 0x800000 > >> >> >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { > >> >> >>> }; > >> >> >>> > >> >> >>> struct xgene_msi { > >> >> >>> - struct device_node *node; > >> >> >>> + struct fwnode_handle *fwnode; > >> >> >>> struct irq_domain *inner_domain; > >> >> >>> struct irq_domain *msi_domain; > >> >> >>> u64 msi_addr; > >> >> >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > >> >> >>> .free = xgene_irq_domain_free, > >> >> >>> }; > >> >> >>> > >> >> >>> +#ifdef CONFIG_ACPI > >> >> >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > >> >> >>> +{ > >> >> >>> + return xgene_msi_ctrl.fwnode; > >> >> >>> +} > >> >> >>> +#endif > >> >> >>> + > >> >> >>> static int xgene_allocate_domains(struct xgene_msi *msi) > >> >> >>> { > >> >> >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > >> >> >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > >> >> >>> if (!msi->inner_domain) > >> >> >>> return -ENOMEM; > >> >> >>> > >> >> >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > >> >> >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > >> >> >>> &xgene_msi_domain_info, > >> >> >>> msi->inner_domain); > >> >> >> > >> >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this > >> >> >> to be pci_msi_create_default_irq_domain(). The problem seems to be that > >> >> >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain > >> >> >> at the time the PCIe root is initialized by ACPI. > >> >> > > >> >> > pci_msi_create_default_irq_domain is the wrong thing do use, specially > >> >> > if you have multiple MSI controllers in the system. I certainly wouldn't > >> >> > want to see it being used on arm64. > >> >> > > >> >> > This is the usual dependency hell. You try moving the probing earlier, > >> >> > but that may break something else in the process. 
> >> >> > >> >> Hi Mark and Marc, > >> >> > >> >> I have modified Tianocore firmware to have MSI node declared before > >> >> PCIe node in Dsdt table. With this modification, the MSI driver will > >> >> be loaded before PCIe driver and MSI domain is available at the time > >> >> PCIe root is initialized. > >> > > >> > I am totally against this. We should not hack ACPI tables to make > >> > the kernel work (on top of that with unwritten ordering rules that may > >> > well require changes as kernel evolves), we should have a standard set > >> > of bindings that people use to describe HW in the DSDT and the kernel(s) > >> > has to cope with that. If there is a dependency problem in the description > >> > we may solve it at bindings level, but I absolutely do not want to rely > >> > on DSDT nodes ordering for things to work, that's fragile, no shortcuts > >> > please. > >> > >> Hi Lorenzo, Bjorn, > >> > >> (Refresh this thread on this merge cycle) > >> > >> I tried to use _DEP method to describe the dependency of PCIe node to > >> MSI node but it turns out that PCIe driver does not check for the > >> dependency (only EC (ec.c) and I2C core (i2c-core.c) use > >> acpi_walk_dep_device_list to check for dependency before loading > >> dependent drivers) > >> > >> Do you have other suggestion/example about how to ensure strict > >> ordering on driver loading in ACPI boot mode without changing the > >> order of ACPI nodes in DSDT? > > > > Before doing anything else, I would like to ask you to report the code > > paths leading to failures (what fails actually ?) please, and I want > > to understand what "this does not work" means. > > > > In particular: > > > > - Why using pci_msi_create_default_irq_domain() works > > - Why, if you change the DSDT nodes ordering, things work (please add > > some debugging to MSI allocation functions to detect the code paths > > leading to correct MSI allocation) > > - What's the difference wrt the DT probe path > > The MSI allocation happens with following code path (I use PMC pm80xx > SAS driver as an example (drivers/scsi/pm8001/pm8001_init.c)): > pci_enable_msix_exact > pci_enable_msix_range > pci_enable_msix > msix_capbility_init > pci_msi_setup_msi_irqs > pci_msi_get_domain > dev_get_msi_domain > arch_get_pci_msi_domain > return pci_msi_default_domain; > > In my failing case: pci_msi_get_domain return NULL, so MSI allocation > function (pci_enable_msix_exact) fails. > > The expected MSI domain is registered when X-Gene MSI driver is > initialized and is attached to each PCI bus by calling > pci_set_bus_msi_domain during PCIe bus-scan: > pci_scan_root_bus > pci_scan_root_bus_msi > pci_scan_child_bus > pci_scan_bridge > pci_add_new_bus > pci_alloc_child_bus > pci_set_bus_msi_domain > pci_host_bridge_msi_domain > pci_host_bridge_of_msi_domain > pci_host_bridge_acpi_msi_domain > > + When booting with DT: X-Gene MSI driver is initialized early > (subsys_initcall level); X-Gene PCIe driver is initialized later at > device_initcall level (using module_platform_driver). Because of that > when PCIe enumeration happens, the X-Gene MSI domain was already > registered and available. So when the driver of PCIe Endpoint devices > gets loaded, the call to pci_enable_msix_exact (to allocate MSI) > succeeds. > > + When booting with ACPI: acpi_scan_init is called at subsys_initcall > level (the same level with X-Gene MSI driver): > acpi_init > acpi_bus_init (GIC is initialized here) > acpi_scan_init > acpi_pci_root_init > acpi_pci_link_init > ... 
> acpi_bus_scan > acpi_bus_check_add > ... > > Currently, in DSDT table, X-Gene MSI node is defined right after > X-Gene PCIe nodes. So PCIe driver initialization and PCIe enumeration > happens before MSI driver is initialized. During PCIe bus scan, the > pci_set_bus_msi_domain gets a NULL irq_domain value from > pci_host_bridge_msi_domain (as MSI driver is not initialized), so > there is no MSI support for PCIe bus. > > + Using pci_msi_create_default_irq_domain() helps MSI allocation > succeed as pci_msi_get_domain (in the code path of > pci_enable_msix_exact above) can fall back to get a valid domain from > arch_get_pci_msi_domain (please note that PCIe endpoint driver gets > called later (at device_initcall level (using module_init)), so at the > time PCIe endpoint driver probe function executes, the MSI driver > already register the global MSI domain with > pci_msi_create_default_irq_domain) > > + If changing DSDT node ordering, to define MSI node before PCIe > nodes, then X-Gene MSI driver will be loaded before PCIe host driver. > At the time PCIe enumeration happens, X-Gene MSI driver already > registered a valid MSI domain (by calling > pci_msi_register_fwnode_provider to register the fwnode provider > function), so all the detected PCIe buses (and PCIe endpoint devices) > will have a valid MSI domain attached to them (as a result of function > pci_set_bus_msi_domain) This still works out of sheer luck, given that relying on initcall ordering at same init level is undefined (ie it works if your msi driver registration call is executed before acpi_init() is called). I though about using a scan handler like: See arch/ia64/hp/common/sba_iommu.c acpi_sba_ioc_handler but that won't solve your problem either since, given that PNP0A03 (ie PCI host bridge) is handled through a scan handler too, IIUC you will still have issues related to namespace device nodes ordering. I suspect we will have to add a hook walking the ACPI namespace before acpi_bus_scan() is called in acpi_scan_init() (aka MSI controllers must be probed before any other device is, that's the gist), I CC'ed Rafael to see if there is any other suitable way to sort out this issue. Thanks, Lorenzo -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
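For reference, the ia64 precedent Lorenzo mentions hooks device discovery through an ACPI scan handler registered before acpi_scan_init() runs. A sketch of what an X-Gene analogue might look like is below; the xgene_* names are hypothetical, struct acpi_scan_handler and acpi_scan_add_handler() are the interfaces sba_iommu.c uses (they are ACPI-core internal, so the real thing would have to live where they are visible), and, as Lorenzo notes, this still would not fix the ordering against the PNP0A03 scan handler:

```c
#include <linux/acpi.h>
#include <linux/init.h>

/* Hypothetical attach callback: set up the MSI irqdomain here, before the
 * PCI host bridges are enumerated by the ACPI bus scan. */
static int xgene_msi_scan_attach(struct acpi_device *adev,
				 const struct acpi_device_id *id)
{
	/* ...allocate and register the MSI domains for this device... */
	return 1;	/* non-zero: tell the scan code the device is handled */
}

static const struct acpi_device_id xgene_msi_scan_ids[] = {
	{ "APMC0D0E", 0 },
	{ }
};

/* Modeled on acpi_sba_ioc_handler in arch/ia64/hp/common/sba_iommu.c. */
static struct acpi_scan_handler xgene_msi_scan_handler = {
	.ids	= xgene_msi_scan_ids,
	.attach	= xgene_msi_scan_attach,
};

static int __init xgene_msi_acpi_scan_init(void)
{
	if (acpi_disabled)
		return 0;
	/* Must run before acpi_scan_init(), hence an early initcall. */
	return acpi_scan_add_handler(&xgene_msi_scan_handler);
}
arch_initcall(xgene_msi_acpi_scan_init);
```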
On Fri, May 27, 2016 at 3:52 AM, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote: > > [+ Rafael] > > On Thu, May 26, 2016 at 01:49:23PM -0700, Duc Dang wrote: > > Hi Lorenzo, > > > > On Thu, May 26, 2016 at 5:34 AM, Lorenzo Pieralisi > > <lorenzo.pieralisi@arm.com> wrote: > > > Hi Duc, > > > > > > On Wed, May 25, 2016 at 04:13:35PM -0700, Duc Dang wrote: > > >> On Thu, Feb 25, 2016 at 9:38 AM, Lorenzo Pieralisi > > >> <lorenzo.pieralisi@arm.com> wrote: > > >> > On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote: > > >> >> On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@arm.com> wrote: > > >> >> > On 24/02/16 16:09, Mark Salter wrote: > > >> >> >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote: > > >> >> >>> This patch makes pci-xgene-msi driver ACPI-aware and provides > > >> >> >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode. > > >> >> >>> > > >> >> >>> Signed-off-by: Duc Dang <dhdang@apm.com> > > >> >> >>> --- > > >> >> >>> drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++--- > > >> >> >>> 1 file changed, 32 insertions(+), 3 deletions(-) > > >> >> >>> > > >> >> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c > > >> >> >>> index a6456b5..466aa93 100644 > > >> >> >>> --- a/drivers/pci/host/pci-xgene-msi.c > > >> >> >>> +++ b/drivers/pci/host/pci-xgene-msi.c > > >> >> >>> @@ -24,6 +24,7 @@ > > >> >> >>> #include <linux/pci.h> > > >> >> >>> #include <linux/platform_device.h> > > >> >> >>> #include <linux/of_pci.h> > > >> >> >>> +#include <linux/acpi.h> > > >> >> >>> > > >> >> >>> #define MSI_IR0 0x000000 > > >> >> >>> #define MSI_INT0 0x800000 > > >> >> >>> @@ -39,7 +40,7 @@ struct xgene_msi_group { > > >> >> >>> }; > > >> >> >>> > > >> >> >>> struct xgene_msi { > > >> >> >>> - struct device_node *node; > > >> >> >>> + struct fwnode_handle *fwnode; > > >> >> >>> struct irq_domain *inner_domain; > > >> >> >>> struct irq_domain *msi_domain; > > >> >> >>> u64 msi_addr; > > >> >> >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = { > > >> >> >>> .free = xgene_irq_domain_free, > > >> >> >>> }; > > >> >> >>> > > >> >> >>> +#ifdef CONFIG_ACPI > > >> >> >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev) > > >> >> >>> +{ > > >> >> >>> + return xgene_msi_ctrl.fwnode; > > >> >> >>> +} > > >> >> >>> +#endif > > >> >> >>> + > > >> >> >>> static int xgene_allocate_domains(struct xgene_msi *msi) > > >> >> >>> { > > >> >> >>> msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC, > > >> >> >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi) > > >> >> >>> if (!msi->inner_domain) > > >> >> >>> return -ENOMEM; > > >> >> >>> > > >> >> >>> - msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), > > >> >> >>> + msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode, > > >> >> >>> &xgene_msi_domain_info, > > >> >> >>> msi->inner_domain); > > >> >> >> > > >> >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this > > >> >> >> to be pci_msi_create_default_irq_domain(). The problem seems to be that > > >> >> >> the MSI probe happens after the PCIe RC is probed so there is no MSI domain > > >> >> >> at the time the PCIe root is initialized by ACPI. > > >> >> > > > >> >> > pci_msi_create_default_irq_domain is the wrong thing do use, specially > > >> >> > if you have multiple MSI controllers in the system. I certainly wouldn't > > >> >> > want to see it being used on arm64. 
> > >> >> > > > >> >> > This is the usual dependency hell. You try moving the probing earlier, > > >> >> > but that may break something else in the process. > > >> >> > > >> >> Hi Mark and Marc, > > >> >> > > >> >> I have modified Tianocore firmware to have MSI node declared before > > >> >> PCIe node in Dsdt table. With this modification, the MSI driver will > > >> >> be loaded before PCIe driver and MSI domain is available at the time > > >> >> PCIe root is initialized. > > >> > > > >> > I am totally against this. We should not hack ACPI tables to make > > >> > the kernel work (on top of that with unwritten ordering rules that may > > >> > well require changes as kernel evolves), we should have a standard set > > >> > of bindings that people use to describe HW in the DSDT and the kernel(s) > > >> > has to cope with that. If there is a dependency problem in the description > > >> > we may solve it at bindings level, but I absolutely do not want to rely > > >> > on DSDT nodes ordering for things to work, that's fragile, no shortcuts > > >> > please. > > >> > > >> Hi Lorenzo, Bjorn, > > >> > > >> (Refresh this thread on this merge cycle) > > >> > > >> I tried to use _DEP method to describe the dependency of PCIe node to > > >> MSI node but it turns out that PCIe driver does not check for the > > >> dependency (only EC (ec.c) and I2C core (i2c-core.c) use > > >> acpi_walk_dep_device_list to check for dependency before loading > > >> dependent drivers) > > >> > > >> Do you have other suggestion/example about how to ensure strict > > >> ordering on driver loading in ACPI boot mode without changing the > > >> order of ACPI nodes in DSDT? > > > > > > Before doing anything else, I would like to ask you to report the code > > > paths leading to failures (what fails actually ?) please, and I want > > > to understand what "this does not work" means. > > > > > > In particular: > > > > > > - Why using pci_msi_create_default_irq_domain() works > > > - Why, if you change the DSDT nodes ordering, things work (please add > > > some debugging to MSI allocation functions to detect the code paths > > > leading to correct MSI allocation) > > > - What's the difference wrt the DT probe path > > > > The MSI allocation happens with following code path (I use PMC pm80xx > > SAS driver as an example (drivers/scsi/pm8001/pm8001_init.c)): > > pci_enable_msix_exact > > pci_enable_msix_range > > pci_enable_msix > > msix_capbility_init > > pci_msi_setup_msi_irqs > > pci_msi_get_domain > > dev_get_msi_domain > > arch_get_pci_msi_domain > > return pci_msi_default_domain; > > > > In my failing case: pci_msi_get_domain return NULL, so MSI allocation > > function (pci_enable_msix_exact) fails. > > > > The expected MSI domain is registered when X-Gene MSI driver is > > initialized and is attached to each PCI bus by calling > > pci_set_bus_msi_domain during PCIe bus-scan: > > pci_scan_root_bus > > pci_scan_root_bus_msi > > pci_scan_child_bus > > pci_scan_bridge > > pci_add_new_bus > > pci_alloc_child_bus > > pci_set_bus_msi_domain > > pci_host_bridge_msi_domain > > pci_host_bridge_of_msi_domain > > pci_host_bridge_acpi_msi_domain > > > > + When booting with DT: X-Gene MSI driver is initialized early > > (subsys_initcall level); X-Gene PCIe driver is initialized later at > > device_initcall level (using module_platform_driver). Because of that > > when PCIe enumeration happens, the X-Gene MSI domain was already > > registered and available. 
So when the driver of PCIe Endpoint devices > > gets loaded, the call to pci_enable_msix_exact (to allocate MSI) > > succeeds. > > > > + When booting with ACPI: acpi_scan_init is called at subsys_initcall > > level (the same level as the X-Gene MSI driver): > > acpi_init > > acpi_bus_init (GIC is initialized here) > > acpi_scan_init > > acpi_pci_root_init > > acpi_pci_link_init > > ... > > acpi_bus_scan > > acpi_bus_check_add > > ... > > > > Currently, in DSDT table, X-Gene MSI node is defined right after > > X-Gene PCIe nodes. So PCIe driver initialization and PCIe enumeration > > happens before MSI driver is initialized. During PCIe bus scan, the > > pci_set_bus_msi_domain gets a NULL irq_domain value from > > pci_host_bridge_msi_domain (as MSI driver is not initialized), so > > there is no MSI support for PCIe bus. > > > > + Using pci_msi_create_default_irq_domain() helps MSI allocation > > succeed as pci_msi_get_domain (in the code path of > > pci_enable_msix_exact above) can fall back to get a valid domain from > > arch_get_pci_msi_domain (please note that PCIe endpoint driver gets > > called later (at device_initcall level (using module_init)), so at the > > time PCIe endpoint driver probe function executes, the MSI driver > > has already registered the global MSI domain with > > pci_msi_create_default_irq_domain) > > > > + If changing DSDT node ordering to define MSI node before PCIe > > nodes, then X-Gene MSI driver will be loaded before PCIe host driver. > > At the time PCIe enumeration happens, X-Gene MSI driver already > > registered a valid MSI domain (by calling > > pci_msi_register_fwnode_provider to register the fwnode provider > > function), so all the detected PCIe buses (and PCIe endpoint devices) > > will have a valid MSI domain attached to them (as a result of function > > pci_set_bus_msi_domain) > > This still works out of sheer luck, given that relying on initcall > ordering at same init level is undefined (ie it works if your msi driver > registration call is executed before acpi_init() is called). > > I thought about using a scan handler like: > > See arch/ia64/hp/common/sba_iommu.c acpi_sba_ioc_handler > > but that won't solve your problem either since, given that PNP0A03 (ie > PCI host bridge) is handled through a scan handler too, IIUC you will > still have issues related to namespace device nodes ordering. > > I suspect we will have to add a hook walking the ACPI namespace before > acpi_bus_scan() is called in acpi_scan_init() (aka MSI controllers must > be probed before any other device is, that's the gist), I CC'ed Rafael > to see if there is any other suitable way to sort out this issue. Thanks for your input, Lorenzo. Hi Rafael, Do you have other suggestions? Otherwise, I will prepare a patch following Lorenzo's approach. Regards, Duc Dang. > > Thanks, > Lorenzo -- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
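To make the lookup described in the mail above concrete, here is a minimal, self-contained sketch of the ACPI-side MSI-domain resolution. The names sketch_register_fwnode_provider() and sketch_pci_bus_msi_domain() are illustrative stand-ins, not upstream symbols, for the pci_msi_register_fwnode_provider() / pci_host_bridge_acpi_msi_domain() pair; only irq_find_matching_fwnode() and DOMAIN_BUS_PCI_MSI are real kernel interfaces. It shows why a bus ends up without an MSI domain when PCIe enumeration runs before the X-Gene MSI driver has probed:

#include <linux/irqdomain.h>
#include <linux/pci.h>

/* Provider callback; filled in when the MSI driver probes, e.g. from
 * xgene_allocate_domains() via pci_msi_register_fwnode_provider(). */
static struct fwnode_handle *(*msi_fwnode_provider)(struct device *dev);

static void sketch_register_fwnode_provider(struct fwnode_handle *(*fn)(struct device *))
{
	msi_fwnode_provider = fn;
}

/* Called (conceptually) from pci_set_bus_msi_domain() during bus scan. */
static struct irq_domain *sketch_pci_bus_msi_domain(struct pci_bus *bus)
{
	struct fwnode_handle *fwnode;

	/* NULL while the X-Gene MSI driver has not probed yet:
	 * this is the failing ACPI ordering described above. */
	if (!msi_fwnode_provider)
		return NULL;

	fwnode = msi_fwnode_provider(&bus->dev);
	if (!fwnode)
		return NULL;

	/* Also NULL if pci_msi_create_irq_domain() has not run yet. */
	return irq_find_matching_fwnode(fwnode, DOMAIN_BUS_PCI_MSI);
}

Under DT boot the bus scan resolves the domain from the host bridge's msi-parent phandle instead, which is why only the ACPI path needs the fwnode provider hook and why only the ACPI path is sensitive to probe ordering.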
On 06/03/2016 06:15 PM, Duc Dang wrote: > Do you have other suggestions? Otherwise, I will prepare a patch > following Lorenzo's approach. Duc has since left Applied for other pastures. I miss him, he's a great guy. He laid all the right groundwork for this, but the ACPI binding still needs to be upstreamed. It's a few lines of code matching on APMC0D0E, but without it, upstream kernels won't have working MSI on X-Gene with ACPI. I need this to be upstreamed soon please :) Can someone at APM follow up with an updated patch, and get it in? Here's the rub. The average person booting a Linux box (even a good kernel person) isn't going to say "hey, MSIs aren't set up right on this ARM server because it's compliant with 1 out of 3 possible ways MSIs might be done at a high level [let's forget the many others] and all it needs is this...". What they're going to say is "huh, PCIe card doesn't work, might be an MSI problem". Which is the email I have after someone tried using an IB card in an X-Gene box and spent a few hours poking. We're so close to having "ACPI all the things", but the latest development builds of RHEL don't do MSI on X-Gene because of the Red Hat "upstream first" rules. So let's get that fixed. Thanks, Jon.
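For reference, a rough sketch of the direction Lorenzo suggested earlier in the thread (probing MSI controllers before acpi_bus_scan()). This is purely illustrative and not an upstream interface; it assumes only the existing ACPICA helper acpi_get_devices() and the APMC0D0E _HID from the patch in this thread:

#include <linux/acpi.h>
#include <linux/init.h>

/* Callback run for every namespace device whose _HID matches. A real
 * hook would create and probe the corresponding platform device here
 * (e.g. via acpi_create_platform_device()) so the MSI irqdomain exists
 * before any PCI host bridge is enumerated. */
static acpi_status msi_ctrl_found(acpi_handle handle, u32 level,
				  void *context, void **return_value)
{
	return AE_OK;
}

/* Would be called from acpi_scan_init() just before acpi_bus_scan(). */
static void __init acpi_probe_msi_controllers(void)
{
	acpi_get_devices("APMC0D0E", msi_ctrl_found, NULL, NULL);
}

Matching on the _HID rather than relying on DSDT node order keeps the dependency explicit, which is the point Lorenzo makes against reordering the tables.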
diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c
index a6456b5..466aa93 100644
--- a/drivers/pci/host/pci-xgene-msi.c
+++ b/drivers/pci/host/pci-xgene-msi.c
@@ -24,6 +24,7 @@
 #include <linux/pci.h>
 #include <linux/platform_device.h>
 #include <linux/of_pci.h>
+#include <linux/acpi.h>
 
 #define MSI_IR0			0x000000
 #define MSI_INT0		0x800000
@@ -39,7 +40,7 @@ struct xgene_msi_group {
 };
 
 struct xgene_msi {
-	struct device_node	*node;
+	struct fwnode_handle	*fwnode;
 	struct irq_domain	*inner_domain;
 	struct irq_domain	*msi_domain;
 	u64			msi_addr;
@@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = {
 	.free  = xgene_irq_domain_free,
 };
 
+#ifdef CONFIG_ACPI
+static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev)
+{
+	return xgene_msi_ctrl.fwnode;
+}
+#endif
+
 static int xgene_allocate_domains(struct xgene_msi *msi)
 {
 	msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC,
@@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi)
 	if (!msi->inner_domain)
 		return -ENOMEM;
 
-	msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node),
+	msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode,
 						    &xgene_msi_domain_info,
 						    msi->inner_domain);
 
@@ -265,6 +273,9 @@ static int xgene_allocate_domains(struct xgene_msi *msi)
 		return -ENOMEM;
 	}
 
+#ifdef CONFIG_ACPI
+	pci_msi_register_fwnode_provider(&xgene_msi_get_fwnode);
+#endif
 	return 0;
 }
 
@@ -473,6 +484,13 @@ static const struct of_device_id xgene_msi_match_table[] = {
 	{},
 };
 
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id xgene_msi_acpi_ids[] = {
+	{"APMC0D0E", 0},
+	{ },
+};
+#endif
+
 static int xgene_msi_probe(struct platform_device *pdev)
 {
 	struct resource *res;
@@ -494,7 +512,17 @@ static int xgene_msi_probe(struct platform_device *pdev)
 		goto error;
 	}
 	xgene_msi->msi_addr = res->start;
-	xgene_msi->node = pdev->dev.of_node;
+
+	xgene_msi->fwnode = of_node_to_fwnode(pdev->dev.of_node);
+	if (!xgene_msi->fwnode) {
+		xgene_msi->fwnode = irq_domain_alloc_fwnode(NULL);
+		if (!xgene_msi->fwnode) {
+			dev_err(&pdev->dev, "Failed to create fwnode\n");
+			rc = ENOMEM;
+			goto error;
+		}
+	}
+
 	xgene_msi->num_cpus = num_possible_cpus();
 
 	rc = xgene_msi_init_allocator(xgene_msi);
@@ -571,6 +599,7 @@ static struct platform_driver xgene_msi_driver = {
 	.driver = {
 		.name = "xgene-msi",
 		.of_match_table = xgene_msi_match_table,
+		.acpi_match_table = ACPI_PTR(xgene_msi_acpi_ids),
	},
 	.probe = xgene_msi_probe,
 	.remove = xgene_msi_remove,
This patch makes pci-xgene-msi driver ACPI-aware and provides
MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode.

Signed-off-by: Duc Dang <dhdang@apm.com>
---
 drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)