Message ID: 20250307103128.3287497-1-s-vadapalli@ti.com (mailing list archive)
State: New
On 25-03-07 16:01:24, Siddharth Vadapalli wrote:
> Hello,
>
> This series enables support to build the PCIe Cadence Controller drivers
> and the PCI J721E Application/Wrapper/Glue driver as Loadable Kernel
> Modules. The motivation for this series is that PCIe is not a necessity
> for booting the SoC, due to which it doesn't have to be a built-in
> module. Additionally, the defconfig doesn't enable the PCIe Cadence
> Controller drivers and the PCI J721E driver, due to which PCIe is not
> supported by default. Enabling the configs as of now (i.e. without this
> series) will result in built-in drivers i.e. a bloated Linux Image for
> everyone who doesn't have the PCIe Controller.

If the user doesn't enable the PCIe controller device through DTS/ACPI,
that doesn't matter.

> @@ -209,6 +209,12 @@ CONFIG_NFC=m
>  CONFIG_NFC_NCI=m
>  CONFIG_NFC_S3FWRN5_I2C=m
>  CONFIG_PCI=y
> +CONFIG_PCI_J721E=m
> +CONFIG_PCI_J721E_HOST=m
> +CONFIG_PCI_J721E_EP=m
> +CONFIG_PCIE_CADENCE=m
> +CONFIG_PCIE_CADENCE_HOST=m
> +CONFIG_PCIE_CADENCE_EP=m

The common Cadence configuration will be selected if the glue layer's
configuration is selected, according to Kconfig.

Please do not set the common configuration as a module; some users may
need it built-in, like DWC's. Consider the situation where the rootfs
is on NVMe.

Peter

>  CONFIG_PCIEPORTBUS=y
>  CONFIG_PCIEAER=y
>  CONFIG_PCI_IOV=y
>
> Regards,
> Siddharth.
>
> Kishon Vijay Abraham I (1):
>   PCI: cadence: Add support to build pcie-cadence library as a kernel
>     module
>
> Siddharth Vadapalli (3):
>   PCI: cadence-host: Introduce cdns_pcie_host_disable helper for cleanup
>   PCI: cadence-ep: Introduce cdns_pcie_ep_disable helper for cleanup
>   PCI: j721e: Add support to build as a loadable module
>
>  drivers/pci/controller/cadence/Kconfig        |  12 +-
>  drivers/pci/controller/cadence/pci-j721e.c    |  37 +++++-
>  .../pci/controller/cadence/pcie-cadence-ep.c  |  16 +++
>  .../controller/cadence/pcie-cadence-host.c    | 113 ++++++++++++++++++
>  drivers/pci/controller/cadence/pcie-cadence.c |  12 ++
>  drivers/pci/controller/cadence/pcie-cadence.h |  14 ++-
>  6 files changed, 194 insertions(+), 10 deletions(-)
>
> --
> 2.34.1
On Wed, Mar 19, 2025 at 02:09:03PM +0800, Peter Chen wrote:

Hello Peter,

> On 25-03-07 16:01:24, Siddharth Vadapalli wrote:
> > This series enables support to build the PCIe Cadence Controller drivers
> > and the PCI J721E Application/Wrapper/Glue driver as Loadable Kernel
> > Modules. [...]
>
> If the user doesn't enable the PCIe controller device through DTS/ACPI,
> that doesn't matter.

The Linux Image for arm64 systems built using:
arch/arm64/configs/defconfig
will not have support for the Cadence PCIe Controller and the PCIe J721e
driver, because these configs aren't enabled.

> > @@ -209,6 +209,12 @@ CONFIG_NFC=m
> > [...]
> > +CONFIG_PCIE_CADENCE_EP=m
>
> The common Cadence configuration will be selected if the glue layer's
> configuration is selected, according to Kconfig.
>
> Please do not set the common configuration as a module; some users may
> need it built-in, like DWC's. Consider the situation where the rootfs
> is on NVMe.

The common configuration at the moment is "DISABLED", i.e. no support for
the Cadence Controller at all. Which "user" are you referring to? This
series was introduced since having the drivers built-in was pushed back at:

https://lore.kernel.org/linux-arm-kernel/20250122145822.4ewsmkk6ztbeejzf@slashing/

Regards,
Siddharth.
On 25-03-19 14:25:34, Siddharth Vadapalli wrote:

[...]

> > Please do not set the common configuration as a module; some users may
> > need it built-in, like DWC's. Consider the situation where the rootfs
> > is on NVMe.
>
> The common configuration at the moment is "DISABLED", i.e. no support for
> the Cadence Controller at all. Which "user" are you referring to? This
> series was introduced since having the drivers built-in was pushed back at:

We are using the Cadence controller and are preparing to upstream the
radxa-o6 board, whose rootfs is on a PCIe NVMe drive.

You could build the driver as a module for the TI glue layer, but please
don't force other vendors to use a module as well; see DWC as an example.
On Wed, Mar 19, 2025 at 05:31:01PM +0800, Peter Chen wrote:
> On 25-03-19 14:25:34, Siddharth Vadapalli wrote:
>
> [...]
>
> > The common configuration at the moment is "DISABLED", i.e. no support for
> > the Cadence Controller at all. Which "user" are you referring to? This
> > series was introduced since having the drivers built-in was pushed back at:
>
> We are using the Cadence controller and are preparing to upstream the
> radxa-o6 board, whose rootfs is on a PCIe NVMe drive.

It doesn't matter. The only criterion, AFAIK, for building a driver as
built-in in defconfig is that it should be a dependency for the console.
For some time, storage was also a dependency, but PCIe certainly is not.

Moreover, CONFIG_BLK_DEV_NVME is built as a module in the arm64 defconfig,
so it doesn't matter whether you build the PCIe controller driver as
built-in or not: you still need to load the NVMe driver somehow.

So please use an initramfs.

> You could build the driver as a module for the TI glue layer, but please
> don't force other vendors to use a module as well; see DWC as an example.

DWC is a bad example here. The only reason the DWC drivers are not loadable
is the in-built MSI controller implementation as an irqchip. People tend to
build irqchip drivers as always built-in because of some known issues. Even
then, some driver developers prefer to build them as loadable modules but
suppress unbind to avoid rmmod'ing the module.

- Mani
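For reference, below is a minimal sketch of the pattern Mani describes: a
platform driver built as a loadable module whose sysfs bind/unbind
attributes are suppressed so the bound device cannot be unbound at runtime.
The driver name, compatible string and probe body are hypothetical; this is
a generic illustration, not the actual DWC or Cadence code.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical glue driver showing the "loadable module, unbind suppressed" pattern. */
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int foo_pcie_probe(struct platform_device *pdev)
{
	/* Controller setup would go here. */
	return 0;
}

static const struct of_device_id foo_pcie_of_match[] = {
	{ .compatible = "vendor,foo-pcie" },	/* hypothetical compatible */
	{ }
};
MODULE_DEVICE_TABLE(of, foo_pcie_of_match);

static struct platform_driver foo_pcie_driver = {
	.probe = foo_pcie_probe,
	.driver = {
		.name = "foo-pcie",
		.of_match_table = foo_pcie_of_match,
		/*
		 * Hide the sysfs bind/unbind attributes so the device cannot
		 * be unbound at runtime, even though the code itself is built
		 * as a loadable module.
		 */
		.suppress_bind_attrs = true,
	},
};
module_platform_driver(foo_pcie_driver);

MODULE_DESCRIPTION("Hypothetical PCIe glue driver sketch");
MODULE_LICENSE("GPL");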
On 2025/3/19 17:55, manivannan.sadhasivam@linaro.org wrote:

[...]

> DWC is a bad example here. The only reason the DWC drivers are not loadable
> is the in-built MSI controller implementation as an irqchip. People tend to
> build irqchip drivers as always built-in because of some known issues. Even
> then, some driver developers prefer to build them as loadable modules but
> suppress unbind to avoid rmmod'ing the module.

Hi Mani,

I think the MSI block provided with the Synopsys PCIe controller IP is not
a standard design. It probably exists for CPUs that do not implement an ITS
(LPI interrupts), RISC-V SoCs, and so on. An MSI/MSI-X interrupt is, by
definition, simply a posted memory write.

There are also SoC vendors that do not use the built-in MSI block. Instead,
MSI/MSI-X writes are delivered directly to the GIC's ITS via the GICv3/v4
interface. RK3588, for example, does not use the PCIe controller's built-in
MSI block. Some Qualcomm platforms also modify the controller's built-in MSI
blocks so that each of them is connected to 32 SPI interrupts at the GIC; I
was under the impression that the SDM845 was designed that way. The only
explanation I can see is that SPI interrupts are faster than LPI interrupts,
since no table lookups are needed.

So can the DWC driver also be compiled as a module (.ko)?

Best regards,
Hans
On 2025/3/20 10:14, hans.zhang wrote:

[...]

> There are also SoC vendors that do not use the built-in MSI block. [...]
>
> So can the DWC driver also be compiled as a module (.ko)?

An additional reason: often in SoC design the RTL designer does not fully
understand the DWC driver's behavior and introduces RTL bugs that the
software has to work around. As a result, the built-in DWC driver cannot be
used as-is and needs to be modified, which would be easier if the DWC driver
could be compiled as a module.

Best regards,
Hans
On Thu, Mar 20, 2025 at 10:14:02AM +0800, hans.zhang wrote:

[...]

> Hi Mani,
>
> I think the MSI block provided with the Synopsys PCIe controller IP is not
> a standard design. It probably exists for CPUs that do not implement an ITS
> (LPI interrupts), RISC-V SoCs, and so on. An MSI/MSI-X interrupt is, by
> definition, simply a posted memory write.

Yeah, the DWC MSI controller is not a great design. The older ones are even
more horrible (using SPI interrupts for reporting AER etc.).

> There are also SoC vendors that do not use the built-in MSI block. Instead,
> MSI/MSI-X writes are delivered directly to the GIC's ITS via the GICv3/v4
> interface. RK3588, for example, does not use the PCIe controller's built-in
> MSI block. Some Qualcomm platforms also modify the controller's built-in MSI
> blocks so that each of them is connected to 32 SPI interrupts at the GIC; I
> was under the impression that the SDM845 was designed that way. The only
> explanation I can see is that SPI interrupts are faster than LPI interrupts,
> since no table lookups are needed.

If an ITS is available, platforms should make use of it. There is no way the
DWC MSI controller is superior to the ITS. We are slowly migrating the Qcom
platforms to use the ITS.

And btw, the Qcom DWC MSI controller raises interrupts for AER/PME sent by
the downstream components, so enabling the ITS is uncovering AER errors
which were already present :)

> So can the DWC driver also be compiled as a module (.ko)?

Only if the MSI support is made a build-time option and there is a guarantee
that the platform will never use it (which is difficult, since the driver can
only detect this at runtime, based on the devicetree).

- Mani
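To illustrate the build-time vs. run-time mismatch Mani points out, here is a
small, hypothetical sketch: the internal-MSI path is gated behind a made-up
CONFIG_PCIE_FOO_INTERNAL_MSI symbol, while whether that path is needed only
becomes known at runtime from the devicetree. All function and symbol names
are invented for illustration, and the DT property check is only indicative.

// SPDX-License-Identifier: GPL-2.0
/*
 * Sketch only: a compile-time gate cannot guarantee the internal MSI irqchip
 * is never needed, because the choice comes from the DT at runtime.
 */
#include <linux/errno.h>
#include <linux/kconfig.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int foo_pcie_init_internal_msi(struct platform_device *pdev)
{
	/* Internal MSI irqchip setup would go here. */
	return 0;
}

/* Called from the glue driver's probe path in this sketch. */
static int foo_pcie_setup_msi(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;

	/* Runtime decision (illustrative): does the DT use the internal MSI block? */
	if (!of_property_read_bool(np, "msi-controller"))
		return 0;	/* External MSI parent (e.g. GIC ITS); nothing to do. */

	/*
	 * If the internal MSI code was compiled out, a DT that still asks
	 * for it cannot be supported.
	 */
	if (!IS_ENABLED(CONFIG_PCIE_FOO_INTERNAL_MSI))
		return -ENODEV;

	return foo_pcie_init_internal_msi(pdev);
}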
On 2025/3/25 23:26, manivannan.sadhasivam@linaro.org wrote:

[...]

> Yeah, the DWC MSI controller is not a great design. The older ones are even
> more horrible (using SPI interrupts for reporting AER etc.).

Hi Mani,

Currently both Synopsys and Cadence provide SPI interrupts for reporting AER
etc. The IP vendor only provides an alternative approach; it is really up to
the SoC design companies to design according to the PCIe spec and to match
the Linux software behavior.

I have a workaround for AER being reported as an SPI interrupt that can still
use the aer.c driver. However, I have been afraid to submit the patch,
because this is a problem created by the SoC designers themselves and it does
not fit the Linux port-service driver (aer.c), so it would certainly not be
accepted.

> If an ITS is available, platforms should make use of it. There is no way the
> DWC MSI controller is superior to the ITS. We are slowly migrating the Qcom
> platforms to use the ITS.

I agree with you.

> > So can the DWC driver also be compiled as a module (.ko)?
>
> Only if the MSI support is made a build-time option and there is a guarantee
> that the platform will never use it (which is difficult, since the driver can
> only detect this at runtime, based on the devicetree).

Anyway, I would still like to request that the Cadence PCIe controller driver
not be module-only. Cadence also has a lot of customers; we are one of them.
It's just that many customers are not upstream yet, and we are about to go
upstream.

This series was introduced since having the drivers built-in was pushed back
at:
https://lore.kernel.org/linux-arm-kernel/20250122145822.4ewsmkk6ztbeejzf@slashing/

Hans: The Cadence PCIe Root Port driver should not be forced into module mode
just because of TI's preference; we should consider the needs of other
customers as well. If it has to be module mode, then by the same logic all
peripheral drivers should be modules too. Maybe I'm being direct, but that's
probably the case.

Many thanks, Mani, for replying to me.

Best regards,
Hans
On Wed, Mar 26, 2025 at 12:03:01AM +0800, Hans Zhang wrote:

[...]

> Anyway, I would still like to request that the Cadence PCIe controller driver
> not be module-only. Cadence also has a lot of customers; we are one of them.
> It's just that many customers are not upstream yet, and we are about to go
> upstream.
>
> This series was introduced since having the drivers built-in was pushed back
> at:
> https://lore.kernel.org/linux-arm-kernel/20250122145822.4ewsmkk6ztbeejzf@slashing/
>
> Hans: The Cadence PCIe Root Port driver should not be forced into module mode
> just because of TI's preference; we should consider the needs of other
> customers as well. If it has to be module mode, then by the same logic all
> peripheral drivers should be modules too. Maybe I'm being direct, but that's
> probably the case.

It is not about one company's idea to make the driver a module. The Linux
kernel community in general strongly encourages developers to build drivers
as modules unless there are exceptions (which I have already quoted). If you
keep building drivers as built-in, it results in a bloated kernel image. For
sure, vendors would want *their* drivers of interest to be built-in so that
they do not need to package them in the initramfs/rootfs, but that is not a
practice encouraged by the Linux community.

So I'm in favor of making the PCIe controller drivers modules if there are no
technical issues.

- Mani
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 3a3706db2982..0ca073141c3e 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -209,6 +209,12 @@ CONFIG_NFC=m
 CONFIG_NFC_NCI=m
 CONFIG_NFC_S3FWRN5_I2C=m
 CONFIG_PCI=y
+CONFIG_PCI_J721E=m
+CONFIG_PCI_J721E_HOST=m
+CONFIG_PCI_J721E_EP=m
+CONFIG_PCIE_CADENCE=m
+CONFIG_PCIE_CADENCE_HOST=m
+CONFIG_PCIE_CADENCE_EP=m
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCIEAER=y
 CONFIG_PCI_IOV=y