Message ID: 20240418092305.2337429-7-quic_varada@quicinc.com (mailing list archive)
State: Superseded
Series: Add interconnect driver for IPQ9574 SoC
On 4/18/24 11:23, Varadarajan Narayanan wrote:
> IPQ SoCs don't involve RPM in managing NoC-related clocks and
> there is no NoC scaling. Linux itself handles these clocks.
> However, these should not be exposed as just clocks; they should
> align with other Qualcomm SoCs that handle these clocks from an
> interconnect provider.
>
> Hence include icc provider capability in the gcc node so that
> peripherals can use the interconnect facility to enable these
> clocks.
>
> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> Signed-off-by: Varadarajan Narayanan <quic_varada@quicinc.com>
> ---

If this is all you do to enable interconnect (which is not the case,
as this patch only satisfies the bindings checker; the meaningful
change happens in the previous patch) and nothing explodes, this is
an apparent sign of your driver doing nothing.

The expected reaction to "enabling interconnect" without defining the
required paths for your hardware would be a crash-on-sync_state, as all
unused (from Linux's POV) resources ought to be shut down.

Because you lack sync_state, the interconnects silently retain the state
that they were left in (which is not deterministic), and that's precisely
what we want to avoid.

Konrad
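For context, a provider opts into this shutdown behavior by pointing its
driver's sync_state hook at the generic icc_sync_state() helper. A minimal
sketch, assuming a hypothetical "example,icc-provider" driver rather than
the actual ipq9574 gcc code:

	/*
	 * Minimal sketch (not the actual ipq9574 gcc driver): wiring the
	 * generic icc_sync_state() callback into a platform driver so that,
	 * once every consumer has probed, the framework can zero out paths
	 * nobody requested. Driver and compatible names are hypothetical.
	 */
	#include <linux/interconnect-provider.h>
	#include <linux/mod_devicetable.h>
	#include <linux/module.h>
	#include <linux/platform_device.h>

	static int example_icc_probe(struct platform_device *pdev)
	{
		/*
		 * icc_provider and icc_node setup would go here;
		 * omitted for brevity.
		 */
		return 0;
	}

	static const struct of_device_id example_icc_of_match[] = {
		{ .compatible = "example,icc-provider" },	/* hypothetical */
		{ }
	};
	MODULE_DEVICE_TABLE(of, example_icc_of_match);

	static struct platform_driver example_icc_driver = {
		.probe = example_icc_probe,
		.driver = {
			.name = "example-icc",
			.of_match_table = example_icc_of_match,
			/*
			 * Without this, unused paths keep whatever state
			 * the boot loader left them in.
			 */
			.sync_state = icc_sync_state,
		},
	};
	module_platform_driver(example_icc_driver);
	MODULE_LICENSE("GPL");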
On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> On 4/18/24 11:23, Varadarajan Narayanan wrote:
> [...]
>
> If this is all you do to enable interconnect (which is not the case,
> as this patch only satisfies the bindings checker; the meaningful
> change happens in the previous patch) and nothing explodes, this is
> an apparent sign of your driver doing nothing.

It appears to do nothing because we are just enabling the clock
provider to also act as an interconnect provider. Only when the
consumers are enabled with interconnect usage will this create
paths and turn on the relevant NoC clocks.

This interconnect will be used by the PCIe and NSS blocks. When
those patches were posted earlier, they were put on hold until an
interconnect driver was available.

Once this patch gets in, PCIe for example will make use of icc.
Please refer to
https://lore.kernel.org/linux-arm-msm/20230519090219.15925-5-quic_devipriy@quicinc.com/.

The 'pcieX' nodes will include the following entries:

	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
	interconnect-names = "pcie-mem", "cpu-pcie";

> The expected reaction to "enabling interconnect" without defining the
> required paths for your hardware would be a crash-on-sync_state, as all
> unused (from Linux's POV) resources ought to be shut down.
>
> Because you lack sync_state, the interconnects silently retain the state
> that they were left in (which is not deterministic), and that's precisely
> what we want to avoid.

I tried setting 'sync_state' to icc_sync_state so that it would be
invoked, and didn't see any crash.

Thanks
Varada
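The consumer side implied by the DT snippet above is small. A minimal
sketch of such a consumer, assuming a hypothetical function name and
placeholder bandwidth values (the real qcom_pcie_icc_init() values are
not shown in this thread):

	/*
	 * Sketch of an interconnect consumer matching the DT snippet
	 * above: look up a named path and vote bandwidth on it. The
	 * function name and bandwidth values are placeholders, not
	 * taken from the real PCIe driver.
	 */
	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/interconnect.h>

	static int example_pcie_icc_init(struct device *dev)
	{
		struct icc_path *path;
		int ret;

		/* "pcie-mem" matches interconnect-names in the DT node above. */
		path = of_icc_get(dev, "pcie-mem");
		if (IS_ERR(path))
			return PTR_ERR(path);

		/*
		 * Vote a non-zero bandwidth. For a clock-only provider
		 * such as this one, any non-zero vote effectively keeps
		 * the NoC clocks enabled.
		 */
		ret = icc_set_bw(path, MBps_to_icc(250), MBps_to_icc(250));
		if (ret)
			icc_put(path);

		return ret;
	}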
On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> On Tue, Apr 23, 2024 at 02:58:41PM +0200, Konrad Dybcio wrote:
> [...]
>> If this is all you do to enable interconnect (which is not the case,
>> as this patch only satisfies the bindings checker; the meaningful
>> change happens in the previous patch) and nothing explodes, this is
>> an apparent sign of your driver doing nothing.
>
> It appears to do nothing because we are just enabling the clock
> provider to also act as an interconnect provider. Only when the
> consumers are enabled with interconnect usage will this create
> paths and turn on the relevant NoC clocks.

No, with sync_state it actually does "something" (sets the interconnect
path bandwidths to zero). And *this* patch does nothing functionally;
it only makes the dt checker happy.

> This interconnect will be used by the PCIe and NSS blocks. When
> those patches were posted earlier, they were put on hold until an
> interconnect driver was available.
>
> [...]
>
> The 'pcieX' nodes will include the following entries:
>
>	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
>			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>	interconnect-names = "pcie-mem", "cpu-pcie";

Okay. What about USB that's already enabled? And BIMC/MEMNOC?

> [...]
>
> I tried setting 'sync_state' to icc_sync_state so that it would be
> invoked, and didn't see any crash.

Have you confirmed that the registers are actually written to, and with
correct values?

Konrad
On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> On 25.04.2024 12:26 PM, Varadarajan Narayanan wrote:
> [...]
>
> No, with sync_state it actually does "something" (sets the interconnect
> path bandwidths to zero). And *this* patch does nothing functionally;
> it only makes the dt checker happy.

I understand.

> [...]
>
> Okay. What about USB that's already enabled? And BIMC/MEMNOC?

For USB, the GCC_ANOC_USB_AXI_CLK is enabled as part of the iface
clock. Hence, an interconnect is not specified there.

MEMNOC to System NOC interfaces seem to be enabled automatically.
Software doesn't have to turn on or program specific clocks.

> [...]
>
> Have you confirmed that the registers are actually written to, and with
> correct values?

I tried the following combinations:

1. Top of tree linux-next + this patch set

   * icc_sync_state called
   * No crash or hang observed
   * From /sys/kernel/debug/clk/clk_summary I can see that the
     relevant clocks are set to the expected rates (compared
     with the downstream kernel)

2. Top of tree linux-next + this patch set + PCIe enablement

   * icc_sync_state NOT called
   * No crash or hang observed
   * From /sys/kernel/debug/clk/clk_summary I can see that the
     relevant clocks are set to the expected rates (compared
     with the downstream kernel)

Does this answer your question? Please let me know if you were
looking for some other information.

Thanks
Varada
Hi Varada,

Thank you for your work on this!

On 2.05.24 12:30, Varadarajan Narayanan wrote:
> On Tue, Apr 30, 2024 at 12:05:29PM +0200, Konrad Dybcio wrote:
> [...]
>
> I tried the following combinations:
>
> 1. Top of tree linux-next + this patch set
>
>    * icc_sync_state called
>    * No crash or hang observed
>    * From /sys/kernel/debug/clk/clk_summary I can see that the
>      relevant clocks are set to the expected rates (compared
>      with the downstream kernel)
>
> 2. Top of tree linux-next + this patch set + PCIe enablement
>
>    * icc_sync_state NOT called

If sync_state() is not being called, that usually means that there
are interconnect consumers that haven't probed successfully (PCIe?),
or their dependencies. That can be checked in
/sys/class/devlink/.../status, but I am not sure how this works for
PCI devices.

You can also manually force a call to sync_state by writing "1" to
the interconnect provider's /sys/devices/.../state_synced.

Anyway, the question is whether PCIe and NSS work without this driver.
If they work, is this because the clocks are turned on by default
or by the boot loader?

Then if an interconnect path (clock) gets disabled, either when we
reach sync_state (with no bandwidth requests) or when we explicitly
call icc_set_bw() with 0 bandwidth values, I would expect that
these PCIe and NSS devices would not function anymore (it might
save some power etc), and if this is unexpected we should see a
crash or hang...

Can you confirm this?

Thanks,
Georgi
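The zero-bandwidth experiment suggested above could look like the sketch
below; 'path' is assumed to come from an earlier of_icc_get() call, as in
the consumer sketch earlier in the thread, and the function name is
hypothetical:

	/*
	 * Sketch of the suggested experiment: explicitly vote zero
	 * bandwidth on an already-acquired path and check whether the
	 * dependent device (PCIe/NSS) stops working.
	 */
	#include <linux/interconnect.h>

	static int example_drop_icc_vote(struct icc_path *path)
	{
		/*
		 * A 0/0 vote lets the framework aggregate the requests
		 * down to zero, which for a clock-backed provider should
		 * gate the corresponding NoC clock, i.e. the same end
		 * state sync_state produces for paths with no consumers.
		 */
		return icc_set_bw(path, 0, 0);
	}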
On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> Hi Varada,
>
> Thank you for your work on this!
>
> On 2.05.24 12:30, Varadarajan Narayanan wrote:
> [...]
>
> If sync_state() is not being called, that usually means that there
> are interconnect consumers that haven't probed successfully (PCIe?),
> or their dependencies. That can be checked in
> /sys/class/devlink/.../status, but I am not sure how this works for
> PCI devices.
>
> You can also manually force a call to sync_state by writing "1" to
> the interconnect provider's /sys/devices/.../state_synced.
>
> Anyway, the question is whether PCIe and NSS work without this driver.

No.

> If they work, is this because the clocks are turned on by default
> or by the boot loader?

Initially, the PCIe/NSS drivers enabled these clocks directly
by having them in their DT nodes. Based on community feedback
this was removed, and after that PCIe/NSS did not work.

> Then if an interconnect path (clock) gets disabled, either when we
> reach sync_state (with no bandwidth requests) or when we explicitly
> call icc_set_bw() with 0 bandwidth values, I would expect that
> these PCIe and NSS devices would not function anymore (it might
> save some power etc), and if this is unexpected we should see a
> crash or hang...
>
> Can you confirm this?

With ICC enabled, icc_set_bw() (with non-zero values) is called by
the PCIe and NSS drivers. I haven't checked icc_set_bw() with zero
values.

PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
NSS:  ppe_icc_init -> icc_set_bw

I believe sync_state is not getting called since there is a
non-zero set bandwidth request, which seems to be aligned with
your explanation.

Thanks
Varada
On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
<quic_varada@quicinc.com> wrote:
> On Fri, May 03, 2024 at 04:51:04PM +0300, Georgi Djakov wrote:
> [...]
>
> With ICC enabled, icc_set_bw() (with non-zero values) is called by
> the PCIe and NSS drivers. I haven't checked icc_set_bw() with zero
> values.
>
> PCIe: qcom_pcie_probe -> qcom_pcie_icc_init -> icc_set_bw
> NSS:  ppe_icc_init -> icc_set_bw
>
> I believe sync_state is not getting called since there is a
> non-zero set bandwidth request, which seems to be aligned with
> your explanation.

This doesn't look correct. sync_state is called once all
consumers are probed. It doesn't matter whether those consumers have
non-zero bandwidth requests or not.
On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> On Wed, 8 May 2024 at 09:53, Varadarajan Narayanan
> <quic_varada@quicinc.com> wrote:
> [...]
>>
>> I believe sync_state is not getting called since there is a
>> non-zero set bandwidth request, which seems to be aligned with
>> your explanation.
>
> This doesn't look correct. sync_state is called once all
> consumers are probed. It doesn't matter whether those consumers have
> non-zero bandwidth requests or not.

/sys/kernel/debug/devices_deferred may have some useful info, too

Konrad
On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> On 8.05.2024 10:10 AM, Dmitry Baryshkov wrote:
> [...]
>>
>> This doesn't look correct. sync_state is called once all
>> consumers are probed. It doesn't matter whether those consumers have
>> non-zero bandwidth requests or not.
>
> /sys/kernel/debug/devices_deferred may have some useful info, too

/sys/kernel/debug/devices_deferred seems to be empty:

	# mount | grep -w debugfs
	none on /sys/kernel/debug type debugfs (rw,relatime)

	# cat /sys/kernel/debug/devices_deferred | wc -l
	0

I added the following print to icc_sync_state:

	@@ -1096,6 +1096,7 @@ void icc_sync_state(struct device *dev)
	 	struct icc_node *n;
	 	static int count;

	+	printk("--> %s: %d %d\n", __func__, providers_count, count);
	 	count++;

	 	if (count < providers_count)
	 		return;

icc_sync_state seems to be called once:

	# dmesg | grep icc_sync_state
	[   12.260544] --> icc_sync_state: 2 0

Since 'providers_count' is greater than 'count', icc_sync_state
returns before doing anything.

Thanks
Varada
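For reference, the gating logic hit here can be paraphrased as in the
sketch below (a simplified rendering based on the hunk quoted above, not
a verbatim copy of drivers/interconnect/core.c): the framework counts the
interconnect providers once, and icc_sync_state() only acts after it has
been invoked for each of them.

	/*
	 * Simplified sketch: providers_count is the number of
	 * interconnect providers; each provider's sync_state callback
	 * bumps 'count', and nothing is disabled until every provider
	 * has reached sync_state. In the log above, providers_count is
	 * 2 (gcc and nsscc) but only one sync_state call ever arrives,
	 * so the function always returns early.
	 */
	struct device;

	static int providers_count;	/* set at init from the topology */

	void example_icc_sync_state(struct device *dev)
	{
		static int count;

		count++;
		if (count < providers_count)
			return;		/* some provider has not synced yet */

		/*
		 * Only past this point are the initial bandwidth floors
		 * dropped and unused paths shut down.
		 */
	}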
On 11.06.24 12:42, Varadarajan Narayanan wrote:
> On Thu, Jun 06, 2024 at 04:06:01PM +0200, Konrad Dybcio wrote:
> [...]
>>
>> /sys/kernel/debug/devices_deferred may have some useful info, too
>
> /sys/kernel/debug/devices_deferred seems to be empty:
>
> [...]
>
> icc_sync_state seems to be called once:
>
>	# dmesg | grep icc_sync_state
>	[   12.260544] --> icc_sync_state: 2 0
>
> Since 'providers_count' is greater than 'count', icc_sync_state
> returns before doing anything.

Is there also another interconnect provider on this platform, other
than the gcc? Check for DT nodes that have the #interconnect-cells
property. Are all providers probing successfully?

All providers must probe, as there might be paths that cross multiple
providers and we can't get into sync-state with a topology that is
only partially initialized.

Thanks,
Georgi
On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
> On 11.06.24 12:42, Varadarajan Narayanan wrote:
> [...]
>
> Is there also another interconnect provider on this platform, other
> than the gcc? Check for DT nodes that have the #interconnect-cells
> property.

Yes, there are two interconnect providers:

	# find /proc/device-tree/ -name '#interconnect-cells'
	/proc/device-tree/soc@0/clock-controller@1800000/#interconnect-cells
	/proc/device-tree/soc@0/clock-controller@39b00000/#interconnect-cells

	Note: gcc   => clock-controller@1800000
	      nsscc => clock-controller@39b00000

> Are all providers probing successfully?

Yes. I printed the return value of their probe functions:

	# dmesg | grep probe:
	[    0.037815] --> gcc_ipq9574_probe: return 0
	[    2.078215] --> nss_cc_ipq9574_probe: return 0

> All providers must probe, as there might be paths that cross multiple
> providers and we can't get into sync-state with a topology that is
> only partially initialized.

It does look like both providers' probes have completed. And there
aren't any paths that cross providers:

	interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
			<&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;

	interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>,
			<&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>;

	interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>,
			<&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>;

	interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
			<&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;

	interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
			<&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
			<&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
			<&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
			<&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
			<&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
			<&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;

Thanks
Varada
On 12.06.24 9:30, Varadarajan Narayanan wrote:
> On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
>> On 11.06.24 12:42, Varadarajan Narayanan wrote:
[..]
>>> Since 'providers_count' is greater than 'count' icc_sync_state
>>> seems to return before doing anything.
>>
>> Is there also another interconnect provider on this platform, other
>> than the gcc? Check for DT nodes that have the #interconnect-cells
>> property.
>
> Yes there are two interconnect providers
>
> # find /proc/device-tree/ -name '#interconnect-cells'
> /proc/device-tree/soc@0/clock-controller@1800000/#interconnect-cells
> /proc/device-tree/soc@0/clock-controller@39b00000/#interconnect-cells
>
> Note: gcc   => clock-controller@1800000
>       nsscc => clock-controller@39b00000
>
>> Are all providers probing successfully?
>
> Yes. I printed the return value of their probe functions...
>
> # dmesg | grep probe:
> [ 0.037815] --> gcc_ipq9574_probe: return 0
> [ 2.078215] --> nss_cc_ipq9574_probe: return 0
>
>> All providers must probe, as there might be paths that cross multiple
>> providers and we can't get into sync-state with a topology that is
>> only partially initialized.
>
> It does look like both the providers' probe has completed. And,
> there aren't any paths that cross providers
>
> interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
>                 <&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
>
> interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>,
>                 <&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>;
>
> interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>,
>                 <&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>;
>
> interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>,
>                 <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>;
>
> interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
>                 <&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
>                 <&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
>                 <&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
>                 <&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
>                 <&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
>                 <&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;

Are the above consumers also probing successfully? Especially the one with
the nsscc paths? Is nss_cc_ipq9574 also using icc_sync_state? Sync state
will be called when all consumers of the specific provider are probed.

The idea of sync state is to allow all consumers to probe and to request
their paths. Only after that, the framework will take into account the
bandwidth values that have been requested from consumers and disable unused
paths.

Sorry, but I am doing a bit of guessing here as I am missing the complete
picture. So you add interconnect-cells to nsscc, but what is this DT node
that requests the nss and gcc paths? I am failing to find these on the
mailing lists.

BR,
Georgi
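For context on what happens once the last provider does check in: icc_sync_state() drops the boot-time bandwidth floor on every node and calls back into each provider, which is what lets unused paths be gated. Condensed from icc_sync_state() in drivers/interconnect/core.c (locking details and newer-kernel refinements omitted):

void icc_sync_state(struct device *dev)
{
	struct icc_provider *p;
	struct icc_node *n;
	static int count;

	count++;
	if (count < providers_count)
		return;	/* wait until every provider has reached sync_state */

	mutex_lock(&icc_lock);
	synced_state = true;
	list_for_each_entry(p, &icc_providers, provider_list) {
		dev_dbg(p->dev, "interconnect provider is in synced state\n");
		list_for_each_entry(n, &p->nodes, node_list) {
			/* Drop the initial bandwidth floor and reapply */
			if (n->init_avg || n->init_peak) {
				n->init_avg = 0;
				n->init_peak = 0;
				aggregate_requests(n);
				p->set(n, n);
			}
		}
	}
	mutex_unlock(&icc_lock);
}

The p->set(n, n) at the end is the provider callback referenced in the reply below; for a clk-based provider it is what finally turns an unvoted NoC clock off.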
On Wed, Jun 12, 2024 at 11:48:17AM +0300, Georgi Djakov wrote:
> On 12.06.24 9:30, Varadarajan Narayanan wrote:
> > On Tue, Jun 11, 2024 at 02:29:48PM +0300, Georgi Djakov wrote:
[..]
> > It does look like both the providers' probe has completed.
And, > > there aren't any paths that cross providers > > > > interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>, > > <&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>; > > > > interconnects = <&gcc MASTER_ANOC_PCIE3 &gcc SLAVE_ANOC_PCIE3>, > > <&gcc MASTER_SNOC_PCIE3 &gcc SLAVE_SNOC_PCIE3>; > > > > interconnects = <&gcc MASTER_ANOC_PCIE2 &gcc SLAVE_ANOC_PCIE2>, > > <&gcc MASTER_SNOC_PCIE2 &gcc SLAVE_SNOC_PCIE2>; > > > > interconnects = <&gcc MASTER_ANOC_PCIE0 &gcc SLAVE_ANOC_PCIE0>, > > <&gcc MASTER_SNOC_PCIE0 &gcc SLAVE_SNOC_PCIE0>; > > > > interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>, > > <&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>, > > <&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>, > > <&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>, > > <&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>, > > <&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>, > > <&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>; > > Are the above consumers also probing successfully? Especially the one with > the nsscc paths? Is nss_cc_ipq9574 also using icc_sync_state? Sync state > will be called when all consumers of the specific provider are probed. nsscc_ipq9574 was not using icc_sync_state. After adding that, I can see the following messages printed from icc_sync_state. I also added a print to confirm if 'p->set(n, n);' is called. [ 12.260138] --> icc_sync_state: 2 2 ---> [ 12.260166] qcom,gcc-ipq9574 1800000.clock-controller: interconnect provider is in synced state [ 12.262429] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie0_1lane_m_clk_master) [ 12.271206] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie0_1lane_m_clk_slave) [ 12.281225] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie0_1lane_s_clk_master) [ 12.291118] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie0_1lane_s_clk_slave) [ 12.300902] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie1_1lane_m_clk_master) [ 12.310797] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie1_1lane_m_clk_slave) [ 12.320596] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie1_1lane_s_clk_master) [ 12.330494] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie1_1lane_s_clk_slave) [ 12.340299] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie2_2lane_m_clk_master) [ 12.350224] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie2_2lane_m_clk_slave) [ 12.360013] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie2_2lane_s_clk_master) [ 12.369904] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie2_2lane_s_clk_slave) [ 12.379709] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie3_2lane_m_clk_master) [ 12.389616] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_pcie3_2lane_m_clk_slave) [ 12.399415] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie3_2lane_s_clk_master) [ 12.409312] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_pcie3_2lane_s_clk_slave) [ 12.419119] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_usb_clk_master) [ 12.429017] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_snoc_usb_clk_slave) [ 12.437781] 
qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_usb_axi_clk_master) [ 12.446813] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_anoc_usb_axi_clk_slave) [ 12.456098] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_nsscc_clk_master) [ 12.465474] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_nsscc_clk_slave) [ 12.474767] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_clk_master) [ 12.484138] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_clk_slave) [ 12.493424] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_1_clk_master) [ 12.502713] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_snoc_1_clk_slave) [ 12.512261] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_pcnoc_1_clk_master) [ 12.521379] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_pcnoc_1_clk_slave) [ 12.531098] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_qosgen_ref_clk_master) [ 12.540651] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_qosgen_ref_clk_slave) [ 12.550456] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_timeout_ref_clk_master) [ 12.559922] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_timeout_ref_clk_slave) [ 12.569986] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_xo_dcd_clk_master) [ 12.579886] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_xo_dcd_clk_slave) [ 12.589344] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_atb_clk_master) [ 12.598466] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_atb_clk_slave) [ 12.607834] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_mem_noc_nssnoc_clk_master) [ 12.617039] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_mem_noc_nssnoc_clk_slave) [ 12.626497] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_memnoc_clk_master) [ 12.636049] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_memnoc_clk_slave) [ 12.645507] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_mem_noc_1_clk_master) [ 12.654668] qcom,gcc-ipq9574 1800000.clock-controller: Calling icc_clk_set(gcc_nssnoc_mem_noc_1_clk_slave) ---> [ 12.664354] qcom,nsscc-ipq9574 39b00000.clock-controller: interconnect provider is in synced state [ 12.674069] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_clk_master) [ 12.683012] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_clk_slave) [ 12.692646] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_cfg_clk_master) [ 12.702369] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_ppe_cfg_clk_slave) [ 12.712349] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_nss_csr_clk_master) [ 12.722431] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_nss_csr_clk_slave) [ 12.732404] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_qsb_clk_master) [ 12.742473] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_qsb_clk_slave) [ 12.752801] 
qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_ahb_clk_master)
[ 12.762611] qcom,nsscc-ipq9574 39b00000.clock-controller: Calling icc_clk_set(nss_cc_nssnoc_imem_ahb_clk_slave)

> The idea of sync state is to allow all consumers to probe and to request
> their paths. Only after that, the framework will take into account the
> bandwidth values that have been requested from consumers and disable unused
> paths.
>
> Sorry, but I am doing a bit of guessing here as I am missing the complete
> picture. So you add interconnect-cells to nsscc, but what is this DT node
> that requests the nss and gcc paths? I am failing to find these on the
> mailing lists.

The gcc based interconnect paths are referenced by PCIe controller
nodes. Please refer to this patch

[PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes
https://lore.kernel.org/linux-arm-msm/20240512082858.1806694-5-quic_devipriy@quicinc.com/

Sorry, did not post the nsscc related patches since this base ICC
patch hasn't reached closure. The nsscc patches are very similar
to this gcc based series. Wanted to gather the issues raised in
this and address them in nsscc so that it is in a more acceptable
shape.

Thanks
Varada
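For readers following the fix itself: "adding that" amounts to pointing the driver model's sync_state hook at the interconnect helper in the clock driver, mirroring what this series does for gcc. A sketch of the presumed nsscc change (the match-table symbol is illustrative; only the .sync_state line is the actual fix):

static struct platform_driver nss_cc_ipq9574_driver = {
	.probe = nss_cc_ipq9574_probe,
	.driver = {
		.name = "qcom,nsscc-ipq9574",
		.of_match_table = nss_cc_ipq9574_match_table,
		/*
		 * Have the driver core notify the interconnect framework
		 * once this provider and all of its consumers have probed.
		 */
		.sync_state = icc_sync_state,
	},
};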
On 12.06.24 13:28, Varadarajan Narayanan wrote:
> On Wed, Jun 12, 2024 at 11:48:17AM +0300, Georgi Djakov wrote:
[..]
> nsscc_ipq9574 was not using icc_sync_state. After adding that, I
> can see the following messages printed from icc_sync_state. I
> also added a print to confirm if 'p->set(n, n);' is called.

Ok, that's good! So now when all providers are using sync_state, we
can go back to the initial comment from Konrad. I think you should
re-check the tests that you did, as the current results just lead to
more questions than answers. Maybe it was just the sync-state that
was missing, or there is some other issue.

BR,
Georgi

[..]
> The gcc based interconnect paths are referenced by PCIe controller
> nodes. Please refer to this patch
>
> [PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes
> https://lore.kernel.org/linux-arm-msm/20240512082858.1806694-5-quic_devipriy@quicinc.com/
>
> Sorry, did not post the nsscc related patches since this base ICC
> patch hasn't reached closure.
The nsscc patches are very similar > to this gcc based series. Wanted to gather the issues raised in > this and address them in nsscc so that it is in a more acceptable > shape. > > Thanks > Varada
On Wed, Jun 12, 2024 at 03:52:51PM +0300, Georgi Djakov wrote:
> On 12.06.24 13:28, Varadarajan Narayanan wrote:
[..]
> > nsscc_ipq9574 was not using icc_sync_state. After adding that, I
> > can see the following messages printed from icc_sync_state. I
> > also added a print to confirm if 'p->set(n, n);' is called.
>
> Ok, that's good!
So now when all providers are using sync_state, we > can go back to the initial comment from Konrad. I think you should > re-check the tests that you did, as the current results just lead to > more questions than answers. Maybe it was just the sync-state that > was missing, or there is some other issue. Georgi, Thanks very much for the clarifications. Will re-test the patches and update the thread. -Varada > [..] > > > > The gcc based interconnect paths are referenced by PCIe controller > > nodes. Please refer to this patch > > > > [PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes > > https://lore.kernel.org/linux-arm-msm/20240512082858.1806694-5-quic_devipriy@quicinc.com/ > > > > Sorry, did not post the nsscc related patches since this base ICC > > patch hasn't reached closure. The nsscc patches are very similar > > to this gcc based series. Wanted to gather the issues raised in > > this and address them in nsscc so that it is in a more acceptable > > shape. > > > > Thanks > > Varada >
On Thu, Jun 13, 2024 at 09:19:13AM +0530, Varadarajan Narayanan wrote:
> On Wed, Jun 12, 2024 at 03:52:51PM +0300, Georgi Djakov wrote:
> > On 12.06.24 13:28, Varadarajan Narayanan wrote:
[..]
> > > nsscc_ipq9574 was not using icc_sync_state.
After adding that, I > > > can see the following messages printed from icc_sync_state. I > > > also added a print to confirm if 'p->set(n, n);' is called. > > > > Ok, that's good! So now when all providers are using sync_state, we > > can go back to the initial comment from Konrad. I think you should > > re-check the tests that you did, as the current results just lead to > > more questions than answers. Maybe it was just the sync-state that > > was missing, or there is some other issue. > > Georgi, > > Thanks very much for the clarifications. Will re-test the patches > and update the thread. > > -Varada Georgi, Tested the patches with both gcc and nsscc providers having 'sync_state' set to icc_sync_state. # dmesg | grep synced [ 3.029820] qcom,gcc-ipq9574 1800000.clock-controller: interconnect provider is in synced state [ 3.470106] qcom,nsscc-ipq9574 39b00000.clock-controller: interconnect provider is in synced state I can see that icc_sync_state is getting called and clocks related to paths with zero bandwidth are getting disabled. Will post the NSSCC patches to get the full picture. -Varada > > > > nodes. Please refer to this patch > > > > > > [PATCH V5 4/6] arm64: dts: qcom: ipq9574: Add PCIe PHYs and controller nodes > > > https://lore.kernel.org/linux-arm-msm/20240512082858.1806694-5-quic_devipriy@quicinc.com/ > > > > > > Sorry, did not post the nsscc related patches since this base ICC > > > patch hasn't reached closure. The nsscc patches are very similar > > > to this gcc based series. Wanted to gather the issues raised in > > > this and address them in nsscc so that it is in a more acceptable > > > shape. > > > > > > Thanks > > > Varada > > >
On 19.06.2024 9:36 AM, Varadarajan Narayanan wrote: [...] > Tested the patches with both gcc and nsscc providers having > 'sync_state' set to icc_sync_state. > > # dmesg | grep synced > [ 3.029820] qcom,gcc-ipq9574 1800000.clock-controller: interconnect provider is in synced state > [ 3.470106] qcom,nsscc-ipq9574 39b00000.clock-controller: interconnect provider is in synced state > > I can see that icc_sync_state is getting called and clocks > related to paths with zero bandwidth are getting disabled. > > Will post the NSSCC patches to get the full picture. Going back to the original question, does removing interconnects = from things like PCIe now make them not work / crash the device, which would indicate the NoC clocks were indeed gated? Konrad
On Thu, Jun 27, 2024 at 12:00:35AM +0200, Konrad Dybcio wrote:
> On 19.06.2024 9:36 AM, Varadarajan Narayanan wrote:
>
> [...]
>
> > Tested the patches with both gcc and nsscc providers having
> > 'sync_state' set to icc_sync_state.
> >
> > # dmesg | grep synced
> > [ 3.029820] qcom,gcc-ipq9574 1800000.clock-controller: interconnect provider is in synced state
> > [ 3.470106] qcom,nsscc-ipq9574 39b00000.clock-controller: interconnect provider is in synced state
> >
> > I can see that icc_sync_state is getting called and clocks
> > related to paths with zero bandwidth are getting disabled.
> >
> > Will post the NSSCC patches to get the full picture.
>
> Going back to the original question, does removing interconnects = from
> things like PCIe now make them not work / crash the device, which would
> indicate the NoC clocks were indeed gated?

Yes. With and without 'interconnects =', the following behaviour
is the same

	* Boot completes
	* PCIe devices were probed successfully and can be
	  seen in /proc/bus/pci/devices.
	* icc_sync_state is called. The system has 4 pcie nodes
	  in the DT, out of which pcie0 is not enabled.

The difference is seen in icc_sync_state

With 'interconnects ='

	* During icc_sync_state, the following 2 clocks
	  corresponding to the interconnects of 'pcie0' get
	  disabled.

	[ 2.986356] ---> clk_core_disable_lock: gcc_anoc_pcie0_1lane_m_clk
	[ 3.012486] ---> clk_core_disable_lock: gcc_snoc_pcie0_1lane_s_clk

	* System shutdown also completes without issues

Without the 'interconnects =',

	* During icc_sync_state, the following clocks
	  corresponding to the interconnects of all the 4 PCIe
	  nodes get disabled.

	[ 2.887860] ---> clk_core_disable_lock: gcc_anoc_pcie0_1lane_m_clk
	[ 2.913988] ---> clk_core_disable_lock: gcc_snoc_pcie0_1lane_s_clk
	[ 2.939857] ---> clk_core_disable_lock: gcc_anoc_pcie1_1lane_m_clk
	[ 2.965725] ---> clk_core_disable_lock: gcc_snoc_pcie1_1lane_s_clk
	[ 2.991594] ---> clk_core_disable_lock: gcc_anoc_pcie2_2lane_m_clk
	[ 3.017463] ---> clk_core_disable_lock: gcc_snoc_pcie2_2lane_s_clk
	[ 3.043328] ---> clk_core_disable_lock: gcc_anoc_pcie3_2lane_m_clk
	[ 3.069201] ---> clk_core_disable_lock: gcc_snoc_pcie3_2lane_s_clk

	* System shutdown hangs (possibly due to un-clocked
	  access of PCIe registers) in pcie_pme_interrupt_enable

	[ 10.773134] dump_stack+0x18/0x24
	[ 10.776779] pcie_pme_remove+0x2c/0x88
	[ 10.780078] pcie_port_remove_service+0x50/0x74
	[ 10.783725] device_remove+0x12c/0x148
	[ 10.788151] __device_release_driver+0x65c/0x8cc
	[ 10.791972] device_release_driver+0x2c/0x44
	[ 10.796746] bus_remove_device+0xcc/0x10c
	[ 10.800999] device_del+0x14c/0x400
	[ 10.804904] device_unregister+0x18/0x34
	[ 10.808203] remove_iter+0x2c/0x3c
	[ 10.812369] device_for_each_child+0x60/0xb4
	[ 10.815583] pcie_portdrv_shutdown+0x34/0x90
	[ 10.820009] pci_device_shutdown+0x34/0x74
	[ 10.824263] device_shutdown+0x150/0x258
	[ 10.828169] kernel_restart_prepare+0x98/0xbc
	[ 10.832249] kernel_restart+0x44/0x110
	[ 10.836502] __do_sys_reboot+0x18c/0x304

I believe this confirms the NoC clocks are getting disabled by
icc_sync_state.

Thanks
Varada
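The consumer side of this experiment is the bandwidth vote taken at probe time. A rough sketch of the qcom_pcie_icc_init() flow mentioned earlier in the thread (field names and the bandwidth value are illustrative rather than the exact upstream code):

static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
{
	struct device *dev = pcie->pci->dev;
	int ret;

	/* Resolves the "pcie-mem" entry of the interconnects property */
	pcie->icc_mem = devm_of_icc_get(dev, "pcie-mem");
	if (IS_ERR(pcie->icc_mem))
		return PTR_ERR(pcie->icc_mem);

	/*
	 * A non-zero vote keeps the gcc NoC clocks behind this path
	 * enabled across icc_sync_state(). Without the interconnects
	 * property there is no vote, the path is gated at sync_state,
	 * and the shutdown hang above follows from touching the
	 * unclocked PCIe registers.
	 */
	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
	if (ret)
		dev_err(dev, "failed to vote interconnect bandwidth: %d\n", ret);

	return ret;
}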
On 28.06.2024 10:48 AM, Varadarajan Narayanan wrote:
> On Thu, Jun 27, 2024 at 12:00:35AM +0200, Konrad Dybcio wrote:
>> Going back to the original question, does removing interconnects = from
>> things like PCIe now make them not work / crash the device, which would
>> indicate the NoC clocks were indeed gated?

[...]

> I believe this confirms the NoC clocks are getting disabled by
> icc_sync_state.

Yes, this looks good now, thanks.

Konrad
diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi index 7f2e5cbf3bbb..5b3e69379b1f 100644 --- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi +++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi @@ -8,6 +8,7 @@ #include <dt-bindings/clock/qcom,apss-ipq.h> #include <dt-bindings/clock/qcom,ipq9574-gcc.h> +#include <dt-bindings/interconnect/qcom,ipq9574.h> #include <dt-bindings/interrupt-controller/arm-gic.h> #include <dt-bindings/reset/qcom,ipq9574-gcc.h> #include <dt-bindings/thermal/thermal.h> @@ -306,6 +307,7 @@ gcc: clock-controller@1800000 { #clock-cells = <1>; #reset-cells = <1>; #power-domain-cells = <1>; + #interconnect-cells = <1>; }; tcsr_mutex: hwlock@1905000 {