| Message ID | 20181127180349.29997-1-georgi.djakov@linaro.org (mailing list archive) |
|---|---|
| Series | Introduce on-chip interconnect API |
On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Modern SoCs have multiple processors and various dedicated cores (video, GPU,
> graphics, modem). These cores talk to each other and can generate a lot of
> data flowing through the on-chip interconnects. These interconnect buses can
> form different topologies such as crossbars, point-to-point buses,
> hierarchical buses, or a network-on-chip.
>
> These buses are usually sized to handle use cases with high data throughput,
> but that is not necessary all the time and consumes a lot of power.
> Furthermore, the priority between masters can vary depending on the running
> use case, such as video playback or CPU-intensive tasks.
>
> Having an API to express the system's bandwidth and QoS requirements lets us
> adapt the interconnect configuration to match them by scaling frequencies,
> setting link priorities and tuning QoS parameters. This configuration can be
> a static, one-time operation done at boot for some platforms, or a dynamic
> set of operations that happen at run-time.
>
> This patchset introduces a new API to gather the requirements and configure
> the interconnect buses across the entire chipset to fit the current demand.
> The API is NOT for changing the performance of the endpoint devices, but
> only of the interconnect path in between them.

For what it's worth, we are ready to land this in Chrome OS. I think this
series has been very well discussed and reviewed, hasn't changed much in the
last few spins, and is in good enough shape to use as a base for future
patches. Georgi's also done a great job reaching out to other SoC vendors,
and there appears to be enough consensus that this framework will be usable
by more than just Qualcomm. There are also several drivers out on the list
trying to add patches to use this framework, with more to come, so it made
sense (to us) to get this base framework nailed down. In my experiments this
is an important piece of the overall power management story, especially on
systems that are mostly idle.

I'll continue to track changes to this series and we will ultimately
reconcile with whatever happens upstream, but I thought it was worth sending
this note to express our "thumbs up" towards this framework.

-Evan
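[Editor's note: for a concrete picture of the API being discussed above, here is a minimal, hypothetical sketch of the consumer side using the icc_* calls proposed in this series. The "cpu-mem" path name and the bandwidth numbers are invented for illustration, and exact signatures may differ between versions of the patchset.]

```c
#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/platform_device.h>

/*
 * Illustrative consumer sketch: request a path between two endpoints,
 * vote for the bandwidth the device needs, and release the path when done.
 * The "cpu-mem" name and the kBps values below are made-up examples.
 */
static int example_consumer_probe(struct platform_device *pdev)
{
	struct icc_path *path;
	int ret;

	/* Look up the path described by this device's DT "interconnects" entry. */
	path = of_icc_get(&pdev->dev, "cpu-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Request average and peak bandwidth on that path (values in kBps). */
	ret = icc_set_bw(path, 1000000, 2000000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	/* ... device does its work ... */

	/* Drop the bandwidth request and release the path. */
	icc_set_bw(path, 0, 0);
	icc_put(path);

	return 0;
}
```

The framework aggregates the requests from all consumers along each path and asks the platform-specific provider driver to apply the resulting configuration.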
On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >
> > [...]
>
> For what it's worth, we are ready to land this in Chrome OS. [...]
>
> I'll continue to track changes to this series and we will ultimately
> reconcile with whatever happens upstream, but I thought it was worth sending
> this note to express our "thumbs up" towards this framework.

Looks like a v11 will be forthcoming, so I'll wait for that one to apply
it to the tree if all looks good.

thanks,

greg k-h
Hi Greg and Evan,

On 12/6/18 16:55, Greg KH wrote:
> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>> [...]
>>
>> I'll continue to track changes to this series and we will ultimately
>> reconcile with whatever happens upstream, but I thought it was worth sending
>> this note to express our "thumbs up" towards this framework.
>
> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> it to the tree if all looks good.

Yes, it's coming. I will also include an additional fixup patch, as the
sdm845 provider driver will fail to build in linux-next due to a recent
change in the cmd_db API.

Thanks,
Georgi
On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> > [...]
> >
> > I'll continue to track changes to this series and we will ultimately
> > reconcile with whatever happens upstream, but I thought it was worth sending
> > this note to express our "thumbs up" towards this framework.
>
> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> it to the tree if all looks good.

I'm honestly not sure if it is ready yet.

New versions keep coming, which may give that impression, but we had some
discussion on it at the LPC and some serious questions were asked during it,
for instance regarding the DT binding introduced here. I'm not sure how that
particular issue has been addressed here, for example.

Thanks,
Rafael
Hi Rafael,

On 12/10/18 11:04, Rafael J. Wysocki wrote:
> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>> [...]
>>
>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>> it to the tree if all looks good.
>
> I'm honestly not sure if it is ready yet.
>
> New versions keep coming, which may give that impression, but we had some
> discussion on it at the LPC and some serious questions were asked during it,
> for instance regarding the DT binding introduced here. I'm not sure how that
> particular issue has been addressed here, for example.

There have been no changes in the bindings since v4 (other than squashing the
consumer and provider bindings into a single patch and fixing typos).

The last DT comment was on v9 [1], where Rob wanted confirmation from other
SoC vendors that this works for them too. We now have that confirmation, and
there are patches posted on the list [2].

The second thing (also discussed at LPC) was about possible cases where some
consumer drivers can't calculate how much bandwidth they actually need, and
how to address that. The proposal was to extend the OPP bindings with one
more property, but this is not part of this patchset. It is a future step
that needs more discussion on the mailing list. If a driver really needs some
bandwidth data now, it should be put into the driver and not into DT. After
we have enough consumers, we can discuss again whether it makes sense to
extract something into DT or not.

Thanks,
Georgi

[1] https://lkml.org/lkml/2018/9/25/939
[2] https://lkml.org/lkml/2018/11/28/12
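[Editor's note: as a purely hypothetical illustration of the "put the bandwidth data in the driver, not in DT" point above, a consumer could keep its per-use-case votes as driver-local constants and apply them through the same icc_set_bw() call proposed in this series. All names and numbers below are invented.]

```c
#include <linux/interconnect.h>
#include <linux/types.h>

/* Hypothetical per-use-case bandwidth votes kept in the driver (in kBps). */
enum example_use_case {
	UC_IDLE,
	UC_ACTIVE,
};

static const struct {
	u32 avg_kbps;
	u32 peak_kbps;
} example_bw_table[] = {
	[UC_IDLE]   = { .avg_kbps =   10000, .peak_kbps =   20000 },
	[UC_ACTIVE] = { .avg_kbps = 1000000, .peak_kbps = 2000000 },
};

/* Vote for the bandwidth matching the driver's current use case. */
static int example_request_bw(struct icc_path *path, enum example_use_case uc)
{
	return icc_set_bw(path, example_bw_table[uc].avg_kbps,
			  example_bw_table[uc].peak_kbps);
}
```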
On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Rafael,
>
> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> > [...]
> >
> > New versions keep coming, which may give that impression, but we had some
> > discussion on it at the LPC and some serious questions were asked during it,
> > for instance regarding the DT binding introduced here. I'm not sure how that
> > particular issue has been addressed here, for example.
>
> There have been no changes in the bindings since v4 (other than squashing the
> consumer and provider bindings into a single patch and fixing typos).
>
> The last DT comment was on v9 [1], where Rob wanted confirmation from other
> SoC vendors that this works for them too. We now have that confirmation, and
> there are patches posted on the list [2].

OK

> The second thing (also discussed at LPC) was about possible cases where some
> consumer drivers can't calculate how much bandwidth they actually need, and
> how to address that. The proposal was to extend the OPP bindings with one
> more property, but this is not part of this patchset. It is a future step
> that needs more discussion on the mailing list. If a driver really needs some
> bandwidth data now, it should be put into the driver and not into DT. After
> we have enough consumers, we can discuss again whether it makes sense to
> extract something into DT or not.

That's fine by me.

Admittedly, I have some reservations regarding the extent to which this
approach will turn out to be useful in practice, but I guess as long as there
is enough traction, the best way to find out is to try and see. :-)

From now on I will assume that this series is going to be applied by Greg.

Thanks,
Rafael
On 12/10/18 13:00, Rafael J. Wysocki wrote:
> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>> [...]
>>
>> The second thing (also discussed at LPC) was about possible cases where some
>> consumer drivers can't calculate how much bandwidth they actually need, and
>> how to address that. [...]
>
> That's fine by me.
>
> Admittedly, I have some reservations regarding the extent to which this
> approach will turn out to be useful in practice, but I guess as long as there
> is enough traction, the best way to find out is to try and see. :-)
>
> From now on I will assume that this series is going to be applied by Greg.

That was the initial idea, but the problem is that there is a recent change in
the cmd_db API (needed by the sdm845 provider driver), which is going through
arm-soc/qcom/drivers. So either Greg also pulls the qcom-drivers-for-4.21 tag
from Andy, or the whole series goes via Olof and Arnd. Maybe there are other
options. I don't have any preference and don't want to put extra burden on any
maintainers, so I am ok with whatever they prefer.

Thanks,
Georgi
On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
> On 12/10/18 13:00, Rafael J. Wysocki wrote:
> > [...]
> >
> > From now on I will assume that this series is going to be applied by Greg.
>
> That was the initial idea, but the problem is that there is a recent change in
> the cmd_db API (needed by the sdm845 provider driver), which is going through
> arm-soc/qcom/drivers. So either Greg also pulls the qcom-drivers-for-4.21 tag
> from Andy, or the whole series goes via Olof and Arnd. Maybe there are other
> options. I don't have any preference and don't want to put extra burden on any
> maintainers, so I am ok with whatever they prefer.

Let me take the time later this week to review the code, which I haven't done
in a while...

thanks,

greg k-h
Hi Greg,

On 12/11/18 08:58, Greg Kroah-Hartman wrote:
> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>> [...]
>>
>> That was the initial idea, but the problem is that there is a recent change in
>> the cmd_db API (needed by the sdm845 provider driver), which is going through
>> arm-soc/qcom/drivers. [...]
>
> Let me take the time later this week to review the code, which I haven't done
> in a while...

When you get a chance to review, please keep in mind that the latest version
is v12 (from 08 Dec). The same is also available in linux-next with no
reported issues.

Thanks,
Georgi
Hi Greg,

On 12/17/18 13:17, Georgi Djakov wrote:
> Hi Greg,
>
> On 12/11/18 08:58, Greg Kroah-Hartman wrote:
>> [...]
>>
>> Let me take the time later this week to review the code, which I haven't done
>> in a while...
>
> When you get a chance to review, please keep in mind that the latest version
> is v12 (from 08 Dec). The same is also available in linux-next with no
> reported issues.

The dependencies for this patchset have already been merged in v5.0-rc1, so I
was wondering if this can still go into -rc2? Various patches that use this
API have already been posted, and having it sooner will make dealing with
dependencies and merge paths a bit easier during the next merge window. Or I
can just rebase and resend everything targeting v5.1.

Thanks,
Georgi
On Thu, Jan 10, 2019 at 04:19:14PM +0200, Georgi Djakov wrote:
> Hi Greg,
>
> On 12/17/18 13:17, Georgi Djakov wrote:
> > [...]
> >
> > When you get a chance to review, please keep in mind that the latest version
> > is v12 (from 08 Dec). The same is also available in linux-next with no
> > reported issues.
>
> The dependencies for this patchset have already been merged in v5.0-rc1, so I
> was wondering if this can still go into -rc2? Various patches that use this
> API have already been posted, and having it sooner will make dealing with
> dependencies and merge paths a bit easier during the next merge window. Or I
> can just rebase and resend everything targeting v5.1.

We can't add new features after -rc1, sorry.

Please rebase and resend to target 5.1.

thanks,

greg k-h
On 1/10/19 18:29, Greg Kroah-Hartman wrote:
> On Thu, Jan 10, 2019 at 04:19:14PM +0200, Georgi Djakov wrote:
>> [...]
>>
>> The dependencies for this patchset have already been merged in v5.0-rc1, so I
>> was wondering if this can still go into -rc2? [...]
>
> We can't add new features after -rc1, sorry.
>
> Please rebase and resend to target 5.1.

OK, I was expecting that. Thanks for confirming!

BR,
Georgi