Message ID | 20240325181628.9407-2-quic_okukatla@quicinc.com (mailing list archive)
---|---
State | Handled Elsewhere
Series | Add support for QoS configuration
On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
> It adds QoS support for QNOC device and includes support for
> configuring priority, priority forward disable, urgency forwarding.
> This helps in priortizing the traffic originating from different
> interconnect masters at NoC(Network On Chip).
>
> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
> ---

[...]

>
> +	if (desc->config) {
> +		struct resource *res;
> +		void __iomem *base;
> +
> +		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +		if (!res)
> +			goto skip_qos_config;
> +
> +		base = devm_ioremap_resource(dev, res);

You were asked to substitute this call like 3 times already..

devm_platform_get_and_ioremap_resource

or even better, devm_platform_ioremap_resource

[...]

> @@ -70,6 +102,7 @@ struct qcom_icc_node {
>  	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
>  	struct qcom_icc_bcm *bcms[MAX_BCM_PER_NODE];
>  	size_t num_bcms;
> +	const struct qcom_icc_qosbox *qosbox;

I believe I came up with a better approach for storing this.. see [1]

Konrad

[1] https://lore.kernel.org/linux-arm-msm/20240326-topic-rpm_icc_qos_cleanup-v1-4-357e736792be@linaro.org/
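For reference, the substitution being asked for above would look roughly like this minimal sketch; it is illustrative only and not part of the posted patch, and the helper name qnoc_map_qos_regs is made up. devm_platform_ioremap_resource() combines platform_get_resource() and devm_ioremap_resource() and returns an ERR_PTR() value on failure, so the separate !res check disappears:

#include <linux/io.h>
#include <linux/platform_device.h>

/* Hypothetical helper, for illustration only: index 0 is the QoS register
 * space, as in the hunk quoted above. */
static void __iomem *qnoc_map_qos_regs(struct platform_device *pdev)
{
	/* One call replaces platform_get_resource() + devm_ioremap_resource() */
	return devm_platform_ioremap_resource(pdev, 0);
}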
On 26/03/2024 21:56, Konrad Dybcio wrote:
> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
>> It adds QoS support for QNOC device and includes support for
>> configuring priority, priority forward disable, urgency forwarding.
>> This helps in priortizing the traffic originating from different
>> interconnect masters at NoC(Network On Chip).
>>
>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
>> ---
>
> [...]
>
>>
>> +	if (desc->config) {
>> +		struct resource *res;
>> +		void __iomem *base;
>> +
>> +		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>> +		if (!res)
>> +			goto skip_qos_config;
>> +
>> +		base = devm_ioremap_resource(dev, res);
>
> You were asked to substitute this call like 3 times already..
>
> devm_platform_get_and_ioremap_resource
>
> or even better, devm_platform_ioremap_resource

Yeah, I wonder what else from my feedback got ignored :(

Best regards,
Krzysztof
On 3/27/2024 2:14 PM, Krzysztof Kozlowski wrote:
> On 26/03/2024 21:56, Konrad Dybcio wrote:
>> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
>>> It adds QoS support for QNOC device and includes support for
>>> configuring priority, priority forward disable, urgency forwarding.
>>> This helps in priortizing the traffic originating from different
>>> interconnect masters at NoC(Network On Chip).
>>>
>>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
>>> ---
>>
>> [...]
>>
>>>
>>> +	if (desc->config) {
>>> +		struct resource *res;
>>> +		void __iomem *base;
>>> +
>>> +		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>> +		if (!res)
>>> +			goto skip_qos_config;
>>> +
>>> +		base = devm_ioremap_resource(dev, res);
>>
>> You were asked to substitute this call like 3 times already..
>>
>> devm_platform_get_and_ioremap_resource
>>
>> or even better, devm_platform_ioremap_resource
>
> Yeah, I wonder what else from my feedback got ignored :(
>

There was a misinterpretation of your comment from my side. Got it now, I will address this.

> Best regards,
> Krzysztof
>

Thanks,
Odelu
On 3/27/2024 2:26 AM, Konrad Dybcio wrote:
> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
>> It adds QoS support for QNOC device and includes support for
>> configuring priority, priority forward disable, urgency forwarding.
>> This helps in priortizing the traffic originating from different
>> interconnect masters at NoC(Network On Chip).
>>
>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
>> ---
>
> [...]
>
>>
>> +	if (desc->config) {
>> +		struct resource *res;
>> +		void __iomem *base;
>> +
>> +		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>> +		if (!res)
>> +			goto skip_qos_config;
>> +
>> +		base = devm_ioremap_resource(dev, res);
>
> You were asked to substitute this call like 3 times already..
>
> devm_platform_get_and_ioremap_resource
>
> or even better, devm_platform_ioremap_resource
>
> [...]
>
>> @@ -70,6 +102,7 @@ struct qcom_icc_node {
>>  	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
>>  	struct qcom_icc_bcm *bcms[MAX_BCM_PER_NODE];
>>  	size_t num_bcms;
>> +	const struct qcom_icc_qosbox *qosbox;
>
> I believe I came up with a better approach for storing this.. see [1]
>
> Konrad
>
> [1] https://lore.kernel.org/linux-arm-msm/20240326-topic-rpm_icc_qos_cleanup-v1-4-357e736792be@linaro.org/
>

I see in this series, QoS parameters are moved into struct qcom_icc_desc.
Even though we program QoS at Provider/Bus level, it is property of the node/master connected to a Bus/NoC.
It will be easier later to know which master's QoS we are programming if we add in node data.
Readability point of view, it might be good to keep QoS parameters in node data.

Thanks,
Odelu
On 3.04.2024 10:45 AM, Odelu Kukatla wrote:
>
>
> On 3/27/2024 2:26 AM, Konrad Dybcio wrote:
>> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
>>> It adds QoS support for QNOC device and includes support for
>>> configuring priority, priority forward disable, urgency forwarding.
>>> This helps in priortizing the traffic originating from different
>>> interconnect masters at NoC(Network On Chip).
>>>
>>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
>>> ---

[...]

>>> @@ -70,6 +102,7 @@ struct qcom_icc_node {
>>>  	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
>>>  	struct qcom_icc_bcm *bcms[MAX_BCM_PER_NODE];
>>>  	size_t num_bcms;
>>> +	const struct qcom_icc_qosbox *qosbox;
>>
>> I believe I came up with a better approach for storing this.. see [1]
>>
>> Konrad
>>
>> [1] https://lore.kernel.org/linux-arm-msm/20240326-topic-rpm_icc_qos_cleanup-v1-4-357e736792be@linaro.org/
>>
>
> I see in this series, QoS parameters are moved into struct qcom_icc_desc.
> Even though we program QoS at Provider/Bus level, it is property of the node/master connected to a Bus/NoC.

I don't see how it could be the case, we're obviously telling the controller which
endpoints have priority over others, not telling nodes whether the data they
transfer can omit the queue.

> It will be easier later to know which master's QoS we are programming if we add in node data.
> Readability point of view, it might be good to keep QoS parameters in node data.

I don't agree here either, with the current approach we've made countless mistakes
when converting the downstream data (I have already submitted some fixes with more
in flight), as there's tons of jumping around the code to find what goes where.

Konrad
On Sat, Apr 13, 2024 at 09:31:47PM +0200, Konrad Dybcio wrote:
> On 3.04.2024 10:45 AM, Odelu Kukatla wrote:
> >
> >
> > On 3/27/2024 2:26 AM, Konrad Dybcio wrote:
> >> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
> >>> It adds QoS support for QNOC device and includes support for
> >>> configuring priority, priority forward disable, urgency forwarding.
> >>> This helps in priortizing the traffic originating from different
> >>> interconnect masters at NoC(Network On Chip).
> >>>
> >>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
> >>> ---
>
> [...]
>
> >>> @@ -70,6 +102,7 @@ struct qcom_icc_node {
> >>>  	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
> >>>  	struct qcom_icc_bcm *bcms[MAX_BCM_PER_NODE];
> >>>  	size_t num_bcms;
> >>> +	const struct qcom_icc_qosbox *qosbox;
> >>
> >> I believe I came up with a better approach for storing this.. see [1]
> >>
> >> Konrad
> >>
> >> [1] https://lore.kernel.org/linux-arm-msm/20240326-topic-rpm_icc_qos_cleanup-v1-4-357e736792be@linaro.org/

Note that I replied to this patch series as well. Similar comments here
for how that approach would apply to icc-rpmh.

> >>
> >
> > I see in this series, QoS parameters are moved into struct qcom_icc_desc.
> > Even though we program QoS at Provider/Bus level, it is property of the node/master connected to a Bus/NoC.
>
> I don't see how it could be the case, we're obviously telling the controller which
> endpoints have priority over others, not telling nodes whether the data they
> transfer can omit the queue.

The QoS settings tune the priority of data coming out of a specific port
on the NOC. The nodes are 1:1 with the ports. Yes, this does tell the
NOC which ports have priority over others. But that's done by
configuring each port's priority in their own port-specific QoS
registers.

>
> > It will be easier later to know which master's QoS we are programming if we add in node data.
> > Readability point of view, it might be good to keep QoS parameters in node data.
>
> I don't agree here either, with the current approach we've made countless mistakes
> when converting the downstream data (I have already submitted some fixes with more
> in flight), as there's tons of jumping around the code to find what goes where.

I don't follow why keeping the port's own QoS settings in that port's
struct results in more jumping around. It should do the opposite, in
fact. If someone wants to know the QoS settings applied to the qhm_qup0
port, then they should be able to look directly in the qhm_qup0 struct.
Otherwise, if it's placed elsewhere then they'd have to jump elsewhere
to find what that logical qhm_qup0-related data is set to.

If it *was* placed elsewhere, then we'd still need some logical way to
map between that separate location and the node it's associated with.
Which is a problem with your patch for cleaning up the icc-rpm QoS. In
its current form, it's impossible to identify which QoS settings apply
to which logical node (without detailed knowledge of the NOC register
layout).

Keeping this data with the node struct reduces the need for extra layers
of mapping between the QoS settings and the node struct. It keeps all
the port-related information all together in one place.

I did like your earlier suggestion of using a compound literal to
initialize the .qosbox pointers, such that we don't need a separate
top-level variable defined for them. They're only ever referenced by a
single node, so there's no need for them to be separate variables.

But I don't see the logic in totally separating the QoS data from the
port it's associated with.

>
> Konrad
Hi Konrad,

On 5/8/2024 8:07 AM, Mike Tipton wrote:
> On Sat, Apr 13, 2024 at 09:31:47PM +0200, Konrad Dybcio wrote:
>> On 3.04.2024 10:45 AM, Odelu Kukatla wrote:
>>> On 3/27/2024 2:26 AM, Konrad Dybcio wrote:
>>>> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
>>>>> It adds QoS support for QNOC device and includes support for
>>>>> configuring priority, priority forward disable, urgency forwarding.
>>>>> This helps in priortizing the traffic originating from different
>>>>> interconnect masters at NoC(Network On Chip).
>>>>>
>>>>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
>>>>> ---

[...]

> I did like your earlier suggestion of using a compound literal to
> initialize the .qosbox pointers, such that we don't need a separate
> top-level variable defined for them. They're only ever referenced by a
> single node, so there's no need for them to be separate variables.
>
> But I don't see the logic in totally separating the QoS data from the
> port it's associated with.
>

I will update the patch as per your suggestion of keeping .qosbox initialization inside *qcom_icc_node* structure.
I will post next version with this update and addressing other comments from v4.

Thanks,
Odelu

>> Konrad
On 5/28/24 16:52, Odelu Kukatla wrote:
> Hi Konrad,
>
> On 5/8/2024 8:07 AM, Mike Tipton wrote:
>> On Sat, Apr 13, 2024 at 09:31:47PM +0200, Konrad Dybcio wrote:
>>> On 3.04.2024 10:45 AM, Odelu Kukatla wrote:
>>>> On 3/27/2024 2:26 AM, Konrad Dybcio wrote:
>>>>> On 25.03.2024 7:16 PM, Odelu Kukatla wrote:
>>>>>> It adds QoS support for QNOC device and includes support for
>>>>>> configuring priority, priority forward disable, urgency forwarding.
>>>>>> This helps in priortizing the traffic originating from different
>>>>>> interconnect masters at NoC(Network On Chip).
>>>>>>
>>>>>> Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
>>>>>> ---

[...]

>> I did like your earlier suggestion of using a compound literal to
>> initialize the .qosbox pointers, such that we don't need a separate
>> top-level variable defined for them. They're only ever referenced by a
>> single node, so there's no need for them to be separate variables.
>>
>> But I don't see the logic in totally separating the QoS data from the
>> port it's associated with.
>>
> I will update the patch as per your suggestion of keeping .qosbox initialization inside *qcom_icc_node* structure.
> I will post next version with this update and addressing other comments from v4.

Sorry for the late answer, but ack, let's go forward with this

Konrad
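For reference, the layout agreed on above, with the QoS data initialized through a compound literal inside the node definition, could look roughly like the sketch below. The node name, field values and port offset are invented for illustration, and it assumes port_offsets ends up as a small fixed-size array, since a flexible array member cannot be initialized inside a compound literal:

static struct qcom_icc_node qhm_example_master = {
	.name = "qhm_example_master",
	.channels = 1,
	.buswidth = 4,
	/* QoS data stays with the node it describes, no separate variable */
	.qosbox = &(const struct qcom_icc_qosbox) {
		.prio = 2,
		.urg_fwd = false,
		.prio_fwd_disable = true,
		.num_ports = 1,
		.port_offsets = { 0xc000 },
	},
};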
diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
index c1aa265c1f4e..bc85701ee027 100644
--- a/drivers/interconnect/qcom/icc-rpmh.c
+++ b/drivers/interconnect/qcom/icc-rpmh.c
@@ -1,8 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
  */
 
+#include <linux/bitfield.h>
+#include <linux/clk.h>
 #include <linux/interconnect.h>
 #include <linux/interconnect-provider.h>
 #include <linux/module.h>
@@ -14,6 +17,38 @@
 #include "icc-common.h"
 #include "icc-rpmh.h"
 
+/* QNOC QoS */
+#define QOSGEN_MAINCTL_LO(p, qp)	(0x8 + (p->port_offsets[qp]))
+#define QOS_SLV_URG_MSG_EN_MASK		GENMASK(3, 3)
+#define QOS_DFLT_PRIO_MASK		GENMASK(6, 4)
+#define QOS_DISABLE_MASK		GENMASK(24, 24)
+
+/**
+ * qcom_icc_set_qos - initialize static QoS configurations
+ * @qp: qcom icc provider to which @node belongs
+ * @node: qcom icc node to operate on
+ */
+static void qcom_icc_set_qos(struct qcom_icc_provider *qp,
+			     struct qcom_icc_node *node)
+{
+	const struct qcom_icc_qosbox *qos = node->qosbox;
+	int port;
+
+	for (port = 0; port < qos->num_ports; port++) {
+		regmap_update_bits(qp->regmap, QOSGEN_MAINCTL_LO(qos, port),
+				   QOS_DISABLE_MASK,
+				   FIELD_PREP(QOS_DISABLE_MASK, qos->prio_fwd_disable));
+
+		regmap_update_bits(qp->regmap, QOSGEN_MAINCTL_LO(qos, port),
+				   QOS_DFLT_PRIO_MASK,
+				   FIELD_PREP(QOS_DFLT_PRIO_MASK, qos->prio));
+
+		regmap_update_bits(qp->regmap, QOSGEN_MAINCTL_LO(qos, port),
+				   QOS_SLV_URG_MSG_EN_MASK,
+				   FIELD_PREP(QOS_SLV_URG_MSG_EN_MASK, qos->urg_fwd));
+	}
+}
+
 /**
  * qcom_icc_pre_aggregate - cleans up stale values from prior icc_set
  * @node: icc node to operate on
@@ -159,6 +194,36 @@ int qcom_icc_bcm_init(struct qcom_icc_bcm *bcm, struct device *dev)
 }
 EXPORT_SYMBOL_GPL(qcom_icc_bcm_init);
 
+/**
+ * qcom_icc_rpmh_configure_qos - configure QoS parameters
+ * @qp: qcom icc provider associated with QoS endpoint nodes
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+static int qcom_icc_rpmh_configure_qos(struct qcom_icc_provider *qp)
+{
+	struct qcom_icc_node *qnode;
+	size_t i;
+	int ret;
+
+	ret = clk_bulk_prepare_enable(qp->num_clks, qp->clks);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->num_nodes; i++) {
+		qnode = qp->nodes[i];
+		if (!qnode)
+			continue;
+
+		if (qnode->qosbox)
+			qcom_icc_set_qos(qp, qnode);
+	}
+
+	clk_bulk_disable_unprepare(qp->num_clks, qp->clks);
+
+	return ret;
+}
+
 int qcom_icc_rpmh_probe(struct platform_device *pdev)
 {
 	const struct qcom_icc_desc *desc;
@@ -199,7 +264,9 @@ int qcom_icc_rpmh_probe(struct platform_device *pdev)
 
 	qp->dev = dev;
 	qp->bcms = desc->bcms;
+	qp->nodes = desc->nodes;
 	qp->num_bcms = desc->num_bcms;
+	qp->num_nodes = desc->num_nodes;
 
 	qp->voter = of_bcm_voter_get(qp->dev, NULL);
 	if (IS_ERR(qp->voter))
@@ -229,6 +296,38 @@ int qcom_icc_rpmh_probe(struct platform_device *pdev)
 		data->nodes[i] = node;
 	}
 
+	if (desc->config) {
+		struct resource *res;
+		void __iomem *base;
+
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+		if (!res)
+			goto skip_qos_config;
+
+		base = devm_ioremap_resource(dev, res);
+		if (IS_ERR(base)) {
+			dev_info(dev, "Skipping QoS, ioremap failed: %ld\n", PTR_ERR(base));
+			goto skip_qos_config;
+		};
+
+		qp->regmap = devm_regmap_init_mmio(dev, base, desc->config);
+		if (IS_ERR(qp->regmap)) {
+			dev_info(dev, "Skipping QoS, regmap failed; %ld\n", PTR_ERR(qp->regmap));
+			goto skip_qos_config;
+		}
+
+		qp->num_clks = devm_clk_bulk_get_all(qp->dev, &qp->clks);
+		if (qp->num_clks < 0) {
+			dev_info(dev, "Skipping QoS, failed to get clk: %d\n", qp->num_clks);
+			goto skip_qos_config;
+		}
+
+		ret = qcom_icc_rpmh_configure_qos(qp);
+		if (ret)
+			dev_info(dev, "Failed to program QoS: %d\n", ret);
+	}
+
+skip_qos_config:
 	ret = icc_provider_register(provider);
 	if (ret)
 		goto err_remove_nodes;
diff --git a/drivers/interconnect/qcom/icc-rpmh.h b/drivers/interconnect/qcom/icc-rpmh.h
index 2de29460e808..4fdc75c84c95 100644
--- a/drivers/interconnect/qcom/icc-rpmh.h
+++ b/drivers/interconnect/qcom/icc-rpmh.h
@@ -1,12 +1,14 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
  */
 
 #ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_RPMH_H__
 #define __DRIVERS_INTERCONNECT_QCOM_ICC_RPMH_H__
 
 #include <dt-bindings/interconnect/qcom,icc.h>
+#include <linux/regmap.h>
 
 #define to_qcom_provider(_provider) \
 	container_of(_provider, struct qcom_icc_provider, provider)
@@ -18,6 +20,11 @@
  * @bcms: list of bcms that maps to the provider
  * @num_bcms: number of @bcms
  * @voter: bcm voter targeted by this provider
+ * @nodes: list of icc nodes that maps to the provider
+ * @num_nodes: number of @nodes
+ * @regmap: used for QoS, register access
+ * @clks : clks required for register access
+ * @num_clks: number of @clks
  */
 struct qcom_icc_provider {
 	struct icc_provider provider;
@@ -25,6 +32,11 @@ struct qcom_icc_provider {
 	struct qcom_icc_bcm * const *bcms;
 	size_t num_bcms;
 	struct bcm_voter *voter;
+	struct qcom_icc_node * const *nodes;
+	size_t num_nodes;
+	struct regmap *regmap;
+	struct clk_bulk_data *clks;
+	int num_clks;
 };
 
 /**
@@ -41,6 +53,25 @@ struct bcm_db {
 	u8 reserved;
 };
 
+/**
+ * struct qcom_icc_qosbox - Qualcomm specific QoS config
+ * @prio: priority value assigned to requests on the node
+ * @urg_fwd: whether to forward the urgency promotion issued by master
+ *	     (endpoint), or discard
+ * @prio_fwd_disable: whether to forward the priority driven by master, or
+ *		      override by @prio
+ * @num_ports: number of @ports
+ * @port_offsets: qos register offsets
+ */
+
+struct qcom_icc_qosbox {
+	const u32 prio;
+	const bool urg_fwd;
+	const bool prio_fwd_disable;
+	const u32 num_ports;
+	const u32 port_offsets[] __counted_by(num_ports);
+};
+
 #define MAX_LINKS		128
 #define MAX_BCMS		64
 #define MAX_BCM_PER_NODE	3
@@ -58,6 +89,7 @@ struct bcm_db {
  * @max_peak: current max aggregate value of all peak bw requests
  * @bcms: list of bcms associated with this logical node
  * @num_bcms: num of @bcms
+ * @qosbox: qos config data associated with node
  */
 struct qcom_icc_node {
 	const char *name;
@@ -70,6 +102,7 @@ struct qcom_icc_node {
 	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
 	struct qcom_icc_bcm *bcms[MAX_BCM_PER_NODE];
 	size_t num_bcms;
+	const struct qcom_icc_qosbox *qosbox;
 };
 
 /**
@@ -114,6 +147,7 @@ struct qcom_icc_fabric {
 };
 
 struct qcom_icc_desc {
+	const struct regmap_config *config;
 	struct qcom_icc_node * const *nodes;
 	size_t num_nodes;
 	struct qcom_icc_bcm * const *bcms;
It adds QoS support for the QNOC device and includes support for
configuring priority, priority forward disable, and urgency forwarding.
This helps in prioritizing the traffic originating from different
interconnect masters at the NoC (Network on Chip).

Signed-off-by: Odelu Kukatla <quic_okukatla@quicinc.com>
---
 drivers/interconnect/qcom/icc-rpmh.c | 99 ++++++++++++++++++++++++++++
 drivers/interconnect/qcom/icc-rpmh.h | 34 ++++++++++
 2 files changed, 133 insertions(+)
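For completeness, a rough sketch of how a SoC-specific provider could opt in to the QoS path added above; only a non-NULL desc->config enables the programming in probe. The regmap bounds, array contents and names below are placeholders for illustration, not values taken from the series:

static const struct regmap_config example_noc_regmap_config = {
	.reg_bits	= 32,
	.reg_stride	= 4,
	.val_bits	= 32,
	.max_register	= 0x14400,	/* placeholder size of the NoC register space */
	.fast_io	= true,
};

static struct qcom_icc_node * const example_noc_nodes[] = {
	[0] = &qhm_example_master,	/* node sketched earlier on this page */
};

static const struct qcom_icc_desc example_noc = {
	.config = &example_noc_regmap_config,
	.nodes = example_noc_nodes,
	.num_nodes = ARRAY_SIZE(example_noc_nodes),
};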