From patchwork Fri Aug 5 07:49:32 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937029
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org, mturquette@baylibre.com,
 sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
 linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 1/4] clk: Aggregate power operation in clock controller
Date: Fri, 5 Aug 2022 15:49:32 +0800
Message-Id: <20220805074935.1158098-2-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Add power domain operations that follow the clk frequency. Some hardware
supports dynamic voltage and frequency scaling in the clock controller
itself, besides in devices; this is not related to any clock consumer
device. Power domain operations, however, are performed per device in the
driver model. If performance states are voted per clk rather than per
device, they need to be aggregated in the clock framework before the
request is sent to the power framework.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
 drivers/clk/clk.c            | 212 ++++++++++++++++++++++++++++++++++-
 include/linux/clk-provider.h |  62 ++++++++++
 2 files changed, 272 insertions(+), 2 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index f00d4c1158d7..0ab79b9ebefd 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -18,6 +18,7 @@
 #include <linux/of.h>
 #include <linux/device.h>
 #include <linux/init.h>
+#include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
 #include <linux/sched.h>
 #include <linux/clkdev.h>
@@ -89,6 +90,7 @@ struct clk_core {
 	struct hlist_node	debug_node;
 #endif
 	struct kref		ref;
+	struct clk_power_data	*power;
 };
 
 #define CREATE_TRACE_POINTS
@@ -812,6 +814,161 @@ int clk_rate_exclusive_get(struct clk *clk)
 }
 EXPORT_SYMBOL_GPL(clk_rate_exclusive_get);
 
+static void clk_unvote_genpd(struct clk_core *core)
+{
+	struct clkpstate_node *ps_node = NULL;
+	struct clk_power_data *power = core->power;
+	unsigned int pstate = 0;
+
+	mutex_lock(power->genpd_lock);
+	/* Do not free the node. The number of performance states is limited,
+	 * and we will revisit it later.
+	 */
+	list_del_init(&power->genpd_list);
+	power->genpd_pstate = 0;
+
+	/* Find and set the highest pstate */
+	list_for_each_entry_reverse(ps_node, power->genpd_head, genpd_list) {
+		if (!list_empty(&ps_node->genpd_pstate_head)) {
+			pstate = ps_node->pstate;
+			break;
+		}
+	}
+
+	pr_debug("%s: clk %s unvote genpd set genpd perf state %d\n",
+		 __func__, core->name, pstate);
+	dev_pm_genpd_set_performance_state(*power->genpd_dev, pstate);
+
+	mutex_unlock(power->genpd_lock);
+}
+
+static int clk_vote_genpd(struct clk_core *core, unsigned long rate)
+{
+	struct clkpstate_node *new_ps_node, *ps_node, *pre_ps_node = NULL;
+	unsigned int cnt, pstate = 0;
+	struct list_head *insert_pos;
+	int ret = 0;
+	struct clk_power_data *power = core->power;
+	const struct genpdopp_table *tbl = power->genpdopp_table;
+
+	/* Find the opp pstate for the required rate */
+	for (cnt = 0; cnt < power->genpdopp_num; cnt++, tbl++) {
+		if (rate <= tbl->ceiling_rate) {
+			pstate = tbl->pstate;
+			break;
+		}
+	}
+
+	if (!pstate && cnt == power->genpdopp_num) {
+		pr_err("%s: clk %s rate %lu not supported by genpd\n",
+		       __func__, core->name, rate);
+		return -EINVAL;
+	}
+
+	pr_debug("%s: clk %s votes perf state %d\n",
+		 __func__, core->name, pstate);
+	mutex_lock(power->genpd_lock);
+	if (list_empty(power->genpd_head)) {
+		insert_pos = power->genpd_head;
+		goto new_pstate_node;
+	}
+
+	/* If this clk power is already in some perf state */
+	if (!list_empty(&power->genpd_list)) {
+		if (pstate == power->genpd_pstate) {
+			mutex_unlock(power->genpd_lock);
+			return 0;
+		}
+		list_del_init(&power->genpd_list);
+	}
+
+	/* Search for the genpd pstate node that matches the pstate requirement */
+	list_for_each_entry(ps_node, power->genpd_head, genpd_list) {
+		if (ps_node->pstate == pstate) {
+			new_ps_node = ps_node;
+			list_add(&power->genpd_list,
+				 &new_ps_node->genpd_pstate_head);
+			goto linked_into_pstate;
+		}
+		if (ps_node->pstate > pstate) {
+			if (pre_ps_node != NULL)
+				insert_pos = &pre_ps_node->genpd_list;
+			else
+				insert_pos = power->genpd_head;
+			goto new_pstate_node;
+		}
+		pre_ps_node = ps_node;
+	}
+	/* Add a new genpd pstate node at the end */
+	insert_pos = &pre_ps_node->genpd_list;
+
+new_pstate_node:
+	new_ps_node = kmalloc(sizeof(struct clkpstate_node), GFP_KERNEL);
+	if (new_ps_node == NULL) {
+		mutex_unlock(power->genpd_lock);
+		return -ENOMEM;
+	}
+
+	/* Link this pstate node into the genpd pstate list */
+	INIT_LIST_HEAD(&new_ps_node->genpd_list);
+	INIT_LIST_HEAD(&new_ps_node->genpd_pstate_head);
+	new_ps_node->pstate = pstate;
+	list_add(&new_ps_node->genpd_list, insert_pos);
+	list_add(&power->genpd_list, &new_ps_node->genpd_pstate_head);
+
+	/* Find and set the highest pstate */
+	list_for_each_entry_reverse(ps_node, power->genpd_head, genpd_list) {
+		if (!list_empty(&ps_node->genpd_pstate_head)) {
+			pr_debug("%s: genpd set perf state %d for clk %s\n",
+				 __func__, pstate, core->name);
+			ret = dev_pm_genpd_set_performance_state(
+					*power->genpd_dev, ps_node->pstate);
+			if (ret) {
+				/* No need to free new_ps_node as it's empty */
+				mutex_unlock(power->genpd_lock);
+				pr_err("%s: failed to set genpd opp for clk %s\n",
+				       __func__, core->name);
+				return ret;
+			}
+			break;
+		}
+	}
+
+linked_into_pstate:
+	power->genpd_pstate = pstate;
+	mutex_unlock(power->genpd_lock);
+	return ret;
+}
+
+static void clk_unvote_power(struct clk_core *core)
+{
+	struct clk_power_data *power = core->power;
+
+	if (!core->power)
+		return;
+
+	/* Regulator support may be added here in the future */
+
+	if (power->genpd_dev)
+		clk_unvote_genpd(core);
+}
+
+static int clk_vote_power(struct clk_core *core, unsigned long rate)
+{
+	struct clk_power_data *power = core->power;
+	int ret = 0;
+
+	if (!core->power)
+		return 0;
+
+	/* Regulator support may be added here in the future */
+
+	if (power->genpd_dev)
+		ret = clk_vote_genpd(core, rate);
+
+	return ret;
+}
+
 static void clk_core_unprepare(struct clk_core *core)
 {
 	lockdep_assert_held(&prepare_lock);
@@ -840,6 +997,8 @@ static void clk_core_unprepare(struct clk_core *core)
 	if (core->ops->unprepare)
 		core->ops->unprepare(core->hw);
 
+	clk_unvote_power(core);
+
 	clk_pm_runtime_put(core);
 	trace_clk_unprepare_complete(core);
@@ -887,6 +1046,10 @@ static int clk_core_prepare(struct clk_core *core)
 		if (ret)
 			return ret;
 
+		ret = clk_vote_power(core, core->rate);
+		if (ret)
+			return ret;
+
 		ret = clk_core_prepare(core->parent);
 		if (ret)
 			goto runtime_put;
@@ -2189,7 +2352,7 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 {
 	struct clk_core *top, *fail_clk;
 	unsigned long rate;
-	int ret = 0;
+	int ret = 0, post_set_power = 0;
 
 	if (!core)
 		return 0;
@@ -2223,10 +2386,21 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 		goto err;
 	}
 
+	if (rate > core->rate) {
+		ret = clk_vote_power(core, rate);
+		if (ret)
+			goto err;
+	} else {
+		post_set_power = 1;
+	}
+
 	/* change the rates */
 	clk_change_rate(top);
 
 	core->req_rate = req_rate;
+
+	if (post_set_power)
+		ret = clk_vote_power(core, rate);
 err:
 	clk_pm_runtime_put(core);
@@ -3905,7 +4079,8 @@ static void clk_core_free_parent_map(struct clk_core *core)
 static struct clk *
 __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
 {
-	int ret;
+	int ret, cnt;
+	unsigned long rate;
 	struct clk_core *core;
 	const struct clk_init_data *init = hw->init;
@@ -3946,6 +4121,38 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
 	core->min_rate = 0;
 	core->max_rate = ULONG_MAX;
 
+	if (init->power && init->power_magic == CLK_POWER_MAGIC) {
+		struct clk_power_data *power = init->power;
+		const struct genpdopp_table *ptable = power->genpdopp_table;
+
+		power->core = core;
+		if (power->genpd_dev) {
+			if (!power->genpd_lock || !power->genpd_head ||
+			    !power->genpdopp_table || !power->genpdopp_num) {
+				pr_err("%s: invalid power domain for clk %s\n",
+				       __func__, core->name);
+				goto skip_clk_power;
+			}
+		}
+		for (cnt = 0; cnt < power->genpdopp_num - 1; cnt++) {
+			rate = ptable->ceiling_rate;
+			ptable++;
+			if (rate >= ptable->ceiling_rate) {
+				pr_err("%s: invalid ascending rate for clk %s\n",
+				       __func__, core->name);
+				ret = -EINVAL;
+				goto skip_clk_power;
+			}
+		}
+		core->power = kmalloc(sizeof(*power), GFP_KERNEL);
+		if (!core->power)
+			goto skip_clk_power;
+
+		memcpy(core->power, power, sizeof(*power));
+		INIT_LIST_HEAD(&core->power->genpd_list);
+	}
+
+skip_clk_power:
 	ret = clk_core_populate_parent_map(core, init);
 	if (ret)
 		goto fail_parents;
@@ -3978,6 +4185,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
 fail_create_clk:
 	clk_core_free_parent_map(core);
 fail_parents:
+	kfree(core->power);
 fail_ops:
 	kfree_const(core->name);
 fail_name:
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index c10dc4c659e2..bcf62fb0a6a1 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -268,12 +268,71 @@ struct clk_parent_data {
 	int	index;
 };
 
+/**
+ * struct genpdopp_table - opp pstate and clk rate mapping table
+ *
+ * @pstate:		power domain performance state
+ * @ceiling_rate:	the max clock rate this pstate supports
+ */
+struct genpdopp_table {
+	unsigned int pstate;
+	unsigned long ceiling_rate;
+};
+
+/**
+ * struct clkpstate_node - opp pstate node that holds lists of clks depending
+ *			   on a specific performance state. The nodes are kept
+ *			   ordered from low to high pstate, led by genpd_head.
+ *
+ * @genpd_list:		list node linked into a genpd list
+ * @genpd_pstate_head:	list head for the clks that depend on this pstate
+ * @pstate:		power domain performance state
+ */
+struct clkpstate_node {
+	struct list_head genpd_list;
+	struct list_head genpd_pstate_head;
+	unsigned int pstate;
+};
+
+/**
+ * struct clk_power_data - holds power data that's common to all clocks and is
+ * shared between the clock provider and the common clock framework.
+ *
+ * @genpd_list:	genpd consumer node of this clk; linked into one of the genpd
+ *		pstate consumer lists led by genpd_head when the clk rate is
+ *		set to a genpd opp pstate
+ * @genpd_head:	list head holding the genpd performance state nodes, whose
+ *		lists hold the genpd consumers at the different opp pstates
+ * @genpd_lock:	mutex that protects genpd list operations
+ * @genpd_dev:	device bound to the power domain the clk is in. It is the
+ *		clock controller device by default, or a virtual device if
+ *		there are multiple power domains for the controller device
+ * @genpdopp_table: genpd opp pstate and clk rate mapping table. The rates
+ *		must be listed from lowest to highest, strictly ascending
+ * @genpdopp_num: number of entries in the genpd opp pstate table
+ * @genpd_pstate: current genpd opp pstate this clk requires
+ * @core:	clk core this power data belongs to
+ */
+struct clk_power_data {
+	struct list_head genpd_list;
+	struct list_head *genpd_head;
+	struct mutex *genpd_lock;
+	struct device **genpd_dev;
+	const struct genpdopp_table *genpdopp_table;
+	unsigned int genpdopp_num;
+	unsigned int genpd_pstate;
+	struct clk_core *core;
+};
+
 /**
  * struct clk_init_data - holds init data that's common to all clocks and is
  * shared between the clock provider and the common clock framework.
  *
  * @name: clock name
  * @ops: operations this clock supports
+ * @power: power data that this clock operates on
+ * @power_magic: magic number indicating that the power data is valid; a
+ *		 sanity check against non-NULL but invalid power data
  * @parent_names: array of string names for all possible parents
  * @parent_data: array of parent data for all possible parents (when some
  *               parents are external to the clk controller)
@@ -282,9 +341,12 @@ struct clk_parent_data {
  * @num_parents: number of possible parents
  * @flags: framework-level hints and quirks
  */
+#define CLK_POWER_MAGIC 0x5c5c6969
 struct clk_init_data {
 	const char		*name;
+	int			power_magic;
 	const struct clk_ops	*ops;
+	struct clk_power_data	*power;
 	/* Only one of the following three should be assigned */
 	const char		* const *parent_names;
 	const struct clk_parent_data	*parent_data;
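
For illustration, this is how a clock provider might wire up the proposed
interface. A minimal sketch only: the ops, table values, and every
"example" name are hypothetical, not part of this patch.

#include <linux/clk-provider.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>

static struct device *example_genpd_dev;	/* bound in the provider's probe */
static DEFINE_MUTEX(example_genpd_lock);
static LIST_HEAD(example_genpd_head);

/* Ceiling rates must be strictly ascending; __clk_register() checks this */
static const struct genpdopp_table example_genpdopp[] = {
	{ .pstate = 1, .ceiling_rate = 100000000 },
	{ .pstate = 2, .ceiling_rate = 200000000 },
};

static struct clk_power_data example_power = {
	.genpd_head	= &example_genpd_head,
	.genpd_lock	= &example_genpd_lock,
	.genpd_dev	= &example_genpd_dev,
	.genpdopp_table	= example_genpdopp,
	.genpdopp_num	= ARRAY_SIZE(example_genpdopp),
};

static const struct clk_ops example_clk_ops;	/* provider-specific ops */

static const struct clk_init_data example_init = {
	.name		= "example_clk_src",
	.ops		= &example_clk_ops,
	.power		= &example_power,
	.power_magic	= CLK_POWER_MAGIC,	/* marks .power as valid */
};

Patch 4 below wraps this same pattern in a POWER_PDOPP() helper macro for
the MSM8916 GCC driver.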
From patchwork Fri Aug 5 07:49:33 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937030
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org, mturquette@baylibre.com,
 sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
 linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 2/4] soc: qcom: rpmpd: Add corner power-domains states
Date: Fri, 5 Aug 2022 15:49:33 +0800
Message-Id: <20220805074935.1158098-3-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Some SoCs, such as MSM8916 and MSM8939, use corners instead of levels in
the RPM regulator. Add these power domain state values so that devices
can vote for them.

Note that the values are shifted by 1 when converted from the regulator
usage in Qualcomm's Linux 3.18 to the power domain usage here, because
corner support was only hacked into the 3.18 regulator framework. For
example, RPM_REGULATOR_CORNER_RETENTION is 2 in 3.18 while
RPM_SMD_CORNER_RETENTION is 1 here.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
 include/dt-bindings/power/qcom-rpmpd.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/dt-bindings/power/qcom-rpmpd.h b/include/dt-bindings/power/qcom-rpmpd.h
index 6cce5b7aa940..f778dbbf083d 100644
--- a/include/dt-bindings/power/qcom-rpmpd.h
+++ b/include/dt-bindings/power/qcom-rpmpd.h
@@ -297,4 +297,12 @@
 #define RPM_SMD_LEVEL_TURBO_HIGH	448
 #define RPM_SMD_LEVEL_BINNING		512
 
+/* RPM SMD power domain performance levels in the regulator corner method */
+#define RPM_SMD_CORNER_RETENTION	1
+#define RPM_SMD_CORNER_SVS_KRAIT	2
+#define RPM_SMD_CORNER_SVS_SOC		3
+#define RPM_SMD_CORNER_NORMAL		4
+#define RPM_SMD_CORNER_TURBO		5
+#define RPM_SMD_CORNER_SUPER_TURBO	6
+
 #endif
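
For context, dev_pm_genpd_set_performance_state() is the existing genpd
interface these corner values ultimately feed into. A hedged sketch of a
plain per-device vote, assuming "dev" is already attached to the VDDCX
domain; without the clk-level aggregation from patch 1, genpd only
aggregates such votes per device.

#include <linux/pm_domain.h>
#include <dt-bindings/power/qcom-rpmpd.h>

static int example_vote_normal(struct device *dev)
{
	/* Request at least the NORMAL corner (4) for this device */
	return dev_pm_genpd_set_performance_state(dev, RPM_SMD_CORNER_NORMAL);
}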
From patchwork Fri Aug 5 07:49:34 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937031
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org, mturquette@baylibre.com,
 sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
 linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 3/4] arm64: dts: qcom: add power domain for clk controller
Date: Fri, 5 Aug 2022 15:49:34 +0800
Message-Id: <20220805074935.1158098-4-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Add the RPM power domain to the clock controller node so that the clock
controller can use it for dynamic voltage and frequency scaling. Also
replace the raw RPM power domain opp-level values with the new
definitions.
Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 05472510e29d..fdb32b3a23e8 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -312,22 +312,22 @@ rpmpd_opp_table: opp-table {
 				compatible = "operating-points-v2";
 
 				rpmpd_opp_ret: opp1 {
-					opp-level = <1>;
+					opp-level = <RPM_SMD_CORNER_RETENTION>;
 				};
 				rpmpd_opp_svs_krait: opp2 {
-					opp-level = <2>;
+					opp-level = <RPM_SMD_CORNER_SVS_KRAIT>;
 				};
 				rpmpd_opp_svs_soc: opp3 {
-					opp-level = <3>;
+					opp-level = <RPM_SMD_CORNER_SVS_SOC>;
 				};
 				rpmpd_opp_nom: opp4 {
-					opp-level = <4>;
+					opp-level = <RPM_SMD_CORNER_NORMAL>;
 				};
 				rpmpd_opp_turbo: opp5 {
-					opp-level = <5>;
+					opp-level = <RPM_SMD_CORNER_TURBO>;
 				};
 				rpmpd_opp_super_turbo: opp6 {
-					opp-level = <6>;
+					opp-level = <RPM_SMD_CORNER_SUPER_TURBO>;
 				};
 			};
 		};
@@ -933,6 +933,8 @@ gcc: clock-controller@1800000 {
 			#clock-cells = <1>;
 			#reset-cells = <1>;
 			#power-domain-cells = <1>;
+			power-domains = <&rpmpd MSM8916_VDDCX>;
+			power-domain-names = "vdd";
 			reg = <0x01800000 0x80000>;
 		};
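
With a single "power-domains" entry as above, the platform core attaches
the domain to the controller device itself, so the driver can hand its own
struct device to the clk framework (as patch 4 does with genpd_dev = dev).
A sketch of the multiple-domain case hinted at in patch 1's clk_power_data
documentation, where a virtual device per domain would be attached by
name; the helper example_attach_vdd is hypothetical.

#include <linux/pm_domain.h>

static struct device *example_attach_vdd(struct device *dev)
{
	/* Returns a virtual device bound to the "vdd" power domain */
	return dev_pm_domain_attach_by_name(dev, "vdd");
}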
From patchwork Fri Aug 5 07:49:35 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937032
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org, mturquette@baylibre.com,
 sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
 linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
 devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 4/4] clk: qcom: gcc-msm8916: Add power domain data
Date: Fri, 5 Aug 2022 15:49:35 +0800
Message-Id: <20220805074935.1158098-5-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Add power domain performance state and ceiling frequency mapping tables
so that the optimal performance point can be voted for by the clks within
the clock controller. This is not related to the clk consumer devices.

Run this command to check the genpd perf state:
cat /sys/kernel/debug/pm_genpd/vddcx/perf_state

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
 drivers/clk/qcom/gcc-msm8916.c | 182 +++++++++++++++++++++++++++++++++
 1 file changed, 182 insertions(+)

diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
index 17e4a5a2a9fd..b42e39688a28 100644
--- a/drivers/clk/qcom/gcc-msm8916.c
+++ b/drivers/clk/qcom/gcc-msm8916.c
@@ -13,8 +13,10 @@
 #include <linux/clk-provider.h>
 #include <linux/regmap.h>
 #include <linux/reset-controller.h>
+#include <linux/pm_domain.h>
 
 #include <dt-bindings/clock/qcom,gcc-msm8916.h>
+#include <dt-bindings/power/qcom-rpmpd.h>
 
 #include "common.h"
@@ -25,6 +27,20 @@
 #include "reset.h"
 #include "gdsc.h"
 
+static struct device *genpd_dev;
+static struct mutex genpd_lock;
+static struct list_head genpd_head;
+
+#define POWER_PDOPP(table)					\
+	.power_magic = CLK_POWER_MAGIC,				\
+	.power = &(struct clk_power_data) {			\
+		.genpd_head = &genpd_head,			\
+		.genpd_lock = &genpd_lock,			\
+		.genpdopp_table = table,			\
+		.genpdopp_num = ARRAY_SIZE(table),		\
+		.genpd_dev = &genpd_dev,			\
+	}
+
 enum {
 	P_XO,
 	P_GPLL0,
@@ -394,6 +410,11 @@ static const struct freq_tbl ftbl_gcc_camss_ahb_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table camss_ahb_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 40000000},
+	{RPM_SMD_CORNER_NORMAL, 80000000},
+};
+
 static struct clk_rcg2 camss_ahb_clk_src = {
 	.cmd_rcgr = 0x5a000,
 	.mnd_width = 8,
@@ -405,6 +426,7 @@ static struct clk_rcg2 camss_ahb_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(camss_ahb_genpdopp),
 	},
 };
 
@@ -435,6 +457,11 @@ static const struct freq_tbl ftbl_gcc_camss_csi0_1_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table camss_csi0_1_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 csi0_clk_src = {
 	.cmd_rcgr = 0x4e020,
 	.hid_width = 5,
@@ -445,6 +472,7 @@ static struct clk_rcg2 csi0_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(camss_csi0_1_genpdopp),
 	},
 };
 
@@ -458,6 +486,7 @@ static struct clk_rcg2 csi1_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(camss_csi0_1_genpdopp),
 	},
 };
@@ -476,6 +505,12 @@ static const struct freq_tbl ftbl_gcc_oxili_gfx3d_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table gfx3d_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 200000000},
+	{RPM_SMD_CORNER_NORMAL, 310000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 400000000},
+};
+
 static struct clk_rcg2 gfx3d_clk_src = {
 	.cmd_rcgr = 0x59000,
 	.hid_width = 5,
@@ -486,6 +521,7 @@ static struct clk_rcg2 gfx3d_clk_src = {
 		.parent_names = gcc_xo_gpll0a_gpll1_gpll2a,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gfx3d_genpdopp),
 	},
 };
 
@@ -503,6 +539,12 @@ static const struct freq_tbl ftbl_gcc_camss_vfe0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table vfe0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 160000000},
+	{RPM_SMD_CORNER_NORMAL, 320000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 465000000},
+};
+
 static struct clk_rcg2 vfe0_clk_src = {
 	.cmd_rcgr = 0x58000,
 	.hid_width = 5,
@@ -513,6 +555,7 @@ static struct clk_rcg2 vfe0_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll2,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(vfe0_genpdopp),
 	},
 };
 
@@ -522,6 +565,10 @@ static const struct freq_tbl ftbl_gcc_blsp1_qup1_6_i2c_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table qup1_6_i2c_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 50000000},
+};
+
 static struct clk_rcg2 blsp1_qup1_i2c_apps_clk_src = {
 	.cmd_rcgr = 0x0200c,
 	.hid_width = 5,
@@ -532,6 +579,7 @@ static struct clk_rcg2 blsp1_qup1_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -550,6 +598,11 @@ static const struct freq_tbl ftbl_gcc_blsp1_qup1_6_spi_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table qup1_6_spi_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 25000000},
+	{RPM_SMD_CORNER_NORMAL, 50000000},
+};
+
 static struct clk_rcg2 blsp1_qup1_spi_apps_clk_src = {
 	.cmd_rcgr = 0x02024,
 	.mnd_width = 8,
@@ -561,6 +614,7 @@ static struct clk_rcg2 blsp1_qup1_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -574,6 +628,7 @@ static struct clk_rcg2 blsp1_qup2_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -588,6 +643,7 @@ static struct clk_rcg2 blsp1_qup2_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -601,6 +657,7 @@ static struct clk_rcg2 blsp1_qup3_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -615,6 +672,7 @@ static struct clk_rcg2 blsp1_qup3_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -628,6 +686,7 @@ static struct clk_rcg2 blsp1_qup4_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -642,6 +701,7 @@ static struct clk_rcg2 blsp1_qup4_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -655,6 +715,7 @@ static struct clk_rcg2 blsp1_qup5_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -669,6 +730,7 @@ static struct clk_rcg2 blsp1_qup5_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
@@ -682,6 +744,7 @@ static struct clk_rcg2 blsp1_qup6_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -696,6 +759,7 @@ static struct clk_rcg2 blsp1_qup6_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -718,6 +782,11 @@ static const struct freq_tbl ftbl_gcc_blsp1_uart1_6_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table uart1_2_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 32000000},
+	{RPM_SMD_CORNER_NORMAL, 64000000},
+};
+
 static struct clk_rcg2 blsp1_uart1_apps_clk_src = {
 	.cmd_rcgr = 0x02044,
 	.mnd_width = 16,
@@ -729,6 +798,7 @@ static struct clk_rcg2 blsp1_uart1_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(uart1_2_genpdopp),
 	},
 };
 
@@ -743,6 +813,7 @@ static struct clk_rcg2 blsp1_uart2_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(uart1_2_genpdopp),
 	},
 };
 
@@ -751,6 +822,10 @@ static const struct freq_tbl ftbl_gcc_camss_cci_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table cci_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 19200000},
+};
+
 static struct clk_rcg2 cci_clk_src = {
 	.cmd_rcgr = 0x51000,
 	.mnd_width = 8,
@@ -762,6 +837,7 @@ static struct clk_rcg2 cci_clk_src = {
 		.parent_names = gcc_xo_gpll0a,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(cci_genpdopp),
 	},
 };
 
@@ -771,6 +847,11 @@ static const struct freq_tbl ftbl_gcc_camss_gp0_1_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table gp0_1_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 camss_gp0_clk_src = {
 	.cmd_rcgr = 0x54000,
 	.mnd_width = 8,
@@ -782,6 +863,7 @@ static struct clk_rcg2 camss_gp0_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp0_1_genpdopp),
 	},
 };
 
@@ -796,6 +878,7 @@ static struct clk_rcg2 camss_gp1_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp0_1_genpdopp),
 	},
 };
 
@@ -806,6 +889,12 @@ static const struct freq_tbl ftbl_gcc_camss_jpeg0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table jpeg0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 133330000},
+	{RPM_SMD_CORNER_NORMAL, 266670000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 320000000},
+};
+
 static struct clk_rcg2 jpeg0_clk_src = {
 	.cmd_rcgr = 0x57000,
 	.hid_width = 5,
@@ -816,6 +905,7 @@ static struct clk_rcg2 jpeg0_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(jpeg0_genpdopp),
 	},
 };
 
@@ -826,6 +916,11 @@ static const struct freq_tbl ftbl_gcc_camss_mclk0_1_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table mclk0_1_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 24000000},
+	{RPM_SMD_CORNER_NORMAL, 66670000},
+};
+
 static struct clk_rcg2 mclk0_clk_src = {
 	.cmd_rcgr = 0x52000,
 	.mnd_width = 8,
@@ -837,6 +932,7 @@ static struct clk_rcg2 mclk0_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(mclk0_1_genpdopp),
 	},
 };
 
@@ -851,6 +947,7 @@ static struct clk_rcg2 mclk1_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(mclk0_1_genpdopp),
 	},
 };
 
@@ -873,6 +970,11 @@ static struct clk_rcg2 csi0phytimer_clk_src = {
 	},
 };
 
+static const struct genpdopp_table csi1phytimer_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 csi1phytimer_clk_src = {
 	.cmd_rcgr = 0x4f000,
 	.hid_width = 5,
@@ -883,6 +985,7 @@ static struct clk_rcg2 csi1phytimer_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(csi1phytimer_genpdopp),
 	},
 };
 
@@ -893,6 +996,12 @@ static const struct freq_tbl ftbl_gcc_camss_cpp_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table cpp_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 160000000},
+	{RPM_SMD_CORNER_NORMAL, 320000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 465000000},
+};
+
 static struct clk_rcg2 cpp_clk_src = {
 	.cmd_rcgr = 0x58018,
 	.hid_width = 5,
@@ -903,6 +1012,7 @@ static struct clk_rcg2 cpp_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll2,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(cpp_genpdopp),
 	},
 };
 
@@ -914,6 +1024,11 @@ static const struct freq_tbl ftbl_gcc_crypto_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table crypto_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 80000000},
+	{RPM_SMD_CORNER_NORMAL, 160000000},
+};
+
 static struct clk_rcg2 crypto_clk_src = {
 	.cmd_rcgr = 0x16004,
 	.hid_width = 5,
@@ -924,6 +1039,7 @@ static struct clk_rcg2 crypto_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(crypto_genpdopp),
 	},
 };
 
@@ -932,6 +1048,11 @@ static const struct freq_tbl ftbl_gcc_gp1_3_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table gp1_3_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 gp1_clk_src = {
 	.cmd_rcgr = 0x08004,
 	.mnd_width = 8,
@@ -943,6 +1064,7 @@ static struct clk_rcg2 gp1_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp1_3_genpdopp),
 	},
 };
 
@@ -957,6 +1079,7 @@ static struct clk_rcg2 gp2_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp1_3_genpdopp),
 	},
 };
 
@@ -971,9 +1094,15 @@ static struct clk_rcg2 gp3_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp1_3_genpdopp),
 	},
 };
 
+static const struct genpdopp_table byte0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 94400000},
+	{RPM_SMD_CORNER_NORMAL, 188500000},
+};
+
 static struct clk_rcg2 byte0_clk_src = {
 	.cmd_rcgr = 0x4d044,
 	.hid_width = 5,
@@ -984,6 +1113,7 @@ static struct clk_rcg2 byte0_clk_src = {
 		.num_parents = 3,
 		.ops = &clk_byte2_ops,
 		.flags = CLK_SET_RATE_PARENT,
+		POWER_PDOPP(byte0_genpdopp),
 	},
 };
 
@@ -992,6 +1122,10 @@ static const struct freq_tbl ftbl_gcc_mdss_esc0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table esc0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 19200000},
+};
+
 static struct clk_rcg2 esc0_clk_src = {
 	.cmd_rcgr = 0x4d05c,
 	.hid_width = 5,
@@ -1002,6 +1136,7 @@ static struct clk_rcg2 esc0_clk_src = {
 		.parent_names = gcc_xo_dsibyte,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(esc0_genpdopp),
 	},
 };
 
@@ -1017,6 +1152,12 @@ static const struct freq_tbl ftbl_gcc_mdss_mdp_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table mdp_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 160000000},
+	{RPM_SMD_CORNER_NORMAL, 266670000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 320000000},
+};
+
 static struct clk_rcg2 mdp_clk_src = {
 	.cmd_rcgr = 0x4d014,
 	.hid_width = 5,
@@ -1027,9 +1168,15 @@ static struct clk_rcg2 mdp_clk_src = {
 		.parent_names = gcc_xo_gpll0_dsiphy,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(mdp_genpdopp),
 	},
 };
 
+static const struct genpdopp_table pclk0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 150000000},
+	{RPM_SMD_CORNER_NORMAL, 250000000},
+};
+
 static struct clk_rcg2 pclk0_clk_src = {
 	.cmd_rcgr = 0x4d000,
 	.mnd_width = 8,
@@ -1041,6 +1188,7 @@ static struct clk_rcg2 pclk0_clk_src = {
 		.num_parents = 3,
 		.ops = &clk_pixel_ops,
 		.flags = CLK_SET_RATE_PARENT,
+		POWER_PDOPP(pclk0_genpdopp),
 	},
 };
 
@@ -1049,6 +1197,10 @@ static const struct freq_tbl ftbl_gcc_mdss_vsync_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table vsync_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 19200000},
+};
+
 static struct clk_rcg2 vsync_clk_src = {
 	.cmd_rcgr = 0x4d02c,
 	.hid_width = 5,
@@ -1059,6 +1211,7 @@ static struct clk_rcg2 vsync_clk_src = {
 		.parent_names = gcc_xo_gpll0a,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(vsync_genpdopp),
 	},
 };
 
@@ -1067,6 +1220,10 @@ static const struct freq_tbl ftbl_gcc_pdm2_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table pdm2_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 64000000},
+};
+
 static struct clk_rcg2 pdm2_clk_src = {
 	.cmd_rcgr = 0x44010,
 	.hid_width = 5,
@@ -1077,6 +1234,7 @@ static struct clk_rcg2 pdm2_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(pdm2_genpdopp),
 	},
 };
 
@@ -1091,6 +1249,11 @@ static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table sdcc1_2_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 50000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 sdcc1_apps_clk_src = {
 	.cmd_rcgr = 0x42004,
 	.mnd_width = 8,
@@ -1102,6 +1265,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_floor_ops,
+		POWER_PDOPP(sdcc1_2_genpdopp),
 	},
 };
 
@@ -1127,6 +1291,7 @@ static struct clk_rcg2 sdcc2_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_floor_ops,
+		POWER_PDOPP(sdcc1_2_genpdopp),
 	},
 };
 
@@ -1179,6 +1344,11 @@ static const struct freq_tbl ftbl_gcc_usb_hs_system_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table usb_hs_system_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 57140000},
+	{RPM_SMD_CORNER_NORMAL, 80000000},
+};
+
 static struct clk_rcg2 usb_hs_system_clk_src = {
 	.cmd_rcgr = 0x41010,
 	.hid_width = 5,
@@ -1189,6 +1359,7 @@ static struct clk_rcg2 usb_hs_system_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(usb_hs_system_genpdopp),
 	},
 };
 
@@ -1506,6 +1677,12 @@ static const struct freq_tbl ftbl_gcc_venus0_vcodec0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table vcodec0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 160000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 228570000},
+};
+
 static struct clk_rcg2 vcodec0_clk_src = {
 	.cmd_rcgr = 0x4C000,
 	.mnd_width = 8,
@@ -1517,6 +1694,7 @@ static struct clk_rcg2 vcodec0_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(vcodec0_genpdopp),
 	},
 };
 
@@ -3389,6 +3567,10 @@ static int gcc_msm8916_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	genpd_dev = dev;
+	mutex_init(&genpd_lock);
+	INIT_LIST_HEAD(&genpd_head);
+
 	return qcom_cc_probe(pdev, &gcc_msm8916_desc);
 }
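
A worked example of the aggregation from patch 1, using the gfx3d and
vfe0 tables above; the consumer device and the clock lookup names are
hypothetical and for illustration only.

#include <linux/clk.h>
#include <linux/err.h>

static void example_vote_walkthrough(struct device *dev)
{
	struct clk *gfx3d = devm_clk_get(dev, "gfx3d_clk_src");
	struct clk *vfe0 = devm_clk_get(dev, "vfe0_clk_src");

	if (IS_ERR(gfx3d) || IS_ERR(vfe0))
		return;

	/* 310 MHz is within gfx3d's NORMAL ceiling: votes RPM_SMD_CORNER_NORMAL */
	clk_set_rate(gfx3d, 310000000);

	/* 465 MHz needs vfe0's SUPER_TURBO entry: votes RPM_SMD_CORNER_SUPER_TURBO */
	clk_set_rate(vfe0, 465000000);

	/* VDDCX is now held at the highest outstanding vote, SUPER_TURBO (6) */

	/* Dropping vfe0 to 160 MHz (SVS_SOC) re-aggregates down to gfx3d's NORMAL */
	clk_set_rate(vfe0, 160000000);
}

After each step, /sys/kernel/debug/pm_genpd/vddcx/perf_state should
reflect the highest outstanding corner.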