From patchwork Fri Aug  5 07:49:32 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937034
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org,
    mturquette@baylibre.com, sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
    linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 1/4] clk: Aggregate power operation in clock controller
Date: Fri,  5 Aug 2022 15:49:32 +0800
Message-Id: <20220805074935.1158098-2-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>
List-ID: <linux-clk.vger.kernel.org>

Add a power domain operation per clk frequency. Some hardware supports
dynamic voltage and frequency scaling in the clock controller itself,
independent of any clock consumer device, while in the driver model a
power domain is operated per device. If power domain states are voted
per clk rather than per device, the votes need to be aggregated in the
clock framework before a single request is sent to the power framework.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
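For reviewers, here is a minimal provider-side sketch of how the new
interface is meant to be wired up. The clock name, ops and the
rate/pstate values are illustrative only; the real usage is in patch
4/4:

    /* Shared, per-controller genpd bookkeeping */
    static struct device *genpd_dev;    /* set at controller probe time */
    static struct mutex genpd_lock;     /* protects the pstate lists */
    static struct list_head genpd_head; /* sorted list of clkpstate_node */

    /* Rates up to 50 MHz need pstate 1; up to 100 MHz need pstate 2 */
    static const struct genpdopp_table foo_genpdopp[] = {
    	{1,  50000000},
    	{2, 100000000},
    };

    static struct clk_init_data foo_init = {
    	.name = "foo_clk_src",
    	.ops = &foo_ops,		/* the provider's normal clk_ops */
    	.power_magic = CLK_POWER_MAGIC,	/* marks .power as valid */
    	.power = &(struct clk_power_data) {
    		.genpd_dev = &genpd_dev,
    		.genpd_lock = &genpd_lock,
    		.genpd_head = &genpd_head,
    		.genpdopp_table = foo_genpdopp,
    		.genpdopp_num = ARRAY_SIZE(foo_genpdopp),
    	},
    };

With this in place, clk_core_prepare() and clk_core_set_rate_nolock()
vote the matching performance state and clk_core_unprepare() drops the
vote, with the framework aggregating the highest vote per domain.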
 drivers/clk/clk.c            | 212 ++++++++++++++++++++++++++++++++++-
 include/linux/clk-provider.h |  62 ++++++++++
 2 files changed, 272 insertions(+), 2 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index f00d4c1158d7..0ab79b9ebefd 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -18,6 +18,7 @@
 #include <linux/of.h>
 #include <linux/device.h>
 #include <linux/init.h>
+#include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
 #include <linux/sched.h>
 #include <linux/clkdev.h>
@@ -89,6 +90,7 @@ struct clk_core {
 	struct hlist_node	debug_node;
 #endif
 	struct kref		ref;
+	struct clk_power_data	*power;
 };
 
 #define CREATE_TRACE_POINTS
@@ -812,6 +814,161 @@ int clk_rate_exclusive_get(struct clk *clk)
 }
 EXPORT_SYMBOL_GPL(clk_rate_exclusive_get);
 
+static void clk_unvote_genpd(struct clk_core *core)
+{
+	struct clkpstate_node *ps_node = NULL;
+	struct clk_power_data *power = core->power;
+	unsigned int pstate = 0;
+
+	mutex_lock(power->genpd_lock);
+	/*
+	 * Do not free the node: the number of performance states is
+	 * limited, and we will revisit it later.
+	 */
+	list_del_init(&power->genpd_list);
+	power->genpd_pstate = 0;
+
+	/* Find and set the highest pstate */
+	list_for_each_entry_reverse(ps_node, power->genpd_head, genpd_list) {
+		if (!list_empty(&ps_node->genpd_pstate_head)) {
+			pstate = ps_node->pstate;
+			break;
+		}
+	}
+
+	pr_debug("%s: clk %s unvote genpd set genpd perf state %d\n",
+		 __func__, core->name, pstate);
+	dev_pm_genpd_set_performance_state(*power->genpd_dev, pstate);
+
+	mutex_unlock(power->genpd_lock);
+}
+
+static int clk_vote_genpd(struct clk_core *core, unsigned long rate)
+{
+	struct clkpstate_node *new_ps_node, *ps_node, *pre_ps_node = NULL;
+	unsigned int cnt, pstate = 0;
+	struct list_head *insert_pos;
+	int ret = 0;
+	struct clk_power_data *power = core->power;
+	const struct genpdopp_table *tbl = power->genpdopp_table;
+
+	/* Find the opp pstate for the required rate */
+	for (cnt = 0; cnt < power->genpdopp_num; cnt++, tbl++) {
+		if (rate <= tbl->ceiling_rate) {
+			pstate = tbl->pstate;
+			break;
+		}
+	}
+
+	if (!pstate && cnt == power->genpdopp_num) {
+		pr_err("%s: clk %s rate %lu not supported by genpd\n",
+		       __func__, core->name, rate);
+		return -EINVAL;
+	}
+
+	pr_debug("%s: clk %s votes perf state %d\n",
+		 __func__, core->name, pstate);
+	mutex_lock(power->genpd_lock);
+	if (list_empty(power->genpd_head)) {
+		insert_pos = power->genpd_head;
+		goto new_pstate_node;
+	}
+
+	/* If this clk power is already in some perf state */
+	if (!list_empty(&power->genpd_list)) {
+		if (pstate == power->genpd_pstate) {
+			mutex_unlock(power->genpd_lock);
+			return 0;
+		}
+		list_del_init(&power->genpd_list);
+	}
+
+	/* Search for the genpd pstate node that matches the pstate requirement */
+	list_for_each_entry(ps_node, power->genpd_head, genpd_list) {
+		if (ps_node->pstate == pstate) {
+			new_ps_node = ps_node;
+			list_add(&power->genpd_list,
+				 &new_ps_node->genpd_pstate_head);
+			goto linked_into_pstate;
+		}
+		if (ps_node->pstate > pstate) {
+			if (pre_ps_node != NULL)
+				insert_pos = &pre_ps_node->genpd_list;
+			else
+				insert_pos = power->genpd_head;
+			goto new_pstate_node;
+		}
+		pre_ps_node = ps_node;
+	}
+	/* Add the new genpd pstate node at the end */
+	insert_pos = &pre_ps_node->genpd_list;
+
+new_pstate_node:
+	new_ps_node = kmalloc(sizeof(struct clkpstate_node), GFP_KERNEL);
+	if (new_ps_node == NULL) {
+		mutex_unlock(power->genpd_lock);
+		return -ENOMEM;
+	}
+
+	/* Link this pstate node into the genpd pstate list */
+	INIT_LIST_HEAD(&new_ps_node->genpd_list);
+	INIT_LIST_HEAD(&new_ps_node->genpd_pstate_head);
+	new_ps_node->pstate = pstate;
+	list_add(&new_ps_node->genpd_list, insert_pos);
+	list_add(&power->genpd_list, &new_ps_node->genpd_pstate_head);
+
+	/* Find and set the highest pstate */
+	list_for_each_entry_reverse(ps_node, power->genpd_head, genpd_list) {
+		if (!list_empty(&ps_node->genpd_pstate_head)) {
+			pr_debug("%s: genpd set perf state %d for clk %s\n",
+				 __func__, pstate, core->name);
+			ret = dev_pm_genpd_set_performance_state(
+					*power->genpd_dev, ps_node->pstate);
+			if (ret) {
+				/* No need to free new_ps_node as it's empty */
+				mutex_unlock(power->genpd_lock);
+				pr_err("%s: failed to set genpd opp for clk %s\n",
+				       __func__, core->name);
+				return ret;
+			}
+			break;
+		}
+	}
+
+linked_into_pstate:
+	power->genpd_pstate = pstate;
+	mutex_unlock(power->genpd_lock);
+	return ret;
+}
+
+static void clk_unvote_power(struct clk_core *core)
+{
+	struct clk_power_data *power = core->power;
+
+	if (!core->power)
+		return;
+
+	/* regulator to be added here in the future */
+
+	if (power->genpd_dev)
+		clk_unvote_genpd(core);
+}
+
+static int clk_vote_power(struct clk_core *core, unsigned long rate)
+{
+	struct clk_power_data *power = core->power;
+	int ret = 0;
+
+	if (!core->power)
+		return 0;
+
+	/* regulator to be added here in the future */
+
+	if (power->genpd_dev)
+		ret = clk_vote_genpd(core, rate);
+
+	return ret;
+}
+
 static void clk_core_unprepare(struct clk_core *core)
 {
 	lockdep_assert_held(&prepare_lock);
@@ -840,6 +997,8 @@ static void clk_core_unprepare(struct clk_core *core)
 	if (core->ops->unprepare)
 		core->ops->unprepare(core->hw);
 
+	clk_unvote_power(core);
+
 	clk_pm_runtime_put(core);
 	trace_clk_unprepare_complete(core);
 
@@ -887,6 +1046,10 @@ static int clk_core_prepare(struct clk_core *core)
 	if (ret)
 		return ret;
 
+	ret = clk_vote_power(core, core->rate);
+	if (ret)
+		return ret;
+
 	ret = clk_core_prepare(core->parent);
 	if (ret)
 		goto runtime_put;
@@ -2189,7 +2352,7 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 {
 	struct clk_core *top, *fail_clk;
 	unsigned long rate;
-	int ret = 0;
+	int ret = 0, post_set_power = 0;
 
 	if (!core)
 		return 0;
@@ -2223,10 +2386,21 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 		goto err;
 	}
 
+	if (rate > core->rate) {
+		ret = clk_vote_power(core, rate);
+		if (ret)
+			goto err;
+	} else {
+		post_set_power = 1;
+	}
+
 	/* change the rates */
 	clk_change_rate(top);
 
 	core->req_rate = req_rate;
+
+	if (post_set_power)
+		ret = clk_vote_power(core, rate);
 err:
 	clk_pm_runtime_put(core);
 
@@ -3905,7 +4079,8 @@ static void clk_core_free_parent_map(struct clk_core *core)
 static struct clk *
 __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
 {
-	int ret;
+	int ret, cnt;
+	unsigned long rate;
 	struct clk_core *core;
 	const struct clk_init_data *init = hw->init;
 
@@ -3946,6 +4121,38 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
 	core->min_rate = 0;
 	core->max_rate = ULONG_MAX;
 
+	if (init->power && init->power_magic == CLK_POWER_MAGIC) {
+		struct clk_power_data *power = init->power;
+		const struct genpdopp_table *ptable = power->genpdopp_table;
+
+		power->core = core;
+		if (power->genpd_dev) {
+			if (!power->genpd_lock || !power->genpd_head ||
+			    !power->genpdopp_table || !power->genpdopp_num) {
+				pr_err("%s: invalid power domain for clk %s\n",
+				       __func__, core->name);
+				goto skip_clk_power;
+			}
+		}
+		for (cnt = 0; cnt < power->genpdopp_num - 1; cnt++) {
+			rate = ptable->ceiling_rate;
+			ptable++;
+			if (rate >= ptable->ceiling_rate) {
+				pr_err("%s: invalid ascending rate for clk %s\n",
+				       __func__, core->name);
+				ret = -EINVAL;
+				goto skip_clk_power;
+			}
+		}
+		core->power = kmalloc(sizeof(*power), GFP_KERNEL);
+		if (!core->power)
+			goto skip_clk_power;
+
+		memcpy(core->power, power, sizeof(*power));
+		INIT_LIST_HEAD(&core->power->genpd_list);
+	}
+
+skip_clk_power:
 	ret = clk_core_populate_parent_map(core, init);
 	if (ret)
 		goto fail_parents;
@@ -3978,6 +4185,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
 fail_create_clk:
 	clk_core_free_parent_map(core);
 fail_parents:
+	kfree(core->power);
 fail_ops:
 	kfree_const(core->name);
 fail_name:
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index c10dc4c659e2..bcf62fb0a6a1 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -268,12 +268,71 @@ struct clk_parent_data {
 	int			index;
 };
 
+/**
+ * struct genpdopp_table - opp pstate and clk rate mapping table
+ *
+ * @pstate:		power domain performance state
+ * @ceiling_rate:	the maximum clock rate this pstate supports
+ */
+struct genpdopp_table {
+	unsigned int	pstate;
+	unsigned long	ceiling_rate;
+};
+
+/**
+ * struct clkpstate_node - opp pstate node holding the list of clks that
+ *			   depend on a specific performance state. The nodes
+ *			   are ordered from low to high pstate, headed by
+ *			   genpd_head.
+ *
+ * @genpd_list:		list node linked into a genpd list
+ * @genpd_pstate_head:	list head for the clks that depend on this pstate
+ * @pstate:		power domain performance state
+ */
+struct clkpstate_node {
+	struct list_head	genpd_list;
+	struct list_head	genpd_pstate_head;
+	unsigned int		pstate;
+};
+
+/**
+ * struct clk_power_data - holds power data that's common to all clocks and is
+ * shared between the clock provider and the common clock framework.
+ *
+ * @genpd_list:	genpd consumer node of this clk, linked into one of the genpd
+ *		pstate consumer lists headed by genpd_head when the clk rate
+ *		is set to a genpd opp pstate.
+ * @genpd_head:	list head holding the genpd performance state heads, which in
+ *		turn hold the genpd consumers at the different opp pstates.
+ * @genpd_lock: mutex that protects genpd list operations
+ * @genpd_dev:	device bound to the power domain the clk is in. It is the
+ *		clock controller device by default, or a virtual device if
+ *		there are multiple power domains for the controller device.
+ * @genpdopp_table: genpd opp pstate and clk rate mapping table. The rates
+ *		must be listed in strictly ascending order.
+ * @genpdopp_num: number of genpd opp pstate table entries
+ * @genpd_pstate: current genpd opp pstate this clk requires
+ */
+struct clk_power_data {
+	struct list_head	genpd_list;
+	struct list_head	*genpd_head;
+	struct mutex		*genpd_lock;
+	struct device		**genpd_dev;
+	const struct genpdopp_table *genpdopp_table;
+	unsigned int		genpdopp_num;
+	unsigned int		genpd_pstate;
+	struct clk_core		*core;
+};
+
 /**
  * struct clk_init_data - holds init data that's common to all clocks and is
  * shared between the clock provider and the common clock framework.
  *
  * @name: clock name
  * @ops: operations this clock supports
+ * @power: power data that this clock operates on
+ * @power_magic: magic number indicating that the power data is valid; used
+ *		 to sanity check against non-NULL but invalid power data.
  * @parent_names: array of string names for all possible parents
  * @parent_data: array of parent data for all possible parents (when some
  *               parents are external to the clk controller)
@@ -282,9 +341,12 @@ struct clk_parent_data {
  * @num_parents: number of possible parents
  * @flags: framework-level hints and quirks
  */
+#define CLK_POWER_MAGIC 0x5c5c6969
 struct clk_init_data {
 	const char		*name;
+	int			power_magic;
 	const struct clk_ops	*ops;
+	struct clk_power_data	*power;
 	/* Only one of the following three should be assigned */
 	const char		* const *parent_names;
 	const struct clk_parent_data	*parent_data;

From patchwork Fri Aug  5 07:49:33 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937035
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org,
    mturquette@baylibre.com, sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
    linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 2/4] soc: qcom: rpmpd: Add corner power-domains states
Date: Fri,  5 Aug 2022 15:49:33 +0800
Message-Id: <20220805074935.1158098-3-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Some SoCs, such as MSM8916 and MSM8939, use corners instead of levels
in the RPM regulator. Add these power domain state values so that
devices can vote for them.

Note that the values are shifted by 1 when converting from the
regulator usage in Qualcomm's Linux 3.18 tree to the power domain
usage here, because corners were never cleanly integrated into the
3.18 regulator framework. For example, RPM_REGULATOR_CORNER_RETENTION
is 2 in 3.18 while RPM_SMD_CORNER_RETENTION is 1.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
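As a porting aid, the intended conversion looks like this. The
RETENTION pair is taken from the commit message above; the other rows
assume the same shift of 1 applies across the whole table:

    3.18 regulator corner                  power domain corner here
    RPM_REGULATOR_CORNER_RETENTION (2) ->  RPM_SMD_CORNER_RETENTION (1)
    RPM_REGULATOR_CORNER_SVS_SOC (4)   ->  RPM_SMD_CORNER_SVS_SOC (3)
    RPM_REGULATOR_CORNER_TURBO (6)     ->  RPM_SMD_CORNER_TURBO (5)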
 include/dt-bindings/power/qcom-rpmpd.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/dt-bindings/power/qcom-rpmpd.h b/include/dt-bindings/power/qcom-rpmpd.h
index 6cce5b7aa940..f778dbbf083d 100644
--- a/include/dt-bindings/power/qcom-rpmpd.h
+++ b/include/dt-bindings/power/qcom-rpmpd.h
@@ -297,4 +297,12 @@
 #define RPM_SMD_LEVEL_TURBO_HIGH	448
 #define RPM_SMD_LEVEL_BINNING		512
 
+/* RPM SMD Power Domain performance levels in the regulator corner method */
+#define RPM_SMD_CORNER_RETENTION	1
+#define RPM_SMD_CORNER_SVS_KRAIT	2
+#define RPM_SMD_CORNER_SVS_SOC		3
+#define RPM_SMD_CORNER_NORMAL		4
+#define RPM_SMD_CORNER_TURBO		5
+#define RPM_SMD_CORNER_SUPER_TURBO	6
+
 #endif

From patchwork Fri Aug  5 07:49:34 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937036
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org,
    mturquette@baylibre.com, sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
    linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 3/4] arm64: dts: qcom: add power domain for clk controller
Date: Fri,  5 Aug 2022 15:49:34 +0800
Message-Id: <20220805074935.1158098-4-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Add the RPM power domain to the clock controller node so that the
clock controller can use it for dynamic voltage and frequency scaling.
Also replace the raw RPM power domain values with the corner
definitions.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
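After this patch the gcc node carries a domain the clock driver can
vote on (excerpt from the diff below):

    gcc: clock-controller@1800000 {
    	...
    	power-domains = <&rpmpd MSM8916_VDDCX>;
    	power-domain-names = "vdd";
    };

The rpmpd OPP table now references the corner defines from patch 2/4,
and patch 4/4 votes these corners through the genpdopp tables.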
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 05472510e29d..fdb32b3a23e8 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -312,22 +312,22 @@ rpmpd_opp_table: opp-table {
 			compatible = "operating-points-v2";
 
 			rpmpd_opp_ret: opp1 {
-				opp-level = <1>;
+				opp-level = <RPM_SMD_CORNER_RETENTION>;
 			};
 
 			rpmpd_opp_svs_krait: opp2 {
-				opp-level = <2>;
+				opp-level = <RPM_SMD_CORNER_SVS_KRAIT>;
 			};
 
 			rpmpd_opp_svs_soc: opp3 {
-				opp-level = <3>;
+				opp-level = <RPM_SMD_CORNER_SVS_SOC>;
 			};
 
 			rpmpd_opp_nom: opp4 {
-				opp-level = <4>;
+				opp-level = <RPM_SMD_CORNER_NORMAL>;
 			};
 
 			rpmpd_opp_turbo: opp5 {
-				opp-level = <5>;
+				opp-level = <RPM_SMD_CORNER_TURBO>;
 			};
 
 			rpmpd_opp_super_turbo: opp6 {
-				opp-level = <6>;
+				opp-level = <RPM_SMD_CORNER_SUPER_TURBO>;
 			};
 		};
 	};
@@ -933,6 +933,8 @@ gcc: clock-controller@1800000 {
 			#clock-cells = <1>;
 			#reset-cells = <1>;
 			#power-domain-cells = <1>;
+			power-domains = <&rpmpd MSM8916_VDDCX>;
+			power-domain-names = "vdd";
 			reg = <0x01800000 0x80000>;
 		};

From patchwork Fri Aug  5 07:49:35 2022
X-Patchwork-Submitter: Jun Nie <jun.nie@linaro.org>
X-Patchwork-Id: 12937037
From: Jun Nie <jun.nie@linaro.org>
To: abel.vesa@linaro.org, bjorn.andersson@linaro.org,
    mturquette@baylibre.com, sboyd@kernel.org
Cc: agross@kernel.org, shawn.guo@linaro.org, bryan.odonoghue@linaro.org,
    linux-clk@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    devicetree@vger.kernel.org, Jun Nie <jun.nie@linaro.org>
Subject: [PATCH 4/4] clk: qcom: gcc-msm8916: Add power domain data
Date: Fri,  5 Aug 2022 15:49:35 +0800
Message-Id: <20220805074935.1158098-5-jun.nie@linaro.org>
In-Reply-To: <20220805074935.1158098-1-jun.nie@linaro.org>
References: <20220805074935.1158098-1-jun.nie@linaro.org>

Add tables mapping power domain performance states to ceiling
frequencies so that the optimal performance point can be voted for by
the clks within the clock controller. This is not related to the clk
consumer devices.

Run this command to check the genpd performance state:
cat /sys/kernel/debug/pm_genpd/vddcx/perf_state

Signed-off-by: Jun Nie <jun.nie@linaro.org>
---
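As a worked example of the vote path, using the rates from the
gfx3d_genpdopp table below (the consumer calls are illustrative):

    /* 300 MHz falls under the 310 MHz ceiling, so the controller
     * votes RPM_SMD_CORNER_NORMAL on VDDCX for this clk. */
    clk_set_rate(gfx3d_clk, 300000000);

    /* 400 MHz exceeds the NORMAL ceiling and lands in the 400 MHz
     * bucket, raising the vote to RPM_SMD_CORNER_SUPER_TURBO. */
    clk_set_rate(gfx3d_clk, 400000000);

The framework then takes the highest pstate across all voting clks in
the controller before calling dev_pm_genpd_set_performance_state().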
 drivers/clk/qcom/gcc-msm8916.c | 182 +++++++++++++++++++++++++++++++++
 1 file changed, 182 insertions(+)

diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
index 17e4a5a2a9fd..b42e39688a28 100644
--- a/drivers/clk/qcom/gcc-msm8916.c
+++ b/drivers/clk/qcom/gcc-msm8916.c
@@ -13,8 +13,10 @@
 #include <linux/of_device.h>
 #include <linux/clk-provider.h>
 #include <linux/regmap.h>
+#include <linux/pm_domain.h>
 #include <linux/reset-controller.h>
 
+#include <dt-bindings/power/qcom-rpmpd.h>
 #include <dt-bindings/clock/qcom,gcc-msm8916.h>
 
 #include "common.h"
@@ -25,6 +27,20 @@
 #include "reset.h"
 #include "gdsc.h"
 
+static struct device *genpd_dev;
+static struct mutex genpd_lock;
+static struct list_head genpd_head;
+
+#define POWER_PDOPP(table)						\
+	.power_magic = CLK_POWER_MAGIC,					\
+	.power = &(struct clk_power_data) {				\
+		.genpd_head = &genpd_head,				\
+		.genpd_lock = &genpd_lock,				\
+		.genpdopp_table = table,				\
+		.genpdopp_num = ARRAY_SIZE(table),			\
+		.genpd_dev = &genpd_dev,				\
+	}
+
 enum {
 	P_XO,
 	P_GPLL0,
@@ -394,6 +410,11 @@ static const struct freq_tbl ftbl_gcc_camss_ahb_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table camss_ahb_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 40000000},
+	{RPM_SMD_CORNER_NORMAL, 80000000},
+};
+
 static struct clk_rcg2 camss_ahb_clk_src = {
 	.cmd_rcgr = 0x5a000,
 	.mnd_width = 8,
@@ -405,6 +426,7 @@ static struct clk_rcg2 camss_ahb_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(camss_ahb_genpdopp),
 	},
 };
 
@@ -435,6 +457,11 @@ static const struct freq_tbl ftbl_gcc_camss_csi0_1_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table camss_csi0_1_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 csi0_clk_src = {
 	.cmd_rcgr = 0x4e020,
 	.hid_width = 5,
@@ -445,6 +472,7 @@ static struct clk_rcg2 csi0_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(camss_csi0_1_genpdopp),
 	},
 };
 
@@ -458,6 +486,7 @@ static struct clk_rcg2 csi1_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(camss_csi0_1_genpdopp),
 	},
 };
 
@@ -476,6 +505,12 @@ static const struct freq_tbl ftbl_gcc_oxili_gfx3d_clk[] = {
 	{ }
 };
+
+static const struct genpdopp_table gfx3d_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 200000000},
+	{RPM_SMD_CORNER_NORMAL, 310000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 400000000},
+};
 
 static struct clk_rcg2 gfx3d_clk_src = {
 	.cmd_rcgr = 0x59000,
 	.hid_width = 5,
@@ -486,6 +521,7 @@ static struct clk_rcg2 gfx3d_clk_src = {
 		.parent_names = gcc_xo_gpll0a_gpll1_gpll2a,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gfx3d_genpdopp),
 	},
 };
 
@@ -503,6 +539,12 @@ static const struct freq_tbl ftbl_gcc_camss_vfe0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table vfe0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 160000000},
+	{RPM_SMD_CORNER_NORMAL, 320000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 465000000},
+};
+
 static struct clk_rcg2 vfe0_clk_src = {
 	.cmd_rcgr = 0x58000,
 	.hid_width = 5,
@@ -513,6 +555,7 @@ static struct clk_rcg2 vfe0_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll2,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(vfe0_genpdopp),
 	},
 };
 
@@ -522,6 +565,10 @@ static const struct freq_tbl ftbl_gcc_blsp1_qup1_6_i2c_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table qup1_6_i2c_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 50000000},
+};
+
 static struct clk_rcg2 blsp1_qup1_i2c_apps_clk_src = {
 	.cmd_rcgr = 0x0200c,
 	.hid_width = 5,
@@ -532,6 +579,7 @@ static struct clk_rcg2 blsp1_qup1_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -550,6 +598,11 @@ static const struct freq_tbl ftbl_gcc_blsp1_qup1_6_spi_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table qup1_6_spi_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 25000000},
+	{RPM_SMD_CORNER_NORMAL, 50000000},
+};
+
 static struct clk_rcg2 blsp1_qup1_spi_apps_clk_src = {
 	.cmd_rcgr = 0x02024,
 	.mnd_width = 8,
@@ -561,6 +614,7 @@ static struct clk_rcg2 blsp1_qup1_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -574,6 +628,7 @@ static struct clk_rcg2 blsp1_qup2_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -588,6 +643,7 @@ static struct clk_rcg2 blsp1_qup2_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -601,6 +657,7 @@ static struct clk_rcg2 blsp1_qup3_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -615,6 +672,7 @@ static struct clk_rcg2 blsp1_qup3_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -628,6 +686,7 @@ static struct clk_rcg2 blsp1_qup4_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -642,6 +701,7 @@ static struct clk_rcg2 blsp1_qup4_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -655,6 +715,7 @@ static struct clk_rcg2 blsp1_qup5_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -669,6 +730,7 @@ static struct clk_rcg2 blsp1_qup5_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -682,6 +744,7 @@ static struct clk_rcg2 blsp1_qup6_i2c_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_i2c_genpdopp),
 	},
 };
 
@@ -696,6 +759,7 @@ static struct clk_rcg2 blsp1_qup6_spi_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(qup1_6_spi_genpdopp),
 	},
 };
 
@@ -718,6 +782,11 @@ static const struct freq_tbl ftbl_gcc_blsp1_uart1_6_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table uart1_2_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 32000000},
+	{RPM_SMD_CORNER_NORMAL, 64000000},
+};
+
 static struct clk_rcg2 blsp1_uart1_apps_clk_src = {
 	.cmd_rcgr = 0x02044,
 	.mnd_width = 16,
@@ -729,6 +798,7 @@ static struct clk_rcg2 blsp1_uart1_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(uart1_2_genpdopp),
 	},
 };
 
@@ -743,6 +813,7 @@ static struct clk_rcg2 blsp1_uart2_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(uart1_2_genpdopp),
 	},
 };
 
@@ -751,6 +822,10 @@ static const struct freq_tbl ftbl_gcc_camss_cci_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table cci_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 19200000},
+};
+
 static struct clk_rcg2 cci_clk_src = {
 	.cmd_rcgr = 0x51000,
 	.mnd_width = 8,
@@ -762,6 +837,7 @@ static struct clk_rcg2 cci_clk_src = {
 		.parent_names = gcc_xo_gpll0a,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(cci_genpdopp),
 	},
 };
 
@@ -771,6 +847,11 @@ static const struct freq_tbl ftbl_gcc_camss_gp0_1_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table gp0_1_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 camss_gp0_clk_src = {
 	.cmd_rcgr = 0x54000,
 	.mnd_width = 8,
@@ -782,6 +863,7 @@ static struct clk_rcg2 camss_gp0_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp0_1_genpdopp),
 	},
 };
 
@@ -796,6 +878,7 @@ static struct clk_rcg2 camss_gp1_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp0_1_genpdopp),
 	},
 };
 
@@ -806,6 +889,12 @@ static const struct freq_tbl ftbl_gcc_camss_jpeg0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table jpeg0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 133330000},
+	{RPM_SMD_CORNER_NORMAL, 266670000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 320000000},
+};
+
 static struct clk_rcg2 jpeg0_clk_src = {
 	.cmd_rcgr = 0x57000,
 	.hid_width = 5,
@@ -816,6 +905,7 @@ static struct clk_rcg2 jpeg0_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(jpeg0_genpdopp),
 	},
 };
 
@@ -826,6 +916,11 @@ static const struct freq_tbl ftbl_gcc_camss_mclk0_1_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table mclk0_1_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 24000000},
+	{RPM_SMD_CORNER_NORMAL, 66670000},
+};
+
 static struct clk_rcg2 mclk0_clk_src = {
 	.cmd_rcgr = 0x52000,
 	.mnd_width = 8,
@@ -837,6 +932,7 @@ static struct clk_rcg2 mclk0_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(mclk0_1_genpdopp),
 	},
 };
 
@@ -851,6 +947,7 @@ static struct clk_rcg2 mclk1_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 4,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(mclk0_1_genpdopp),
 	},
 };
 
@@ -873,6 +970,11 @@ static struct clk_rcg2 csi0phytimer_clk_src = {
 	},
 };
 
+static const struct genpdopp_table csi1phytimer_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 csi1phytimer_clk_src = {
 	.cmd_rcgr = 0x4f000,
 	.hid_width = 5,
@@ -883,6 +985,7 @@ static struct clk_rcg2 csi1phytimer_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(csi1phytimer_genpdopp),
 	},
 };
 
@@ -893,6 +996,12 @@ static const struct freq_tbl ftbl_gcc_camss_cpp_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table cpp_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 160000000},
+	{RPM_SMD_CORNER_NORMAL, 320000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 465000000},
+};
+
 static struct clk_rcg2 cpp_clk_src = {
 	.cmd_rcgr = 0x58018,
 	.hid_width = 5,
@@ -903,6 +1012,7 @@ static struct clk_rcg2 cpp_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll2,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(cpp_genpdopp),
 	},
 };
 
@@ -914,6 +1024,11 @@ static const struct freq_tbl ftbl_gcc_crypto_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table crypto_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 80000000},
+	{RPM_SMD_CORNER_NORMAL, 160000000},
+};
+
 static struct clk_rcg2 crypto_clk_src = {
 	.cmd_rcgr = 0x16004,
 	.hid_width = 5,
@@ -924,6 +1039,7 @@ static struct clk_rcg2 crypto_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(crypto_genpdopp),
 	},
 };
 
@@ -932,6 +1048,11 @@ static const struct freq_tbl ftbl_gcc_gp1_3_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table gp1_3_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 gp1_clk_src = {
 	.cmd_rcgr = 0x08004,
 	.mnd_width = 8,
@@ -943,6 +1064,7 @@ static struct clk_rcg2 gp1_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp1_3_genpdopp),
 	},
 };
 
@@ -957,6 +1079,7 @@ static struct clk_rcg2 gp2_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp1_3_genpdopp),
 	},
 };
 
@@ -971,9 +1094,15 @@ static struct clk_rcg2 gp3_clk_src = {
 		.parent_names = gcc_xo_gpll0_gpll1a_sleep,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(gp1_3_genpdopp),
 	},
 };
 
+static const struct genpdopp_table byte0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 94400000},
+	{RPM_SMD_CORNER_NORMAL, 188500000},
+};
+
 static struct clk_rcg2 byte0_clk_src = {
 	.cmd_rcgr = 0x4d044,
 	.hid_width = 5,
@@ -984,6 +1113,7 @@ static struct clk_rcg2 byte0_clk_src = {
 		.num_parents = 3,
 		.ops = &clk_byte2_ops,
 		.flags = CLK_SET_RATE_PARENT,
+		POWER_PDOPP(byte0_genpdopp),
 	},
 };
 
@@ -992,6 +1122,10 @@ static const struct freq_tbl ftbl_gcc_mdss_esc0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table esc0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 19200000},
+};
+
 static struct clk_rcg2 esc0_clk_src = {
 	.cmd_rcgr = 0x4d05c,
 	.hid_width = 5,
@@ -1002,6 +1136,7 @@ static struct clk_rcg2 esc0_clk_src = {
 		.parent_names = gcc_xo_dsibyte,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(esc0_genpdopp),
 	},
 };
 
@@ -1017,6 +1152,12 @@ static const struct freq_tbl ftbl_gcc_mdss_mdp_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table mdp_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 160000000},
+	{RPM_SMD_CORNER_NORMAL, 266670000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 320000000},
+};
+
 static struct clk_rcg2 mdp_clk_src = {
 	.cmd_rcgr = 0x4d014,
 	.hid_width = 5,
@@ -1027,9 +1168,15 @@ static struct clk_rcg2 mdp_clk_src = {
 		.parent_names = gcc_xo_gpll0_dsiphy,
 		.num_parents = 3,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(mdp_genpdopp),
 	},
 };
 
+static const struct genpdopp_table pclk0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 150000000},
+	{RPM_SMD_CORNER_NORMAL, 250000000},
+};
+
 static struct clk_rcg2 pclk0_clk_src = {
 	.cmd_rcgr = 0x4d000,
 	.mnd_width = 8,
@@ -1041,6 +1188,7 @@ static struct clk_rcg2 pclk0_clk_src = {
 		.num_parents = 3,
 		.ops = &clk_pixel_ops,
 		.flags = CLK_SET_RATE_PARENT,
+		POWER_PDOPP(pclk0_genpdopp),
 	},
 };
 
@@ -1049,6 +1197,10 @@ static const struct freq_tbl ftbl_gcc_mdss_vsync_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table vsync_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 19200000},
+};
+
 static struct clk_rcg2 vsync_clk_src = {
 	.cmd_rcgr = 0x4d02c,
 	.hid_width = 5,
@@ -1059,6 +1211,7 @@ static struct clk_rcg2 vsync_clk_src = {
 		.parent_names = gcc_xo_gpll0a,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(vsync_genpdopp),
 	},
 };
 
@@ -1067,6 +1220,10 @@ static const struct freq_tbl ftbl_gcc_pdm2_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table pdm2_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 64000000},
+};
+
 static struct clk_rcg2 pdm2_clk_src = {
 	.cmd_rcgr = 0x44010,
 	.hid_width = 5,
@@ -1077,6 +1234,7 @@ static struct clk_rcg2 pdm2_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(pdm2_genpdopp),
 	},
 };
 
@@ -1091,6 +1249,11 @@ static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table sdcc1_2_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 50000000},
+	{RPM_SMD_CORNER_NORMAL, 200000000},
+};
+
 static struct clk_rcg2 sdcc1_apps_clk_src = {
 	.cmd_rcgr = 0x42004,
 	.mnd_width = 8,
@@ -1102,6 +1265,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_floor_ops,
+		POWER_PDOPP(sdcc1_2_genpdopp),
 	},
 };
 
@@ -1127,6 +1291,7 @@ static struct clk_rcg2 sdcc2_apps_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_floor_ops,
+		POWER_PDOPP(sdcc1_2_genpdopp),
 	},
 };
 
@@ -1179,6 +1344,11 @@ static const struct freq_tbl ftbl_gcc_usb_hs_system_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table usb_hs_system_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 57140000},
+	{RPM_SMD_CORNER_NORMAL, 80000000},
+};
+
 static struct clk_rcg2 usb_hs_system_clk_src = {
 	.cmd_rcgr = 0x41010,
 	.hid_width = 5,
@@ -1189,6 +1359,7 @@ static struct clk_rcg2 usb_hs_system_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(usb_hs_system_genpdopp),
 	},
 };
 
@@ -1506,6 +1677,12 @@ static const struct freq_tbl ftbl_gcc_venus0_vcodec0_clk[] = {
 	{ }
 };
 
+static const struct genpdopp_table vcodec0_genpdopp[] = {
+	{RPM_SMD_CORNER_SVS_SOC, 100000000},
+	{RPM_SMD_CORNER_NORMAL, 160000000},
+	{RPM_SMD_CORNER_SUPER_TURBO, 228570000},
+};
+
 static struct clk_rcg2 vcodec0_clk_src = {
 	.cmd_rcgr = 0x4C000,
 	.mnd_width = 8,
@@ -1517,6 +1694,7 @@ static struct clk_rcg2 vcodec0_clk_src = {
 		.parent_names = gcc_xo_gpll0,
 		.num_parents = 2,
 		.ops = &clk_rcg2_ops,
+		POWER_PDOPP(vcodec0_genpdopp),
 	},
 };
 
@@ -3389,6 +3567,10 @@ static int gcc_msm8916_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	genpd_dev = dev;
+	mutex_init(&genpd_lock);
+	INIT_LIST_HEAD(&genpd_head);
+
 	return qcom_cc_probe(pdev, &gcc_msm8916_desc);
 }