From patchwork Thu Nov 16 15:33:02 2017
From: Peter De Schrijver <pdeschrijver@nvidia.com>
Subject: [PATCH 1/8] clk: tegra: dfll registration for multiple SoCs
Date: Thu, 16 Nov 2017 17:33:02 +0200
Message-ID: <1510846389-28712-2-git-send-email-pdeschrijver@nvidia.com>
In-Reply-To: <1510846389-28712-1-git-send-email-pdeschrijver@nvidia.com>
References: <1510846389-28712-1-git-send-email-pdeschrijver@nvidia.com>
List-ID: linux-clk@vger.kernel.org

A future patch will introduce support for the DFLL on Tegra210. This
requires supporting more than one set of CVB and CPU maximum-frequency
tables, so move the per-SoC tables into a struct dfll_fcpu_data that is
looked up through the OF match data.
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
---
 drivers/clk/tegra/clk-tegra124-dfll-fcpu.c | 43 +++++++++++++++++++++++-------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
index ad1c1cc..1976e96 100644
--- a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
+++ b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
@@ -21,6 +21,7 @@
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/module.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <soc/tegra/fuse.h>
 
@@ -28,8 +29,15 @@
 #include "clk-dfll.h"
 #include "cvb.h"
 
+struct dfll_fcpu_data {
+	const unsigned long *cpu_max_freq_table;
+	unsigned int cpu_max_freq_table_size;
+	const struct cvb_table *cpu_cvb_tables;
+	unsigned int cpu_cvb_tables_size;
+};
+
 /* Maximum CPU frequency, indexed by CPU speedo id */
-static const unsigned long cpu_max_freq_table[] = {
+static const unsigned long tegra124_cpu_max_freq_table[] = {
 	[0] = 2014500000UL,
 	[1] = 2320500000UL,
 	[2] = 2116500000UL,
@@ -82,16 +90,36 @@
 	},
 };
 
+static const struct dfll_fcpu_data tegra124_dfll_fcpu_data = {
+	.cpu_max_freq_table = tegra124_cpu_max_freq_table,
+	.cpu_max_freq_table_size = ARRAY_SIZE(tegra124_cpu_max_freq_table),
+	.cpu_cvb_tables = tegra124_cpu_cvb_tables,
+	.cpu_cvb_tables_size = ARRAY_SIZE(tegra124_cpu_cvb_tables)
+};
+
+static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
+	{
+		.compatible = "nvidia,tegra124-dfll",
+		.data = &tegra124_dfll_fcpu_data,
+	},
+	{ },
+};
+
 static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
 {
 	int process_id, speedo_id, speedo_value, err;
 	struct tegra_dfll_soc_data *soc;
+	const struct of_device_id *of_id;
+	const struct dfll_fcpu_data *fcpu_data;
+
+	of_id = of_match_device(tegra124_dfll_fcpu_of_match, &pdev->dev);
+	fcpu_data = of_id->data;
 
 	process_id = tegra_sku_info.cpu_process_id;
 	speedo_id = tegra_sku_info.cpu_speedo_id;
 	speedo_value = tegra_sku_info.cpu_speedo_value;
 
-	if (speedo_id >= ARRAY_SIZE(cpu_max_freq_table)) {
+	if (speedo_id >= fcpu_data->cpu_max_freq_table_size) {
 		dev_err(&pdev->dev, "unknown max CPU freq for speedo_id=%d\n",
 			speedo_id);
 		return -ENODEV;
@@ -107,10 +135,10 @@ static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
 		return -ENODEV;
 	}
 
-	soc->max_freq = cpu_max_freq_table[speedo_id];
+	soc->max_freq = fcpu_data->cpu_max_freq_table[speedo_id];
 
-	soc->cvb = tegra_cvb_add_opp_table(soc->dev, tegra124_cpu_cvb_tables,
-					   ARRAY_SIZE(tegra124_cpu_cvb_tables),
+	soc->cvb = tegra_cvb_add_opp_table(soc->dev, fcpu_data->cpu_cvb_tables,
+					   fcpu_data->cpu_cvb_tables_size,
 					   process_id, speedo_id, speedo_value,
 					   soc->max_freq);
 	if (IS_ERR(soc->cvb)) {
@@ -144,11 +172,6 @@ static int tegra124_dfll_fcpu_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
-	{ .compatible = "nvidia,tegra124-dfll", },
-	{ },
-};
-
 static const struct dev_pm_ops tegra124_dfll_pm_ops = {
 	SET_RUNTIME_PM_OPS(tegra_dfll_runtime_suspend,
 			   tegra_dfll_runtime_resume, NULL)
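
A note on the shape this gives the driver (not part of the patch itself):
the probe path now reads all per-SoC data through the of_device_id .data
pointer, so a later patch in this series can register another SoC just by
adding its tables and one more match entry. A minimal sketch of what that
would look like, using placeholder table values; the real Tegra210 tables
and the "nvidia,tegra210-dfll" compatible string are only introduced later
in the series:

/* Sketch only: placeholder values, not real Tegra210 characterization data */
static const unsigned long tegra210_cpu_max_freq_table[] = {
	[0] = 1912500000UL,		/* placeholder, indexed by CPU speedo id */
};

static const struct cvb_table tegra210_cpu_cvb_tables[] = {
	/* Tegra210 CVB characterization data would go here */
};

static const struct dfll_fcpu_data tegra210_dfll_fcpu_data = {
	.cpu_max_freq_table = tegra210_cpu_max_freq_table,
	.cpu_max_freq_table_size = ARRAY_SIZE(tegra210_cpu_max_freq_table),
	.cpu_cvb_tables = tegra210_cpu_cvb_tables,
	.cpu_cvb_tables_size = ARRAY_SIZE(tegra210_cpu_cvb_tables),
};

/* probe() picks up the dfll_fcpu_data of whichever entry matched the DT */
static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
	{
		.compatible = "nvidia,tegra124-dfll",
		.data = &tegra124_dfll_fcpu_data,
	},
	{
		.compatible = "nvidia,tegra210-dfll",
		.data = &tegra210_dfll_fcpu_data,
	},
	{ },	/* sentinel */
};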