From patchwork Thu Dec 21 09:32:45 2023
X-Patchwork-Submitter: "D, Lakshmi Sowjanya"
X-Patchwork-Id: 13501507
From: lakshmi.sowjanya.d@intel.com
To: tglx@linutronix.de, jstultz@google.com, giometti@enneenne.com,
    corbet@lwn.net, linux-kernel@vger.kernel.org
Cc: x86@kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org,
    intel-wired-lan@lists.osuosl.org, andriy.shevchenko@linux.intel.com,
    eddie.dong@intel.com, christopher.s.hall@intel.com,
    jesse.brandeburg@intel.com, davem@davemloft.net,
    alexandre.torgue@foss.st.com, joabreu@synopsys.com,
    mcoquelin.stm32@gmail.com, perex@perex.cz, linux-sound@vger.kernel.org,
    anthony.l.nguyen@intel.com, pandith.n@intel.com,
    mallikarjunappa.sangannavar@intel.com, thejesh.reddy.t.r@intel.com,
    lakshmi.sowjanya.d@intel.com
Subject: [RFC PATCH v2 01/10] x86/tsc: Add base clock properties in clocksource structure
Date: Thu, 21 Dec 2023 15:02:45 +0530
Message-Id: <20231221093254.9599-2-lakshmi.sowjanya.d@intel.com>
In-Reply-To: <20231221093254.9599-1-lakshmi.sowjanya.d@intel.com>
References: <20231221093254.9599-1-lakshmi.sowjanya.d@intel.com>
X-Patchwork-State: RFC

From: Lakshmi Sowjanya D

Remove convert_art_to_tsc() and convert_art_ns_to_tsc(), as this patch
series introduces a generic function, ktime_real_to_base_clock(), to
convert realtime to the base clock domain.

Add a hardware abstraction, struct clocksource_base, to struct
clocksource.

Add a clocksource ID for the x86 ART (Always Running Timer).

Co-developed-by: Thomas Gleixner
Signed-off-by: Thomas Gleixner
Co-developed-by: Christopher S. Hall
Signed-off-by: Christopher S. Hall
Signed-off-by: Lakshmi Sowjanya D
---
 arch/x86/include/asm/tsc.h      |  3 --
 arch/x86/kernel/tsc.c           | 94 +++++++--------------------------
 include/linux/clocksource.h     | 27 ++++++++++
 include/linux/clocksource_ids.h |  1 +
 4 files changed, 47 insertions(+), 78 deletions(-)

diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 594fce0ca744..5e36495cc821 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -27,9 +27,6 @@ static inline cycles_t get_cycles(void)
 }
 #define get_cycles get_cycles
 
-extern struct system_counterval_t convert_art_to_tsc(u64 art);
-extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns);
-
 extern void tsc_early_init(void);
 extern void tsc_init(void);
 extern void mark_tsc_unstable(char *reason);
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 868f09966b0f..b45ce594cfef 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -51,9 +51,9 @@ int tsc_clocksource_reliable;
 
 static int __read_mostly tsc_force_recalibrate;
 
-static u32 art_to_tsc_numerator;
-static u32 art_to_tsc_denominator;
-static u64 art_to_tsc_offset;
+static struct clocksource_base art_base_clk = {
+	.id = CSID_X86_ART,
+};
 static bool have_art;
 
 struct cyc2ns {
@@ -1075,7 +1075,7 @@ core_initcall(cpufreq_register_tsc_scaling);
  */
 static void __init detect_art(void)
 {
-	unsigned int unused[2];
+	unsigned int unused;
 
 	if (boot_cpu_data.cpuid_level < ART_CPUID_LEAF)
 		return;
@@ -1090,13 +1090,14 @@ static void __init detect_art(void)
 	    tsc_async_resets)
 		return;
 
-	cpuid(ART_CPUID_LEAF, &art_to_tsc_denominator,
-	      &art_to_tsc_numerator, unused, unused+1);
+	cpuid(ART_CPUID_LEAF, &art_base_clk.denominator,
+	      &art_base_clk.numerator, &art_base_clk.freq_khz, &unused);
 
-	if (art_to_tsc_denominator < ART_MIN_DENOMINATOR)
+	art_base_clk.freq_khz /= KHZ;
+	if (art_base_clk.denominator < ART_MIN_DENOMINATOR)
 		return;
 
-	rdmsrl(MSR_IA32_TSC_ADJUST, art_to_tsc_offset);
+	rdmsrl(MSR_IA32_TSC_ADJUST, art_base_clk.offset);
 
 	/* Make this sticky over multiple CPU init calls */
 	setup_force_cpu_cap(X86_FEATURE_ART);
@@ -1297,69 +1298,6 @@ int unsynchronized_tsc(void)
 	return 0;
 }
 
-/*
- * Convert ART to TSC given numerator/denominator found in detect_art()
- */
-struct system_counterval_t convert_art_to_tsc(u64 art)
-{
-	u64 tmp, res, rem;
-
-	rem = do_div(art, art_to_tsc_denominator);
-
-	res = art * art_to_tsc_numerator;
-	tmp = rem * art_to_tsc_numerator;
-
-	do_div(tmp, art_to_tsc_denominator);
-	res += tmp + art_to_tsc_offset;
-
-	return (struct system_counterval_t) {
-		.cs_id	= have_art ? CSID_X86_TSC : CSID_GENERIC,
-		.cycles	= res
-	};
-}
-EXPORT_SYMBOL(convert_art_to_tsc);
-
-/**
- * convert_art_ns_to_tsc() - Convert ART in nanoseconds to TSC.
- * @art_ns: ART (Always Running Timer) in unit of nanoseconds
- *
- * PTM requires all timestamps to be in units of nanoseconds. When user
- * software requests a cross-timestamp, this function converts system timestamp
- * to TSC.
- *
- * This is valid when CPU feature flag X86_FEATURE_TSC_KNOWN_FREQ is set
- * indicating the tsc_khz is derived from CPUID[15H]. Drivers should check
- * that this flag is set before conversion to TSC is attempted.
- *
- * Return:
- * struct system_counterval_t - system counter value with the ID of the
- *	corresponding clocksource
- *	@cycles:	System counter value
- *	@cs_id:		Clocksource ID corresponding to system counter value.
- *			Used by timekeeping code to verify comparability of two
- *			cycle values.
- */
-
-struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns)
-{
-	u64 tmp, res, rem;
-
-	rem = do_div(art_ns, USEC_PER_SEC);
-
-	res = art_ns * tsc_khz;
-	tmp = rem * tsc_khz;
-
-	do_div(tmp, USEC_PER_SEC);
-	res += tmp;
-
-	return (struct system_counterval_t) {
-		.cs_id	= have_art ? CSID_X86_TSC : CSID_GENERIC,
-		.cycles	= res
-	};
-}
-EXPORT_SYMBOL(convert_art_ns_to_tsc);
-
-
 static void tsc_refine_calibration_work(struct work_struct *work);
 static DECLARE_DELAYED_WORK(tsc_irqwork, tsc_refine_calibration_work);
 /**
@@ -1461,8 +1399,10 @@ static void tsc_refine_calibration_work(struct work_struct *work)
 	if (tsc_unstable)
 		goto unreg;
 
-	if (boot_cpu_has(X86_FEATURE_ART))
+	if (boot_cpu_has(X86_FEATURE_ART)) {
 		have_art = true;
+		clocksource_tsc.base = &art_base_clk;
+	}
 	clocksource_register_khz(&clocksource_tsc, tsc_khz);
 unreg:
 	clocksource_unregister(&clocksource_tsc_early);
@@ -1487,8 +1427,10 @@ static int __init init_tsc_clocksource(void)
 	 * the refined calibration and directly register it as a clocksource.
 	 */
 	if (boot_cpu_has(X86_FEATURE_TSC_KNOWN_FREQ)) {
-		if (boot_cpu_has(X86_FEATURE_ART))
+		if (boot_cpu_has(X86_FEATURE_ART)) {
 			have_art = true;
+			clocksource_tsc.base = &art_base_clk;
+		}
 		clocksource_register_khz(&clocksource_tsc, tsc_khz);
 		clocksource_unregister(&clocksource_tsc_early);
 
@@ -1512,10 +1454,12 @@ static bool __init determine_cpu_tsc_frequencies(bool early)
 
 	if (early) {
 		cpu_khz = x86_platform.calibrate_cpu();
-		if (tsc_early_khz)
+		if (tsc_early_khz) {
 			tsc_khz = tsc_early_khz;
-		else
+		} else {
 			tsc_khz = x86_platform.calibrate_tsc();
+			clocksource_tsc.freq_khz = tsc_khz;
+		}
 	} else {
 		/* We should not be here with non-native cpu calibration */
 		WARN_ON(x86_platform.calibrate_cpu != native_calibrate_cpu);
diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
index 1d42d4b17327..0a1110a0e660 100644
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -21,6 +21,7 @@
 #include <asm/div64.h>
 #include <asm/io.h>
 
+struct clocksource_base;
 struct clocksource;
 struct module;
 
@@ -48,6 +49,7 @@ struct module;
  * @archdata:		Optional arch-specific data
  * @max_cycles:		Maximum safe cycle value which won't overflow on
  *			multiplication
+ * @freq_khz:		Clocksource frequency in khz.
  * @name:		Pointer to clocksource name
  * @list:		List head for registration (internal)
  * @rating:		Rating value for selection (higher is better)
@@ -70,6 +72,8 @@ struct module;
  *			validate the clocksource from which the snapshot was
  *			taken.
  * @flags:		Flags describing special properties
+ * @base:		Hardware abstraction for clock on which a clocksource
+ *			is based
  * @enable:		Optional function to enable the clocksource
  * @disable:		Optional function to disable the clocksource
  * @suspend:		Optional suspend function for the clocksource
@@ -105,12 +109,14 @@ struct clocksource {
 	struct arch_clocksource_data archdata;
 #endif
 	u64			max_cycles;
+	u32			freq_khz;
 	const char		*name;
 	struct list_head	list;
 	int			rating;
 	enum clocksource_ids	id;
 	enum vdso_clock_mode	vdso_clock_mode;
 	unsigned long		flags;
+	struct clocksource_base *base;
 
 	int			(*enable)(struct clocksource *cs);
 	void			(*disable)(struct clocksource *cs);
@@ -294,4 +300,25 @@ static inline void timer_probe(void) {}
 extern ulong max_cswd_read_retries;
 void clocksource_verify_percpu(struct clocksource *cs);
 
+/**
+ * struct clocksource_base - hardware abstraction for clock on which a clocksource
+ *			     is based
+ * @id:		Defaults to CSID_GENERIC. The id value is used for conversion
+ *		functions which require that the current clocksource is based
+ *		on a clocksource_base with a particular ID in certain snapshot
+ *		functions to allow callers to validate the clocksource from
+ *		which the snapshot was taken.
+ * @freq_khz:	Nominal frequency of the base clock in kHz
+ * @offset:	Offset between the base clock and the clocksource
+ * @numerator:	Numerator of the clock ratio between base clock and the clocksource
+ * @denominator: Denominator of the clock ratio between base clock and the clocksource
+ */
+struct clocksource_base {
+	enum clocksource_ids	id;
+	u32			freq_khz;
+	u64			offset;
+	u32			numerator;
+	u32			denominator;
+};
+
 #endif /* _LINUX_CLOCKSOURCE_H */
diff --git a/include/linux/clocksource_ids.h b/include/linux/clocksource_ids.h
index a4fa3436940c..2bb4d8c2f1b0 100644
--- a/include/linux/clocksource_ids.h
+++ b/include/linux/clocksource_ids.h
@@ -9,6 +9,7 @@ enum clocksource_ids {
 	CSID_X86_TSC_EARLY,
 	CSID_X86_TSC,
 	CSID_X86_KVM_CLK,
+	CSID_X86_ART,
 	CSID_MAX,
 };
 
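
For reference, the numerator, denominator and offset carried in struct
clocksource_base describe the same relationship that the removed
convert_art_to_tsc() computed directly from the CPUID leaf 15H values. A
minimal sketch of that conversion, written against the new structure, is
shown below. This is not part of the patch: the helper name is hypothetical,
it assumes kernel context (do_div() from asm/div64.h), and the generic
conversion functions mentioned in the changelog are added by later patches
in this series, whose exact API may differ.

/*
 * Illustrative sketch only: convert base clock (e.g. ART) cycles to
 * clocksource (TSC) cycles using the ratio and offset carried in
 * struct clocksource_base. Mirrors the math of the removed
 * convert_art_to_tsc(); the helper name is hypothetical.
 */
static u64 base_clk_to_clocksource_cycles(const struct clocksource_base *base,
					  u64 cycles)
{
	u64 tmp, res, rem;

	/* Split the multiply to avoid 64-bit overflow: cycles becomes the quotient */
	rem = do_div(cycles, base->denominator);

	res = cycles * base->numerator;
	tmp = rem * base->numerator;
	do_div(tmp, base->denominator);

	/* Apply the constant offset between the base clock and the clocksource */
	return res + tmp + base->offset;
}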