From patchwork Tue Feb 9 12:54:48 2021
Subject: [PATCH v3 1/4] x86/time: change initiation of the calibration timer
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Claudemir Todo Bom, Ian Jackson
Message-ID: <9068daae-0f27-a724-5062-c02e5ebb4593@suse.com>
Date: Tue, 9 Feb 2021 13:54:48 +0100
Setting the timer a second (EPOCH) into the future at a random point during
boot (prior to bringing up APs and prior to launching Dom0) does not yield
predictable results: the timer may expire while we're still bringing up APs
(too early) or when Dom0 is already booting (too late).

Instead, invoke the timer handler function explicitly at a predictable point
in time, once we've established the rendezvous function to use (and hence
also once all APs are online). This will, through the raising and handling of
TIMER_SOFTIRQ, then also have the effect of arming the timer.

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -854,9 +854,7 @@ static void resume_platform_timer(void)

 static void __init reset_platform_timer(void)
 {
-    /* Deactivate any timers running */
     kill_timer(&plt_overflow_timer);
-    kill_timer(&calibration_timer);

     /* Reset counters and stamps */
     spin_lock_irq(&platform_timer_lock);
@@ -1956,19 +1954,13 @@ static void __init reset_percpu_time(voi
     t->stamp.master_stime = t->stamp.local_stime;
 }

-static void __init try_platform_timer_tail(bool late)
+static void __init try_platform_timer_tail(void)
 {
     init_timer(&plt_overflow_timer, plt_overflow, NULL, 0);
     plt_overflow(NULL);

     platform_timer_stamp = plt_stamp64;
     stime_platform_stamp = NOW();
-
-    if ( !late )
-        init_percpu_time();
-
-    init_timer(&calibration_timer, time_calibration, NULL, 0);
-    set_timer(&calibration_timer, NOW() + EPOCH);
 }

 /* Late init function, after all cpus have booted */
@@ -2009,10 +2001,13 @@ static int __init verify_tsc_reliability
             time_calibration_rendezvous_fn = time_calibration_nop_rendezvous;

             /* Finish platform timer switch. */
-            try_platform_timer_tail(true);
+            try_platform_timer_tail();

             printk("Switched to Platform timer %s TSC\n",
                    freq_string(plt_src.frequency));
+
+            time_calibration(NULL);
+
             return 0;
         }
     }
@@ -2033,6 +2028,8 @@ static int __init verify_tsc_reliability
          !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
         time_calibration_rendezvous_fn = time_calibration_tsc_rendezvous;

+    time_calibration(NULL);
+
     return 0;
 }
 __initcall(verify_tsc_reliability);
@@ -2048,7 +2045,11 @@ int __init init_xen_time(void)
     do_settime(get_wallclock_time(), 0, NOW());

     /* Finish platform timer initialization. */
-    try_platform_timer_tail(false);
+    try_platform_timer_tail();
+
+    init_percpu_time();
+
+    init_timer(&calibration_timer, time_calibration, NULL, 0);

     /*
      * Setup space to track per-socket TSC_ADJUST values. Don't fiddle with
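[Editor's note] The mechanism the commit message relies on is that the
calibration handler re-arms its own timer, so one explicit call both performs
the first calibration and starts the periodic cycle. Below is a minimal,
hypothetical plain-C model of that pattern (toy names and a toy timer, not
Xen's timer API or its softirq path):

    /*
     * Toy model only: calling the handler once by hand replaces the old
     * "arm the timer EPOCH into the future at some arbitrary boot point".
     */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EPOCH_MS 1000

    struct toy_timer {
        uint64_t expires_ms;
        void (*fn)(void *data);
        void *data;
    };

    static uint64_t now_ms;                 /* stand-in for NOW() */
    static struct toy_timer calibration_timer;

    static void toy_set_timer(struct toy_timer *t, uint64_t expires_ms)
    {
        t->expires_ms = expires_ms;         /* a real timer would be queued here */
    }

    static void toy_time_calibration(void *unused)
    {
        (void)unused;
        printf("calibration pass at t=%" PRIu64 " ms\n", now_ms);
        /* The handler re-arms itself for the next period. */
        toy_set_timer(&calibration_timer, now_ms + EPOCH_MS);
    }

    int main(void)
    {
        calibration_timer.fn = toy_time_calibration;

        /* The predictable "kick": invoke the handler directly once. */
        calibration_timer.fn(calibration_timer.data);

        /* Pretend time passes and the armed timer fires. */
        now_ms = calibration_timer.expires_ms;
        calibration_timer.fn(calibration_timer.data);
        return 0;
    }

In the sketch the re-arm happens directly in the handler; in the patch it
happens indirectly, via raising and then handling TIMER_SOFTIRQ.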
From patchwork Tue Feb 9 12:55:20 2021
Subject: [PATCH v3 2/4] x86/time: adjust time recording in time_calibration_tsc_rendezvous()
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Claudemir Todo Bom, Ian Jackson
Message-ID: <834b2d29-2589-f2b7-b496-7f1b35d35cff@suse.com>
Date: Tue, 9 Feb 2021 13:55:20 +0100
The (stime,tsc) tuple is the basis for extrapolation by get_s_time().
Therefore the two are best taken as close to one another as possible. This
means two things:

First, reading platform time is too early when done on the first iteration.
The closest we can get is on the last iteration, immediately before telling
other CPUs to write their TSCs (and then also writing CPU0's). While at first
glance it may seem not overly relevant when exactly platform time is read
(assuming that only stime is ever relevant anywhere, and hence the association
with the precise TSC values is of lower interest), both CPU frequency changes
and the effects of SMT make it unpredictable (between individual rendezvous
instances) how long the loop iterations will take. This will in turn lead to a
higher error than necessary in how close to linear stime movement we can get.

Second, re-reading the TSC for local recording increases the overall error as
well, when we already know a more precise value - the one just written.

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné
---
v2: New.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1662,11 +1662,12 @@ struct calibration_rendezvous {
 };

 static void
-time_calibration_rendezvous_tail(const struct calibration_rendezvous *r)
+time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
+                                 uint64_t tsc)
 {
     struct cpu_time_stamp *c = &this_cpu(cpu_calibration);

-    c->local_tsc = rdtsc_ordered();
+    c->local_tsc = tsc;
     c->local_stime = get_s_time_fixed(c->local_tsc);
     c->master_stime = r->master_stime;

@@ -1691,11 +1692,11 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
                 cpu_relax();

-            if ( r->master_stime == 0 )
-            {
-                r->master_stime = read_platform_stime(NULL);
+            if ( r->master_tsc_stamp == 0 )
                 r->master_tsc_stamp = rdtsc_ordered();
-            }
+            else if ( i == 0 )
+                r->master_stime = read_platform_stime(NULL);
+
             atomic_inc(&r->semaphore);

             if ( i == 0 )
@@ -1720,7 +1721,7 @@ static void time_calibration_tsc_rendezv
         }
     }

-    time_calibration_rendezvous_tail(r);
+    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
 }

 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1745,7 +1746,7 @@ static void time_calibration_std_rendezv
         smp_rmb(); /* receive signal /then/ read r->master_stime */
     }

-    time_calibration_rendezvous_tail(r);
+    time_calibration_rendezvous_tail(r, rdtsc_ordered());
 }

 /*
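[Editor's note] A standalone sketch (not Xen code) of why the (stime, tsc)
stamp pair must be captured as close together as possible: extrapolation in
the style of get_s_time_fixed() is stime = stamp.stime + scale * (tsc_now -
stamp.tsc), so if the stamp's two halves are taken some interval apart, every
later extrapolation inherits that interval as a constant offset. The 2 GHz
TSC frequency below is an arbitrary assumption for the example.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct stamp {
        uint64_t tsc;       /* TSC value at the stamp */
        uint64_t stime_ns;  /* system time at (ideally) the same instant */
    };

    static uint64_t extrapolate_ns(const struct stamp *s, uint64_t tsc_now)
    {
        return s->stime_ns + (tsc_now - s->tsc) / 2;    /* 2 TSC ticks per ns */
    }

    int main(void)
    {
        /* stime and tsc sampled back-to-back ... */
        struct stamp tight = { .tsc = 2000000, .stime_ns = 1000000 };
        /* ... vs. stime sampled 3000ns (6000 ticks) before the tsc. */
        struct stamp loose = { .tsc = 2000000 + 6000, .stime_ns = 1000000 };

        printf("tight: %" PRIu64 " ns\n", extrapolate_ns(&tight, 4000000));
        printf("loose: %" PRIu64 " ns\n", extrapolate_ns(&loose, 4000000));
        return 0;
    }

The second stamp yields a result 3000ns low for every subsequent call, which
is the kind of error the patch shrinks by taking the platform time reading on
the last iteration, next to the TSC value actually recorded.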
From patchwork Tue Feb 9 12:55:51 2021
Subject: [PATCH v3 3/4] x86/time: don't move TSC backwards in time_calibration_tsc_rendezvous()
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Claudemir Todo Bom, Ian Jackson
Date: Tue, 9 Feb 2021 13:55:51 +0100
While doing this for small amounts may be okay, the unconditional use of
CPU0's value here has been found to be a problem when the boot time TSC of
the BSP was behind that of all APs by more than a second. In particular,
because get_s_time_fixed() produces insane output when the calculated delta
is negative, we can't allow this to happen.

On the first iteration, have all other CPUs sort out the highest TSC value
any one of them has read. On the second iteration, if that maximum is higher
than CPU0's, update CPU0's recorded value from the one taken in the first
iteration. Use the resulting value on the last iteration to write everyone's
TSCs.

To account for the possible discontinuity, have
time_calibration_rendezvous_tail() record the newly written value, but
extrapolate local stime using the value read.

Reported-by: Claudemir Todo Bom
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v3: Simplify cmpxchg() loop.
v2: Don't update r->master_stime by calculation. Re-base over new earlier
    patch. Make time_calibration_rendezvous_tail() take two TSC values.
---
Since CPU0 reads its TSC last on the first iteration, if TSCs were perfectly
sync-ed there shouldn't ever be a need to update. However, even on the
TSC-reliable system I first tested this on (using "tsc=skewed" to get this
rendezvous function into use in the first place), updates by up to several
thousand clocks did happen. I wonder whether this points at some problem with
the approach that I'm not (yet) seeing.

Considering the sufficiently modern CPU it's using, I suspect the reporter's
system wouldn't even need to turn off TSC_RELIABLE, if only there wasn't the
boot time skew. Hence another approach might be to fix this boot time skew.
Of course, to recognize whether the TSCs then still aren't in sync we'd need
to run tsc_check_reliability() sufficiently long after that adjustment. That
is besides the need to have this "fixing" be precise enough for the TSCs to
not look skewed anymore afterwards.

As per the comment ahead of it, the original purpose of the function was to
deal with TSCs halted in deep C states. While this probably explains why only
forward moves were ever expected, I don't see how this could have been
reliable in case CPU0 was deep-sleeping for a sufficiently long time. My only
guess here is a hidden assumption of CPU0 never being idle for long enough.

Furthermore, that comment looks to be contradicting the actual use of the
function: it gets installed when !RELIABLE_TSC, while the comment would
suggest !NONSTOP_TSC. I suppose the comment is simply misleading, because
RELIABLE_TSC implies NONSTOP_TSC according to all the places where either of
the two feature bits gets played with. Plus in the !NONSTOP_TSC case we write
the TSC explicitly anyway when coming back out of a (deep; see below)
C-state. As an implication from the above, mwait_idle_cpu_init() then looks
to pointlessly clear "reliable" when "nonstop" is clear.
It further looks odd that mwait_idle() (unlike acpi_processor_idle()) calls
cstate_restore_tsc() independent of what C-state was active.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1658,17 +1658,17 @@ struct calibration_rendezvous {
     cpumask_t cpu_calibration_map;
     atomic_t semaphore;
     s_time_t master_stime;
-    u64 master_tsc_stamp;
+    uint64_t master_tsc_stamp, max_tsc_stamp;
 };

 static void
 time_calibration_rendezvous_tail(const struct calibration_rendezvous *r,
-                                 uint64_t tsc)
+                                 uint64_t old_tsc, uint64_t new_tsc)
 {
     struct cpu_time_stamp *c = &this_cpu(cpu_calibration);

-    c->local_tsc = tsc;
-    c->local_stime = get_s_time_fixed(c->local_tsc);
+    c->local_tsc = new_tsc;
+    c->local_stime = get_s_time_fixed(old_tsc ?: new_tsc);
     c->master_stime = r->master_stime;

     raise_softirq(TIME_CALIBRATE_SOFTIRQ);
@@ -1683,6 +1683,7 @@ static void time_calibration_tsc_rendezv
     int i;
     struct calibration_rendezvous *r = _r;
     unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
+    uint64_t tsc = 0;

     /* Loop to get rid of cache effects on TSC skew. */
     for ( i = 4; i >= 0; i-- )
@@ -1692,8 +1693,15 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) != (total_cpus - 1) )
                 cpu_relax();

-            if ( r->master_tsc_stamp == 0 )
-                r->master_tsc_stamp = rdtsc_ordered();
+            if ( tsc == 0 )
+                r->master_tsc_stamp = tsc = rdtsc_ordered();
+            else if ( r->master_tsc_stamp < r->max_tsc_stamp )
+                /*
+                 * We want to avoid moving the TSC backwards for any CPU.
+                 * Use the largest value observed anywhere on the first
+                 * iteration.
+                 */
+                r->master_tsc_stamp = r->max_tsc_stamp;
             else if ( i == 0 )
                 r->master_stime = read_platform_stime(NULL);

@@ -1712,6 +1720,15 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) < total_cpus )
                 cpu_relax();

+            if ( tsc == 0 )
+            {
+                uint64_t cur = ACCESS_ONCE(r->max_tsc_stamp);
+
+                tsc = rdtsc_ordered();
+                while ( tsc > cur )
+                    cur = cmpxchg(&r->max_tsc_stamp, cur, tsc);
+            }
+
             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);

@@ -1719,9 +1736,12 @@ static void time_calibration_tsc_rendezv
             while ( atomic_read(&r->semaphore) > total_cpus )
                 cpu_relax();
         }
+
+        /* Just in case a read above ended up reading zero. */
+        tsc += !tsc;
     }

-    time_calibration_rendezvous_tail(r, r->master_tsc_stamp);
+    time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);
 }

 /* Ordinary rendezvous function which does not modify TSC values. */
@@ -1746,7 +1766,7 @@ static void time_calibration_std_rendezv
         smp_rmb(); /* receive signal /then/ read r->master_stime */
     }

-    time_calibration_rendezvous_tail(r, rdtsc_ordered());
+    time_calibration_rendezvous_tail(r, 0, rdtsc_ordered());
 }

 /*
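[Editor's note] The hunk adding the loop over r->max_tsc_stamp uses a classic
lock-free "record the global maximum" pattern. Here is a standalone sketch of
that pattern written with C11 atomics rather than Xen's ACCESS_ONCE() and
cmpxchg(); it is an illustration only, not the hypervisor's code:

    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t max_tsc_stamp;

    static void record_max(uint64_t tsc)
    {
        uint64_t cur = atomic_load(&max_tsc_stamp);

        /*
         * Keep trying to publish our value while it is still larger than the
         * recorded maximum.  On failure, compare_exchange writes the winning
         * value back into 'cur', so the loop exits as soon as another CPU
         * has already published something >= tsc.
         */
        while ( tsc > cur &&
                !atomic_compare_exchange_weak(&max_tsc_stamp, &cur, tsc) )
            ;
    }

    int main(void)
    {
        record_max(100);
        record_max(42);     /* no effect: 42 < 100 */
        return atomic_load(&max_tsc_stamp) == 100 ? 0 : 1;
    }

Every AP runs this concurrently on the first iteration, and the result is the
highest TSC any of them observed, which CPU0 then adopts if it is ahead of
its own reading.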
From patchwork Tue Feb 9 12:57:05 2021
Subject: [PATCH RFC v3 4/4] x86/time: re-arrange struct calibration_rendezvous
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné
Message-ID: <56d70757-a887-6824-18f4-93b1f244e44b@suse.com>
Date: Tue, 9 Feb 2021 13:57:05 +0100
To reduce latency on time_calibration_tsc_rendezvous()'s last loop iteration,
separate the fields written on the last iteration far enough from the crucial
field read by all CPUs on that iteration that they end up in distinct cache
lines. Prefetch this field on an earlier iteration.

Signed-off-by: Jan Beulich
---
v3: New.
---
While readability would likely suffer, we may want to consider avoiding the
prefetch on at least the first two iterations (where the field still gets /
may get written to by CPU0). Could e.g. be

    switch ( i )
    {
    case 0:
        write_tsc(r->master_tsc_stamp);
        break;

    case 1:
        prefetch(&r->master_tsc_stamp);
        break;
    }

Of course it would also be nice to avoid the pretty likely branch
misprediction on the last iteration. But with the static prediction hints
having been rather short-lived in the architecture, I don't see any good
means to do so.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1655,10 +1655,20 @@ static void tsc_check_reliability(void)
  * All CPUS snapshot their local TSC and extrapolation of system time.
  */
 struct calibration_rendezvous {
-    cpumask_t cpu_calibration_map;
     atomic_t semaphore;
     s_time_t master_stime;
-    uint64_t master_tsc_stamp, max_tsc_stamp;
+    cpumask_t cpu_calibration_map;
+    /*
+     * All CPUs want to read master_tsc_stamp on the last iteration. If
+     * cpu_calibration_map isn't large enough to push the field into a cache
+     * line different from the one used by semaphore (written by all CPUs on
+     * every iteration) and master_stime (written by CPU0 on the last
+     * iteration), align the field suitably.
+     */
+    uint64_t __aligned(BITS_TO_LONGS(NR_CPUS) * sizeof(long) +
+                       sizeof(atomic_t) + sizeof(s_time_t) < SMP_CACHE_BYTES
+                       ? SMP_CACHE_BYTES : 1) master_tsc_stamp;
+    uint64_t max_tsc_stamp;
 };

 static void
@@ -1709,6 +1719,8 @@ static void time_calibration_tsc_rendezv

             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
+            else
+                prefetch(&r->master_tsc_stamp);

             while ( atomic_read(&r->semaphore) != (2*total_cpus - 1) )
                 cpu_relax();
@@ -1731,6 +1743,8 @@ static void time_calibration_tsc_rendezv

             if ( i == 0 )
                 write_tsc(r->master_tsc_stamp);
+            else
+                prefetch(&r->master_tsc_stamp);

             atomic_inc(&r->semaphore);
             while ( atomic_read(&r->semaphore) > total_cpus )
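[Editor's note] The layout change above boils down to giving the hot,
read-mostly field its own cache line. Below is a standalone illustration of
that technique - not the Xen struct, and unconditionally aligned rather than
using the patch's compile-time size check; CACHE_LINE is an assumption
standing in for SMP_CACHE_BYTES:

    #include <assert.h>
    #include <stdalign.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE 64

    struct rendezvous_like {
        uint64_t semaphore;      /* written by every CPU on every iteration */
        int64_t  master_stime;   /* written by CPU0 on the last iteration */
        /* Read by all CPUs on the last iteration: give it a line of its own. */
        alignas(CACHE_LINE) uint64_t master_tsc_stamp;
        uint64_t max_tsc_stamp;
    };

    /* The property the alignment is meant to guarantee. */
    static_assert(offsetof(struct rendezvous_like, master_tsc_stamp) / CACHE_LINE !=
                  offsetof(struct rendezvous_like, semaphore) / CACHE_LINE,
                  "hot field shares a cache line with the semaphore");

    int main(void) { return 0; }

The patch's conditional __aligned() expression applies the padding only when
the preceding fields would otherwise leave master_tsc_stamp on the same line,
avoiding wasted space on configurations with a large NR_CPUS cpumask.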