[v7] KVM: x86/tsc: Don't sync user-written TSC against startup values

Message ID 20231008025335.7419-1-likexu@tencent.com (mailing list archive)
State New, archived

Commit Message

Like Xu Oct. 8, 2023, 2:53 a.m. UTC
From: Like Xu <likexu@tencent.com>

The legacy API for setting the TSC is fundamentally broken, and only
allows userspace to set a TSC "now", without any way to account for
time lost to preemption between the calculation of the value, and the
kernel eventually handling the ioctl.
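
To put rough, purely hypothetical numbers on that race: at a 3 GHz
guest TSC, 50 ms lost to preemption between computing the value and
the ioctl landing leaves the guest TSC 150 million cycles behind
where userspace intended it. As a trivial standalone sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t tsc_khz = 3000000;		/* hypothetical 3 GHz guest TSC */
	uint64_t delay_ns = 50 * 1000000ULL;	/* 50 ms lost to preemption */

	/* cycles = kHz * 1000 * seconds = kHz * ns / 1e6 */
	uint64_t skew = tsc_khz * delay_ns / 1000000;

	printf("TSC skew: %llu cycles\n", (unsigned long long)skew);
	return 0;
}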

To work around this we have had a hack which, if a TSC is set with a
value which is within a second's worth of a previous vCPU, assumes that
userspace actually intended them to be in sync and adjusts the newly-
written TSC value accordingly.

Thus, when a VMM restores a guest after suspend or migration using the
legacy API, the TSCs aren't necessarily *right*, but at least they're
in sync.

This trick falls down when restoring a guest which genuinely has been
running for less time than the 1 second of imprecision which we allow
for in the legacy API. On *creation* the first vCPU starts its TSC
counting from zero, and the subsequent vCPUs synchronize to that. But
then when the VMM tries to set the intended TSC value, because that's
within a second of what the last TSC synced to, KVM just adjusts it
to match that.
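
For concreteness, here is the 1-second slop check from
kvm_synchronize_tsc() (see the diff below) applied to that failure
case, as a standalone sketch with hypothetical numbers -- a 3 GHz
guest TSC, and a guest which had genuinely run for only 0.3 seconds
before the VMM wrote the intended value:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t tsc_hz  = 3000000000ULL; /* virtual_tsc_khz * 1000: 3 GHz */
	uint64_t tsc_exp = 0;	/* last sync was vCPU creation at TSC = 0,
				 * ignoring the few cycles of elapsed time */
	uint64_t data    = 900000000ULL;  /* intended TSC: 0.3 s of runtime */

	/* The window check as written in kvm_synchronize_tsc() */
	bool synchronizing = data < tsc_exp + tsc_hz &&
			     data + tsc_hz > tsc_exp;

	/* Prints 1: the user-written value falls inside the 1-second
	 * window and is "corrected" back to the creation-time clock. */
	printf("synchronizing = %d\n", synchronizing);
	return 0;
}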

But we can pile further hacks onto our existing hackish ABI, and
declare that the *first* value written by userspace (on any vCPU)
should not be subject to this 'correction' to make it sync up with
values that only come from the kernel's default vCPU creation.

To that end: add a flag, kvm->arch.user_set_tsc, protected by
kvm->arch.tsc_write_lock, to record that the TSC for at least one vCPU
in this VM *has* been set by userspace. Make the 1-second slop hack
only trigger if that flag is already set.

Note that userspace can explicitly request a *synchronization* of the
TSC by writing zero. For the purpose of this patch, this counts as
"setting" the TSC. If userspace then subsequently writes an explicit
non-zero value which happens to be within 1 second of the previous
value, it will be 'corrected'. For that case, this preserves the prior
behaviour of KVM (which always applied the 1-second 'correction'
regardless of user vs. kernel).

Reported-by: Yong He <alexyonghe@tencent.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217423
Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Original-by: Oliver Upton <oliver.upton@linux.dev>
Original-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Tested-by: Yong He <alexyonghe@tencent.com>
---
V6 -> V7 Changelog:
- Refine commit message and comments to make more sense; (David & Sean)
- A @user_value of '0' would still force synchronization; (Sean)
V6: https://lore.kernel.org/kvm/20230913103729.51194-1-likexu@tencent.com/
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/x86.c              | 34 +++++++++++++++++++++++----------
 2 files changed, 25 insertions(+), 10 deletions(-)


base-commit: 86701e115030e020a052216baa942e8547e0b487

Comments

Maxim Levitsky Oct. 9, 2023, 3:50 p.m. UTC | #1
On Sun, 2023-10-08 at 10:53 +0800, Like Xu wrote:
> From: Like Xu <likexu@tencent.com>
> 
> The legacy API for setting the TSC is fundamentally broken, and only
> allows userspace to set a TSC "now", without any way to account for
> time lost to preemption between the calculation of the value, and the
> kernel eventually handling the ioctl.
> 
> [...]


Just a small note: QEMU resets the TSC on CPU reset, but recently it
started resetting it to '1' instead of '0' to avoid triggering this
synchronization.

As far as I can see, this patch should still work in that case,
because vCPU reset usually happens long after the vCPUs are all
created and running, but it is still something to keep in mind.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>


Best regards,
	Maxim Levitsky
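
To illustrate the reset path Maxim describes, here is a hypothetical
userspace sketch -- not taken from QEMU, with fd setup and error
handling omitted -- of a VMM writing the TSC through the legacy
KVM_SET_MSRS interface, using '1' rather than '0' so as not to hit
the forced-synchronization path:

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* MSR_IA32_TSC is not in the uapi headers; its architectural index. */
#define MSR_IA32_TSC 0x00000010

/* Writing 0 hits KVM's forced-synchronization path; writing 1 goes
 * through the 1-second slop check instead. KVM_SET_MSRS returns the
 * number of MSRs successfully written, so 1 on success here. */
static int reset_guest_tsc(int vcpu_fd)
{
	struct {
		struct kvm_msrs info;
		struct kvm_msr_entry entries[1];
	} msr_data;

	memset(&msr_data, 0, sizeof(msr_data));
	msr_data.info.nmsrs = 1;
	msr_data.entries[0].index = MSR_IA32_TSC;
	msr_data.entries[0].data = 1;	/* '1', not '0' */

	return ioctl(vcpu_fd, KVM_SET_MSRS, &msr_data);
}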
Sean Christopherson Oct. 10, 2023, 12:44 a.m. UTC | #2
On Sun, 08 Oct 2023 10:53:35 +0800, Like Xu wrote:
> The legacy API for setting the TSC is fundamentally broken, and only
> allows userspace to set a TSC "now", without any way to account for
> time lost to preemption between the calculation of the value, and the
> kernel eventually handling the ioctl.
> 
> To work around this we have had a hack which, if a TSC is set with a
> value which is within a second's worth of a previous vCPU, assumes that
> userspace actually intended them to be in sync and adjusts the newly-
> written TSC value accordingly.
> 
> [...]

Applied to kvm-x86 misc, thanks!  I massaged away most of the pronouns in the
changelog.  Yes, they bug me that much, and I genuinely had a hard time following
some of the paragraphs even though I already knew what the patch is doing.

Everyone, please take a look and make sure I didn't botch anything.  I tried my
best to keep the existing "voice" and tone of the changelog (sans pronouns
obviously).  I definitely don't want to bikeshed this thing any further.  If
I've learned anything from this patch, it's that the only guaranteed outcome of
changelog-by-committee is that no one will walk away 100% happy :-)

[1/1] KVM: x86/tsc: Don't sync user-written TSC against startup values
      https://github.com/kvm-x86/linux/commit/bf328e22e472

--
https://github.com/kvm-x86/linux/tree/next
David Woodhouse Oct. 10, 2023, 10:03 a.m. UTC | #3
On 10 October 2023 01:44:32 BST, Sean Christopherson <seanjc@google.com> wrote:
>On Sun, 08 Oct 2023 10:53:35 +0800, Like Xu wrote:
>> [...]
>
>Applied to kvm-x86 misc, thanks!  I massaged away most of the pronouns in the
>changelog.  Yes, they bug me that much, and I genuinely had a hard time following
>some of the paragraphs even though I already knew what the patch is doing.
>
>Everyone, please take a look and make sure I didn't botch anything.  I tried my
>best to keep the existing "voice" and tone of the changelog (sans pronouns
>obviously).  I definitely don't want to bikeshed this thing any further.  If
>I've learned anything by this patch, it's that the only guaranteed outcome of
>changelog-by-committee is that no one will walk away 100% happy :-)

LGTM. I forgive you for not respecting my pronouns. :)

Thanks.

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 41558d13a9a6..7c228ae05df0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1334,6 +1334,7 @@  struct kvm_arch {
 	int nr_vcpus_matched_tsc;
 
 	u32 default_tsc_khz;
+	bool user_set_tsc;
 
 	seqcount_raw_spinlock_t pvclock_sc;
 	bool use_master_clock;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fdb2b0e61c43..776506a77e1b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2709,8 +2709,9 @@  static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
 	kvm_track_tsc_matching(vcpu);
 }
 
-static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
+static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 *user_value)
 {
+	u64 data = user_value ? *user_value : 0;
 	struct kvm *kvm = vcpu->kvm;
 	u64 offset, ns, elapsed;
 	unsigned long flags;
@@ -2725,25 +2726,37 @@  static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 	if (vcpu->arch.virtual_tsc_khz) {
 		if (data == 0) {
 			/*
-			 * detection of vcpu initialization -- need to sync
-			 * with other vCPUs. This particularly helps to keep
-			 * kvm_clock stable after CPU hotplug
+			 * Force synchronization when creating a vCPU, or when
+			 * userspace explicitly writes a zero value.
 			 */
 			synchronizing = true;
-		} else {
+		} else if (kvm->arch.user_set_tsc) {
 			u64 tsc_exp = kvm->arch.last_tsc_write +
 						nsec_to_cycles(vcpu, elapsed);
 			u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
 			/*
-			 * Special case: TSC write with a small delta (1 second)
-			 * of virtual cycle time against real time is
-			 * interpreted as an attempt to synchronize the CPU.
+			 * Here lies UAPI baggage: when a user-initiated TSC write has
+			 * a small delta (1 second) of virtual cycle time against the
+			 * previously set vCPU, we assume that they were intended to be
+			 * in sync and the delta was only due to the racy nature of the
+			 * legacy API.
+			 *
+			 * This trick falls down when restoring a guest which genuinely
+			 * has been running for less time than the 1 second of imprecision
+			 * which we allow for in the legacy API. In this case, the first
+			 * value written by userspace (on any vCPU) should not be subject
+			 * to this 'correction' to make it sync up with values that only
+			 * come from the kernel's default vCPU creation. Make the 1-second
+			 * slop hack only trigger if the user_set_tsc flag is already set.
 			 */
 			synchronizing = data < tsc_exp + tsc_hz &&
 					data + tsc_hz > tsc_exp;
 		}
 	}
 
+	if (user_value)
+		kvm->arch.user_set_tsc = true;
+
 	/*
 	 * For a reliable TSC, we can match TSC offsets, and for an unstable
 	 * TSC, we add elapsed time in this computation.  We could let the
@@ -3869,7 +3882,7 @@  int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		break;
 	case MSR_IA32_TSC:
 		if (msr_info->host_initiated) {
-			kvm_synchronize_tsc(vcpu, data);
+			kvm_synchronize_tsc(vcpu, &data);
 		} else {
 			u64 adj = kvm_compute_l1_tsc_offset(vcpu, data) - vcpu->arch.l1_tsc_offset;
 			adjust_tsc_offset_guest(vcpu, adj);
@@ -5639,6 +5652,7 @@  static int kvm_arch_tsc_set_attr(struct kvm_vcpu *vcpu,
 		tsc = kvm_scale_tsc(rdtsc(), vcpu->arch.l1_tsc_scaling_ratio) + offset;
 		ns = get_kvmclock_base_ns();
 
+		kvm->arch.user_set_tsc = true;
 		__kvm_synchronize_tsc(vcpu, offset, tsc, ns, matched);
 		raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 
@@ -12073,7 +12087,7 @@  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 	if (mutex_lock_killable(&vcpu->mutex))
 		return;
 	vcpu_load(vcpu);
-	kvm_synchronize_tsc(vcpu, 0);
+	kvm_synchronize_tsc(vcpu, NULL);
 	vcpu_put(vcpu);
 
 	/* poll control enabled by default */