
[v8,1/4] KVM: X86: Add vCPU running/preempted state

Message ID 1513128784-5924-2-git-send-email-wanpeng.li@hotmail.com (mailing list archive)
State New, archived

Commit Message

Wanpeng Li Dec. 13, 2017, 1:33 a.m. UTC
From: Wanpeng Li <wanpeng.li@hotmail.com>

This patch reuses the preempted field in kvm_steal_time and exports the
vCPU running/preempted state from the host to the guest. This enables the
guest to send IPIs only to running vCPUs and to set a flag for preempted
vCPUs instead, avoiding pointless waits on vCPUs that are not running.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
 arch/x86/include/uapi/asm/kvm_para.h | 3 +++
 arch/x86/kernel/kvm.c                | 2 +-
 arch/x86/kvm/x86.c                   | 4 ++--
 3 files changed, 6 insertions(+), 3 deletions(-)
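For illustration, the guest-side intent described above can be sketched as a
standalone C snippet. This is not code from this series: the helper name is
hypothetical, and the struct layout is reproduced here only as an assumption
matching the header being patched.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed mirror of the layout touched by this patch; only the
 * preempted field matters for the sketch. */
struct kvm_steal_time {
	uint64_t steal;
	uint32_t version;
	uint32_t flags;
	uint8_t  preempted;
	uint8_t  u8_pad[3];
	uint32_t pad[11];
};

#define KVM_VCPU_PREEMPTED (1 << 0)

/* Hypothetical guest-side check: only send an IPI to a vCPU the host
 * has not marked preempted; a preempted vCPU gets a flag set instead. */
static bool vcpu_worth_kicking(const struct kvm_steal_time *st)
{
	return !(st->preempted & KVM_VCPU_PREEMPTED);
}
```

A guest scheduler or IPI path would consult this per-CPU record before
kicking a remote vCPU, which is the optimization the later patches in the
series build on.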

Comments

David Hildenbrand Dec. 13, 2017, 10:20 a.m. UTC | #1
On 13.12.2017 02:33, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@hotmail.com>
> [...]
> +#define KVM_VCPU_NOT_PREEMPTED      (0 << 0)
> +#define KVM_VCPU_PREEMPTED          (1 << 0)

Is it really helpful to have two flags?

Just use KVM_VCPU_PREEMPTED  and clear that one in record_steal_time()
Wanpeng Li Dec. 13, 2017, 11:38 a.m. UTC | #2
2017-12-13 18:20 GMT+08:00 David Hildenbrand <david@redhat.com>:
> On 13.12.2017 02:33, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpeng.li@hotmail.com>
>> [...]
>> +#define KVM_VCPU_NOT_PREEMPTED      (0 << 0)
>> +#define KVM_VCPU_PREEMPTED          (1 << 0)
>
> Is it really helpful to have two flags?
>
> Just use KVM_VCPU_PREEMPTED  and clear that one in record_steal_time()

I think it is fine since a third flag is introduced in patch 2/4; keeping
both names makes the code clearer.

Regards,
Wanpeng Li
David Hildenbrand Dec. 13, 2017, 11:45 a.m. UTC | #3
On 13.12.2017 12:38, Wanpeng Li wrote:
> 2017-12-13 18:20 GMT+08:00 David Hildenbrand <david@redhat.com>:
>> On 13.12.2017 02:33, Wanpeng Li wrote:
>>> From: Wanpeng Li <wanpeng.li@hotmail.com>
>>> [...]
>>> +#define KVM_VCPU_NOT_PREEMPTED      (0 << 0)
>>> +#define KVM_VCPU_PREEMPTED          (1 << 0)
>>
>> Is it really helpful to have two flags?
>>
>> Just use KVM_VCPU_PREEMPTED  and clear that one in record_steal_time()
> 
> I think it is fine since there is a third flag introduced in patch
> 2/4, it is more clear currently.
> 
> Regards,
> Wanpeng Li
> 

Having two flags representing the same thing is not clear to me.
Paolo Bonzini Dec. 13, 2017, 12:31 p.m. UTC | #4
On 13/12/2017 12:45, David Hildenbrand wrote:
> On 13.12.2017 12:38, Wanpeng Li wrote:
>> 2017-12-13 18:20 GMT+08:00 David Hildenbrand <david@redhat.com>:
>>> On 13.12.2017 02:33, Wanpeng Li wrote:
>>>> From: Wanpeng Li <wanpeng.li@hotmail.com>
>>>> [...]
>>>> +#define KVM_VCPU_NOT_PREEMPTED      (0 << 0)
>>>> +#define KVM_VCPU_PREEMPTED          (1 << 0)
>>>
>>> Is it really helpful to have two flags?
>>>
>>> Just use KVM_VCPU_PREEMPTED  and clear that one in record_steal_time()
>>
>> I think it is fine since there is a third flag introduced in patch
>> 2/4, it is more clear currently.
>>
>> Regards,
>> Wanpeng Li
>>
> 
> Having two flags representing the same thing is not clear to me.

I agree that KVM_VCPU_NOT_PREEMPTED is not particularly necessary, but
it is not correct to clear KVM_VCPU_PREEMPTED; instead, the entire field
must be cleared to zero.

Also, this patch is not justified very well by the commit message.  A
better wording would be:

The next patch will add another bit to the preempted field in
kvm_steal_time.  Define a constant for bit 0 (the only one that is
currently used).

Patch

diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 09cc064..763b692 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -51,6 +51,9 @@  struct kvm_steal_time {
 	__u32 pad[11];
 };
 
+#define KVM_VCPU_NOT_PREEMPTED      (0 << 0)
+#define KVM_VCPU_PREEMPTED          (1 << 0)
+
 #define KVM_CLOCK_PAIRING_WALLCLOCK 0
 struct kvm_clock_pairing {
 	__s64 sec;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index b40ffbf..6610b92 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -643,7 +643,7 @@  __visible bool __kvm_vcpu_is_preempted(long cpu)
 {
 	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
 
-	return !!src->preempted;
+	return !!(src->preempted & KVM_VCPU_PREEMPTED);
 }
 PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9212dad..cdf716a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2131,7 +2131,7 @@  static void record_steal_time(struct kvm_vcpu *vcpu)
 		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
 		return;
 
-	vcpu->arch.st.steal.preempted = 0;
+	vcpu->arch.st.steal.preempted = KVM_VCPU_NOT_PREEMPTED;
 
 	if (vcpu->arch.st.steal.version & 1)
 		vcpu->arch.st.steal.version += 1;  /* first time write, random junk */
@@ -2917,7 +2917,7 @@  static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
 		return;
 
-	vcpu->arch.st.steal.preempted = 1;
+	vcpu->arch.st.steal.preempted = KVM_VCPU_PREEMPTED;
 
 	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
 			&vcpu->arch.st.steal.preempted,