[RFC,4/6] KVM: s390: consider epoch index on TOD clock syncs

Message ID 20180207114647.6220-5-david@redhat.com (mailing list archive)
State New, archived

Commit Message

David Hildenbrand Feb. 7, 2018, 11:46 a.m. UTC
For now, we don't take care of over/underflows. Especially underflows
are critical:

Assume the epoch is currently 0 and we get a sync request for delta=1,
meaning the TOD is moved forward by 1 and we have to fix it up by
subtracting 1 from the epoch. Right now, this will leave the epoch
index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.

We have to take care of over- and underflows, also for the VSIE case. So
let's factor out the calculation into a separate function.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
 1 file changed, 29 insertions(+), 3 deletions(-)
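
To make the borrow described above concrete, here is a minimal standalone
sketch (illustrative only, not kernel code; both halves are kept as 64-bit
values for simplicity) that reproduces the epoch=0, delta=1 case, treating
the epoch as the low half and the epoch index as the high half of one wide
signed value:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t epoch = 0, epdx = 0;	/* current wide epoch: 0 */
	uint64_t delta = 1;		/* the TOD jumps forward by 1 */

	/* compensate by adding -delta to the epoch */
	delta = -delta;

	/* sign-extend -delta into the index half */
	uint64_t delta_idx = ((int64_t)delta < 0) ? -1 : 0;

	epoch += delta;
	epdx += delta_idx;
	if (epoch < delta)	/* carry out of the low half */
		epdx += 1;

	/* prints epoch=0xffffffffffffffff epdx=0xffffffffffffffff, i.e.
	 * -1/-1; without the index fix-up, epdx would wrongly stay 0 */
	printf("epoch=%#" PRIx64 " epdx=%#" PRIx64 "\n", epoch, epdx);
	return 0;
}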

Comments

Collin L. Walling Feb. 7, 2018, 8:08 p.m. UTC | #1
On 02/07/2018 06:46 AM, David Hildenbrand wrote:
> For now, we don't take care of over/underflows. Especially underflows
> are critical:
>
> Assume the epoch is currently 0 and we get a sync request for delta=1,
> meaning the TOD is moved forward by 1 and we have to fix it up by
> subtracting 1 from the epoch. Right now, this will leave the epoch
> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
>
> We have to take care of over- and underflows, also for the VSIE case. So
> let's factor out the calculation into a separate function.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>   1 file changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index d007b737cd4d..c2b62379049e 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>   static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>   			      unsigned long end);
>
> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
> +{
> +	u64 delta_idx = 0;
> +
> +	/*
> +	 * The TOD jumps by delta, we have to compensate this by adding
> +	 * -delta to the epoch.
> +	 */
> +	delta = -delta;
> +
> +	/* sign-extension - we're adding to signed values below */
> +	if ((s64)delta < 0)
> +		delta_idx = 0xff;
> +
> +	scb->epoch += delta;
> +	if (scb->ecd & ECD_MEF) {
> +		scb->epdx += delta_idx;
> +		if (scb->epoch < delta)
> +			scb->epdx += 1;
> +	}
> +}
> +

Is the sync always a jump forward? Do we need to worry about a borrow 
from the epdx in case of underflow?

>   /*
>    * This callback is executed during stop_machine(). All CPUs are therefore
>    * temporarily stopped. In order not to change guest behavior, we have to
> @@ -194,13 +216,17 @@ static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
>   	unsigned long long *delta = v;
>
>   	list_for_each_entry(kvm, &vm_list, vm_list) {
> -		kvm->arch.epoch -= *delta;
>   		kvm_for_each_vcpu(i, vcpu, kvm) {
> -			vcpu->arch.sie_block->epoch -= *delta;
> +			kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
> +			if (i == 0) {
> +				kvm->arch.epoch = vcpu->arch.sie_block->epoch;
> +				kvm->arch.epdx = vcpu->arch.sie_block->epdx;

Are we safe by setting the kvm epochs to the sie epochs wrt migration?

> +			}
>   			if (vcpu->arch.cputm_enabled)
>   				vcpu->arch.cputm_start += *delta;
>   			if (vcpu->arch.vsie_block)
> -				vcpu->arch.vsie_block->epoch -= *delta;
> +				kvm_clock_sync_scb(vcpu->arch.vsie_block,
> +						   *delta);
>   		}
>   	}
>   	return NOTIFY_OK;
David Hildenbrand Feb. 7, 2018, 9:35 p.m. UTC | #2
On 07.02.2018 21:08, Collin L. Walling wrote:
> On 02/07/2018 06:46 AM, David Hildenbrand wrote:
>> For now, we don't take care of over/underflows. Especially underflows
>> are critical:
>>
>> Assume the epoch is currently 0 and we get a sync request for delta=1,
>> meaning the TOD is moved forward by 1 and we have to fix it up by
>> subtracting 1 from the epoch. Right now, this will leave the epoch
>> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
>>
>> We have to take care of over- and underflows, also for the VSIE case. So
>> let's factor out the calculation into a separate function.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>   arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>>   1 file changed, 29 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index d007b737cd4d..c2b62379049e 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>>   static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>>   			      unsigned long end);
>>
>> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
>> +{
>> +	u64 delta_idx = 0;
>> +
>> +	/*
>> +	 * The TOD jumps by delta, we have to compensate this by adding
>> +	 * -delta to the epoch.
>> +	 */
>> +	delta = -delta;
>> +
>> +	/* sign-extension - we're adding to signed values below */
>> +	if ((s64)delta < 0)
>> +		delta_idx = 0xff;
>> +
>> +	scb->epoch += delta;
>> +	if (scb->ecd & ECD_MEF) {
>> +		scb->epdx += delta_idx;
>> +		if (scb->epoch < delta)
>> +			scb->epdx += 1;
>> +	}
>> +}
>> +
> 
> Is the sync always a jump forward? Do we need to worry about a borrow 
> from the epdx in case of underflow?

It can jump forward and backwards, so delta can be positive or negative,
resulting in -delta being positive or negative.

The rules of unsigned addition should make sure that all cases are
covered. (I tried to find a counterexample but wasn't able to find one.)

(In particular, this is the same code pattern as found in
arch/s390/kvm/vsie.c:register_shadow_scb(), which also adds two signed
numbers.)
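
A quick standalone check of those edge cases (a sketch, assuming a
GCC/Clang-style 128-bit integer as the reference; not kernel code):

#include <assert.h>
#include <stdint.h>

static void check(uint64_t epoch, uint64_t epdx, uint64_t tod_delta)
{
	uint64_t delta = -tod_delta;			/* as in the patch */
	uint64_t idx = ((int64_t)delta < 0) ? -1 : 0;	/* sign-extension */

	/* reference result: the same addition done natively on 128 bit */
	unsigned __int128 ref = ((unsigned __int128)epdx << 64) | epoch;
	ref += ((unsigned __int128)idx << 64) | delta;

	/* the two-word pattern from kvm_clock_sync_scb() */
	uint64_t lo = epoch + delta;
	uint64_t hi = epdx + idx + (lo < delta ? 1 : 0);

	assert(lo == (uint64_t)ref && hi == (uint64_t)(ref >> 64));
}

int main(void)
{
	check(0, 0, 1);			/* the underflow from the commit message */
	check(-1ULL, -1ULL, -1ULL);	/* backward jump, carry into the index */
	check(INT64_MAX, 0, -1ULL);	/* overflow of the low word */
	check(INT64_MIN, -1ULL, 1);	/* underflow of the low word */
	return 0;
}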

> 
>>   /*
>>    * This callback is executed during stop_machine(). All CPUs are therefore
>>    * temporarily stopped. In order not to change guest behavior, we have to
>> @@ -194,13 +216,17 @@ static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
>>   	unsigned long long *delta = v;
>>
>>   	list_for_each_entry(kvm, &vm_list, vm_list) {
>> -		kvm->arch.epoch -= *delta;
>>   		kvm_for_each_vcpu(i, vcpu, kvm) {
>> -			vcpu->arch.sie_block->epoch -= *delta;
>> +			kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
>> +			if (i == 0) {
>> +				kvm->arch.epoch = vcpu->arch.sie_block->epoch;
>> +				kvm->arch.epdx = vcpu->arch.sie_block->epdx;
> 
> Are we safe by setting the kvm epochs to the sie epochs wrt migration?

Yes, in fact they should be the same for all VCPUs, otherwise we are in
trouble. The TOD has to be the same across all VCPUs.

So we should always have
- kvm->arch.epoch == vcpu->arch.sie_block->epoch
- kvm->arch.epdx == vcpu->arch.sie_block->epdx
for all VCPUs, otherwise their TOD could differ.

This is also guaranteed by the way we calculate and update these numbers.

The only special case is if somebody were using
set_one_reg/get_one_reg with KVM_REG_S390_EPOCHDIFF, setting different
values - very unlikely and bad.
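
For reference, those accessors go through KVM's one-reg ioctl interface; a
minimal userspace sketch, where vcpu_fd is assumed to be an open VCPU file
descriptor:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* set one VCPU's epoch difference - doing this with different values on
 * different VCPUs is exactly the "unlikely and bad" case above */
static int set_epochdiff(int vcpu_fd, uint64_t epochdiff)
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_S390_EPOCHDIFF,
		.addr = (uint64_t)&epochdiff,
	};

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}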

> 
>> +			}
>>   			if (vcpu->arch.cputm_enabled)
>>   				vcpu->arch.cputm_start += *delta;
>>   			if (vcpu->arch.vsie_block)
>> -				vcpu->arch.vsie_block->epoch -= *delta;
>> +				kvm_clock_sync_scb(vcpu->arch.vsie_block,
>> +						   *delta);
>>   		}
>>   	}
>>   	return NOTIFY_OK;
>
Collin L. Walling Feb. 7, 2018, 10:43 p.m. UTC | #3
On 02/07/2018 04:35 PM, David Hildenbrand wrote:
> On 07.02.2018 21:08, Collin L. Walling wrote:
>> On 02/07/2018 06:46 AM, David Hildenbrand wrote:
>>> For now, we don't take care of over/underflows. Especially underflows
>>> are critical:
>>>
>>> Assume the epoch is currently 0 and we get a sync request for delta=1,
>>> meaning the TOD is moved forward by 1 and we have to fix it up by
>>> subtracting 1 from the epoch. Right now, this will leave the epoch
>>> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
>>>
>>> We have to take care of over- and underflows, also for the VSIE case. So
>>> let's factor out the calculation into a separate function.
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>    arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>>>    1 file changed, 29 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>> index d007b737cd4d..c2b62379049e 100644
>>> --- a/arch/s390/kvm/kvm-s390.c
>>> +++ b/arch/s390/kvm/kvm-s390.c
>>> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>>>    static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>>>    			      unsigned long end);
>>>
>>> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
>>> +{
>>> +	u64 delta_idx = 0;
>>> +
>>> +	/*
>>> +	 * The TOD jumps by delta, we have to compensate this by adding
>>> +	 * -delta to the epoch.
>>> +	 */
>>> +	delta = -delta;
>>> +
>>> +	/* sign-extension - we're adding to signed values below */
>>> +	if ((s64)delta < 0)
>>> +		delta_idx = 0xff;
>>> +
>>> +	scb->epoch += delta;
>>> +	if (scb->ecd & ECD_MEF) {
>>> +		scb->epdx += delta_idx;
>>> +		if (scb->epoch < delta)
>>> +			scb->epdx += 1;
>>> +	}
>>> +}
>>> +
>> Is the sync always a jump forward? Do we need to worry about a borrow
>> from the epdx in case of underflow?
> It can jump forward and backwards, so delta can be positive or negative,
> resulting in -delta being positive or negative.
>
> The rules of unsigned addition should make sure that all cases are
> covered. (I tried to find a counterexample but wasn't able to find one.)

Agreed. I just wrote down a few edge cases myself... it seems to check 
out nicely.

>
> (In particular, this is the same code pattern as found in
> arch/s390/kvm/vsie.c:register_shadow_scb(), which also adds two signed
> numbers.)
>
>>>    /*
>>>     * This callback is executed during stop_machine(). All CPUs are therefore
>>>     * temporarily stopped. In order not to change guest behavior, we have to
>>> @@ -194,13 +216,17 @@ static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
>>>    	unsigned long long *delta = v;
>>>
>>>    	list_for_each_entry(kvm, &vm_list, vm_list) {
>>> -		kvm->arch.epoch -= *delta;
>>>    		kvm_for_each_vcpu(i, vcpu, kvm) {
>>> -			vcpu->arch.sie_block->epoch -= *delta;
>>> +			kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
>>> +			if (i == 0) {
>>> +				kvm->arch.epoch = vcpu->arch.sie_block->epoch;
>>> +				kvm->arch.epdx = vcpu->arch.sie_block->epdx;
>> Are we safe by setting the kvm epochs to the sie epochs wrt migration?
> Yes, in fact they should be the same for all VCPUs, otherwise we are in
> trouble. The TOD has to be the same across all VCPUs.
>
> So we should always have
> - kvm->arch.epoch == vcpu->arch.sie_block->epoch
> - kvm->arch.epdx == vcpu->arch.sie_block->epdx
> for all VCPUs, otherwise their TOD could differ.

Perhaps then this could be shortened to calculate the epochs only once,
then set each vcpu to those values instead of calculating on each iteration?

I imagine the number of iterations would never be large enough to cause any
considerable performance hits, though.

>
> This is also guaranteed by the way we calculate and update these numbers.
>
> The only special case is if somebody were using
> set_one_reg/get_one_reg with KVM_REG_S390_EPOCHDIFF, setting different
> values - very unlikely and bad.
>
>>> +			}
>>>    			if (vcpu->arch.cputm_enabled)
>>>    				vcpu->arch.cputm_start += *delta;
>>>    			if (vcpu->arch.vsie_block)
>>> -				vcpu->arch.vsie_block->epoch -= *delta;
>>> +				kvm_clock_sync_scb(vcpu->arch.vsie_block,
>>> +						   *delta);
>>>    		}
>>>    	}
>>>    	return NOTIFY_OK;
>
David Hildenbrand Feb. 8, 2018, 12:15 p.m. UTC | #4
>> The rules of unsigned addition should make sure that all cases are
>> covered. (I tried to find a counterexample but wasn't able to find one.)
> 
> Agreed. I just wrote down a few edge cases myself... it seems to check 
> out nicely.
> 
>>
>> (In particular, this is the same code pattern as found in
>> arch/s390/kvm/vsie.c:register_shadow_scb(), which also adds two signed
>> numbers.)
>>
>>>>    /*
>>>>     * This callback is executed during stop_machine(). All CPUs are therefore
>>>>     * temporarily stopped. In order not to change guest behavior, we have to
>>>> @@ -194,13 +216,17 @@ static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
>>>>    	unsigned long long *delta = v;
>>>>
>>>>    	list_for_each_entry(kvm, &vm_list, vm_list) {
>>>> -		kvm->arch.epoch -= *delta;
>>>>    		kvm_for_each_vcpu(i, vcpu, kvm) {
>>>> -			vcpu->arch.sie_block->epoch -= *delta;
>>>> +			kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
>>>> +			if (i == 0) {
>>>> +				kvm->arch.epoch = vcpu->arch.sie_block->epoch;
>>>> +				kvm->arch.epdx = vcpu->arch.sie_block->epdx;
>>> Are we safe by setting the kvm epochs to the sie epochs wrt migration?
>> Yes, in fact they should be the same for all VCPUs, otherwise we are in
>> trouble. The TOD has to be the same across all VCPUs.
>>
>> So we should always have
>> - kvm->arch.epoch == vcpu->arch.sie_block->epoch
>> - kvm->arch.epdx == vcpu->arch.sie_block->epdx
>> for all VCPUs, otherwise their TOD could differ.
> 
> Perhaps then this could be shortened to calculate the epochs only once,
> then set each vcpu to those values instead of calculating on each iteration?
> 

I had that before, but changed it to this, especially because some weird user
space could set the epochs differently on different CPUs (e.g. for testing
purposes or IDK).

So something like this is not shorter and possibly performs fewer calculations:


        list_for_each_entry(kvm, &vm_list, vm_list) {
                kvm_for_each_vcpu(i, vcpu, kvm) {
-                       kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
                        if (i == 0) {
+                               kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
                                kvm->arch.epoch = vcpu->arch.sie_block->epoch;
                                kvm->arch.epdx = vcpu->arch.sie_block->epdx;
+                       } else {
+                               vcpu->arch.sie_block->epoch = kvm->arch.epoch;
+                               vcpu->arch.sie_block->epdx = kvm->arch.epdx;
                        }
                        if (vcpu->arch.cputm_enabled)
                                vcpu->arch.cputm_start += *delta;

I'll let the Maintainers decide :)

> I imagine the number of iterations would never be large enough to cause any
> considerable performance hits, though.

Thanks!
Christian Borntraeger Feb. 20, 2018, 3:36 p.m. UTC | #5
On 02/07/2018 12:46 PM, David Hildenbrand wrote:
> For now, we don't take care of over/underflows. Especially underflows
> are critical:
> 
> Assume the epoch is currently 0 and we get a sync request for delta=1,
> meaning the TOD is moved forward by 1 and we have to fix it up by
> subtracting 1 from the epoch. Right now, this will leave the epoch
> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
> 
> We have to take care of over- and underflows, also for the VSIE case. So
> let's factor out the calculation into a separate function.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>  1 file changed, 29 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index d007b737cd4d..c2b62379049e 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>  static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>  			      unsigned long end);
> 
> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
> +{
> +	u64 delta_idx = 0;

we only add it to epdx, so should it be u8?
> +
> +	/*
> +	 * The TOD jumps by delta, we have to compensate this by adding
> +	 * -delta to the epoch.
> +	 */
> +	delta = -delta;
> +
> +	/* sign-extension - we're adding to signed values below */
> +	if ((s64)delta < 0)
> +		delta_idx = 0xff;

		and -1 then here?

> +
> +	scb->epoch += delta;
> +	if (scb->ecd & ECD_MEF) {
> +		scb->epdx += delta_idx;
> +		if (scb->epoch < delta)
> +			scb->epdx += 1;
David Hildenbrand Feb. 20, 2018, 3:41 p.m. UTC | #6
On 20.02.2018 16:36, Christian Borntraeger wrote:
> 
> 
> On 02/07/2018 12:46 PM, David Hildenbrand wrote:
>> For now, we don't take care of over/underflows. Especially underflows
>> are critical:
>>
>> Assume the epoch is currently 0 and we get a sync request for delta=1,
>> meaning the TOD is moved forward by 1 and we have to fix it up by
>> subtracting 1 from the epoch. Right now, this will leave the epoch
>> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
>>
>> We have to take care of over- and underflows, also for the VSIE case. So
>> let's factor out the calculation into a separate function.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>>  1 file changed, 29 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index d007b737cd4d..c2b62379049e 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>>  static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>>  			      unsigned long end);
>>
>> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
>> +{
>> +	u64 delta_idx = 0;
> 
> we only add it to epdx, so should it be u8?

Indeed, this should be u8.

>> +
>> +	/*
>> +	 * The TOD jumps by delta, we have to compensate this by adding
>> +	 * -delta to the epoch.
>> +	 */
>> +	delta = -delta;
>> +
>> +	/* sign-extension - we're adding to signed values below */
>> +	if ((s64)delta < 0)
>> +		delta_idx = 0xff;
> 
> 		and -1 then here?

Yes, thanks!

> 
>> +
>> +	scb->epoch += delta;
>> +	if (scb->ecd & ECD_MEF) {
>> +		scb->epdx += delta_idx;
>> +		if (scb->epoch < delta)
>> +			scb->epdx += 1;
> 
>
Christian Borntraeger Feb. 20, 2018, 3:56 p.m. UTC | #7
On 02/20/2018 04:41 PM, David Hildenbrand wrote:
> On 20.02.2018 16:36, Christian Borntraeger wrote:
>>
>>
>> On 02/07/2018 12:46 PM, David Hildenbrand wrote:
>>> For now, we don't take care of over/underflows. Especially underflows
>>> are critical:
>>>
>>> Assume the epoch is currently 0 and we get a sync request for delta=1,
>>> meaning the TOD is moved forward by 1 and we have to fix it up by
>>> subtracting 1 from the epoch. Right now, this will leave the epoch
>>> index untouched, resulting in epoch=-1, epoch_idx=0, which is wrong.
>>>
>>> We have to take care of over- and underflows, also for the VSIE case. So
>>> let's factor out the calculation into a separate function.
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>  arch/s390/kvm/kvm-s390.c | 32 +++++++++++++++++++++++++++++---
>>>  1 file changed, 29 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>> index d007b737cd4d..c2b62379049e 100644
>>> --- a/arch/s390/kvm/kvm-s390.c
>>> +++ b/arch/s390/kvm/kvm-s390.c
>>> @@ -179,6 +179,28 @@ int kvm_arch_hardware_enable(void)
>>>  static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
>>>  			      unsigned long end);
>>>
>>> +static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
>>> +{
>>> +	u64 delta_idx = 0;
>>
>> we only add it to epdx, so should it be u8?
> 
> Indeed, this should be u8.
> 
>>> +
>>> +	/*
>>> +	 * The TOD jumps by delta, we have to compensate this by adding
>>> +	 * -delta to the epoch.
>>> +	 */
>>> +	delta = -delta;
>>> +
>>> +	/* sign-extension - we're adding to signed values below */
>>> +	if ((s64)delta < 0)
>>> +		delta_idx = 0xff;
>>
>> 		and -1 then here?
> 
> Yes, thanks!

with that 
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>

applied and changed to u8. 

I plan to submit patches 1, 3 and 4 for 4.16.
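
For reference, a sketch of the helper with both review comments folded in
(delta_idx as a u8, set to -1 for the sign-extension), assuming no further
changes in the applied version:

static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
{
	u8 delta_idx = 0;

	/*
	 * The TOD jumps by delta, we have to compensate this by adding
	 * -delta to the epoch.
	 */
	delta = -delta;

	/* sign-extension - we're adding to signed values below */
	if ((s64)delta < 0)
		delta_idx = -1;

	scb->epoch += delta;
	if (scb->ecd & ECD_MEF) {
		scb->epdx += delta_idx;
		if (scb->epoch < delta)
			scb->epdx += 1;
	}
}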

Patch

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d007b737cd4d..c2b62379049e 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -179,6 +179,28 @@  int kvm_arch_hardware_enable(void)
 static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
 			      unsigned long end);
 
+static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
+{
+	u64 delta_idx = 0;
+
+	/*
+	 * The TOD jumps by delta, we have to compensate this by adding
+	 * -delta to the epoch.
+	 */
+	delta = -delta;
+
+	/* sign-extension - we're adding to signed values below */
+	if ((s64)delta < 0)
+		delta_idx = 0xff;
+
+	scb->epoch += delta;
+	if (scb->ecd & ECD_MEF) {
+		scb->epdx += delta_idx;
+		if (scb->epoch < delta)
+			scb->epdx += 1;
+	}
+}
+
 /*
  * This callback is executed during stop_machine(). All CPUs are therefore
  * temporarily stopped. In order not to change guest behavior, we have to
@@ -194,13 +216,17 @@  static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
 	unsigned long long *delta = v;
 
 	list_for_each_entry(kvm, &vm_list, vm_list) {
-		kvm->arch.epoch -= *delta;
 		kvm_for_each_vcpu(i, vcpu, kvm) {
-			vcpu->arch.sie_block->epoch -= *delta;
+			kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
+			if (i == 0) {
+				kvm->arch.epoch = vcpu->arch.sie_block->epoch;
+				kvm->arch.epdx = vcpu->arch.sie_block->epdx;
+			}
 			if (vcpu->arch.cputm_enabled)
 				vcpu->arch.cputm_start += *delta;
 			if (vcpu->arch.vsie_block)
-				vcpu->arch.vsie_block->epoch -= *delta;
+				kvm_clock_sync_scb(vcpu->arch.vsie_block,
+						   *delta);
 		}
 	}
 	return NOTIFY_OK;