diff mbox

[v6,11/13] KVM: arm64: Handle RAS SErrors from EL1 on guest exit

Message ID 20180115193906.30053-12-james.morse@arm.com (mailing list archive)
State New, archived

Commit Message

James Morse Jan. 15, 2018, 7:39 p.m. UTC
We expect to have firmware-first handling of RAS SErrors, with errors
notified via an APEI method. For systems without firmware-first, add
some minimal handling to KVM.

There are two ways KVM can take an SError due to a guest, either of which may be
a RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO,
or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit.

For SErrors that interrupt a guest and are routed to EL2, the existing
behaviour is to inject an impdef SError into the guest.

Add code to handle RAS SErrors based on the ESR. For uncontained and
uncategorized errors arm64_is_fatal_ras_serror() will panic(); these
errors compromise the host too. All other error types are contained:
for the fatal errors the vCPU can't make progress, so we inject a virtual
SError. We ignore contained errors where we can make progress: if
we're lucky, we may not hit them again.

If only some of the CPUs support RAS, the guest will see the cpufeature-
sanitised version of the id registers, but we may still take a RAS SError
on this CPU. Move the SError handling out of handle_exit() into a new
handler that runs before we can be preempted. This allows us to use
this_cpu_has_cap(), via arm64_is_ras_serror().

Signed-off-by: James Morse <james.morse@arm.com>
---
Changes since v4:
 * Moved SError handling into handle_exit_early(). This will need to move
   earlier, into an SError-masked region once we support kernel-first.
   (hence the vague name)
 * Dropped Marc & Christoffer's Reviewed-by due to handle_exit_early().

 arch/arm/include/asm/kvm_host.h   |  3 +++
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/handle_exit.c      | 18 +++++++++++++++++-
 virt/kvm/arm/arm.c                |  3 +++
 4 files changed, 25 insertions(+), 1 deletion(-)

Comments

Marc Zyngier Jan. 16, 2018, 9:29 a.m. UTC | #1
On 15/01/18 19:39, James Morse wrote:
> We expect to have firmware-first handling of RAS SErrors, with errors
> notified via an APEI method. For systems without firmware-first, add
> some minimal handling to KVM.
> 
> There are two ways KVM can take an SError due to a guest, either may be a
> RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO,
> or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit.
> 
> For SError that interrupt a guest and are routed to EL2 the existing
> behaviour is to inject an impdef SError into the guest.
> 
> Add code to handle RAS SError based on the ESR. For uncontained and
> uncategorized errors arm64_is_fatal_ras_serror() will panic(), these
> errors compromise the host too. All other error types are contained:
> For the fatal errors the vCPU can't make progress, so we inject a virtual
> SError. We ignore contained errors where we can make progress as if
> we're lucky, we may not hit them again.
> 
> If only some of the CPUs support RAS the guest will see the cpufeature
> sanitised version of the id registers, but we may still take RAS SError
> on this CPU. Move the SError handling out of handle_exit() into a new
> handler that runs before we can be preempted. This allows us to use
> this_cpu_has_cap(), via arm64_is_ras_serror().
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v4:
>  * Moved SError handling into handle_exit_early(). This will need to move
>    earlier, into an SError-masked region once we support kernel-first.
>    (hence the vague name)
>  * Dropped Marc & Christoffer's Reviewed-by due to handle_exit_early().
> 
>  arch/arm/include/asm/kvm_host.h   |  3 +++
>  arch/arm64/include/asm/kvm_host.h |  2 ++
>  arch/arm64/kvm/handle_exit.c      | 18 +++++++++++++++++-
>  virt/kvm/arm/arm.c                |  3 +++
>  4 files changed, 25 insertions(+), 1 deletion(-)

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
Christoffer Dall Jan. 19, 2018, 7:20 p.m. UTC | #2
On Mon, Jan 15, 2018 at 07:39:04PM +0000, James Morse wrote:
> We expect to have firmware-first handling of RAS SErrors, with errors
> notified via an APEI method. For systems without firmware-first, add
> some minimal handling to KVM.
> 
> There are two ways KVM can take an SError due to a guest, either may be a
> RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO,
> or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit.
> 
> For SError that interrupt a guest and are routed to EL2 the existing
> behaviour is to inject an impdef SError into the guest.
> 
> Add code to handle RAS SError based on the ESR. For uncontained and
> uncategorized errors arm64_is_fatal_ras_serror() will panic(), these
> errors compromise the host too. All other error types are contained:
> For the fatal errors the vCPU can't make progress, so we inject a virtual
> SError. We ignore contained errors where we can make progress as if
> we're lucky, we may not hit them again.
> 
> If only some of the CPUs support RAS the guest will see the cpufeature
> sanitised version of the id registers, but we may still take RAS SError
> on this CPU. Move the SError handling out of handle_exit() into a new
> handler that runs before we can be preempted. This allows us to use
> this_cpu_has_cap(), via arm64_is_ras_serror().

Would it be possible to optimize this a bit later on by caching
this_cpu_has_cap() in vcpu_load() so that we can use a single
handle_exit function to process all exits?

Thanks,
-Christoffer

> 
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v4:
>  * Moved SError handling into handle_exit_early(). This will need to move
>    earlier, into an SError-masked region once we support kernel-first.
>    (hence the vague name)
>  * Dropped Marc & Christoffer's Reviewed-by due to handle_exit_early().
> 
>  arch/arm/include/asm/kvm_host.h   |  3 +++
>  arch/arm64/include/asm/kvm_host.h |  2 ++
>  arch/arm64/kvm/handle_exit.c      | 18 +++++++++++++++++-
>  virt/kvm/arm/arm.c                |  3 +++
>  4 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index b86fc4162539..acbf9ec7b396 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -238,6 +238,9 @@ int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
>  int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		int exception_index);
>  
> +static inline void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +				     int exception_index) {}
> +
>  static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
>  				       unsigned long hyp_stack_ptr,
>  				       unsigned long vector_ptr)
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 84fcb2a896a1..abcfd164e690 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -347,6 +347,8 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
>  
>  int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		int exception_index);
> +void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +		       int exception_index);
>  
>  int kvm_perf_init(void);
>  int kvm_perf_teardown(void);
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 304203fa9e33..6a5a5db4292f 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -29,12 +29,19 @@
>  #include <asm/kvm_mmu.h>
>  #include <asm/kvm_psci.h>
>  #include <asm/debug-monitors.h>
> +#include <asm/traps.h>
>  
>  #define CREATE_TRACE_POINTS
>  #include "trace.h"
>  
>  typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
>  
> +static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
> +{
> +	if (!arm64_is_ras_serror(esr) || arm64_is_fatal_ras_serror(NULL, esr))
> +		kvm_inject_vabt(vcpu);
> +}
> +
>  static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
>  	int ret;
> @@ -252,7 +259,6 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	case ARM_EXCEPTION_IRQ:
>  		return 1;
>  	case ARM_EXCEPTION_EL1_SERROR:
> -		kvm_inject_vabt(vcpu);
>  		/* We may still need to return for single-step */
>  		if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS)
>  			&& kvm_arm_handle_step_debug(vcpu, run))
> @@ -275,3 +281,13 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		return 0;
>  	}
>  }
> +
> +/* For exit types that need handling before we can be preempted */
> +void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +		       int exception_index)
> +{
> +	exception_index = ARM_EXCEPTION_CODE(exception_index);
> +
> +	if (exception_index == ARM_EXCEPTION_EL1_SERROR)
> +		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_hsr(vcpu));
> +}
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 38e81631fc91..15bf026eb182 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -763,6 +763,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		guest_exit();
>  		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>  
> +		/* Exit types that need handling before we can be preempted */
> +		handle_exit_early(vcpu, run, ret);
> +
>  		preempt_enable();
>  
>  		ret = handle_exit(vcpu, run, ret);
> -- 
> 2.15.1
>
James Morse Jan. 22, 2018, 6:18 p.m. UTC | #3
Hi Christoffer,

On 19/01/18 19:20, Christoffer Dall wrote:
> On Mon, Jan 15, 2018 at 07:39:04PM +0000, James Morse wrote:
>> We expect to have firmware-first handling of RAS SErrors, with errors
>> notified via an APEI method. For systems without firmware-first, add
>> some minimal handling to KVM.
>>
>> There are two ways KVM can take an SError due to a guest, either may be a
>> RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO,
>> or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit.
>>
>> For SError that interrupt a guest and are routed to EL2 the existing
>> behaviour is to inject an impdef SError into the guest.
>>
>> Add code to handle RAS SError based on the ESR. For uncontained and
>> uncategorized errors arm64_is_fatal_ras_serror() will panic(), these
>> errors compromise the host too. All other error types are contained:
>> For the fatal errors the vCPU can't make progress, so we inject a virtual
>> SError. We ignore contained errors where we can make progress as if
>> we're lucky, we may not hit them again.
>>
>> If only some of the CPUs support RAS the guest will see the cpufeature
>> sanitised version of the id registers, but we may still take RAS SError
>> on this CPU. Move the SError handling out of handle_exit() into a new
>> handler that runs before we can be preempted. This allows us to use
>> this_cpu_has_cap(), via arm64_is_ras_serror().
> 
> Would it be possible to optimize this a bit later on by caching
> this_cpu_has_cap() in vcpu_load() so that we can use a single
> handle_exit function to process all exits?

If vcpu_load() prevents pre-emption between the guest-exit exception and the
this_cpu_has_cap() test then we wouldn't need a separate handle_exit().

But, if we support kernel-first RAS or firmware-first's NOTIFY_SEI we shouldn't
unmask SError until we've fed the guest-exit:SError into the RAS code. This
would also need the SError related handle_exit() calls to be separate/earlier.
(there was some verbiage on this in the cover letter).

(I started down the 'make handle_exit() non-preemptible', but WF{E,I}'s
kvm_vcpu_block()->schedule() and kvm_vcpu_on_spin()'s use of kvm_vcpu_yield_to()
put an end to that).


In terms of caching the this_cpu_has_cap() value, is this due to a performance
concern? It's all called behind 'exception_index == ARM_EXCEPTION_EL1_SERROR',
so we've already taken an SError out of the guest. Once it's all put together
we're likely to have a pending signal for user-space.
'Corrected' (or at least ignorable) errors are going to be the odd one out; I
don't think we should worry about these!


Thanks,

James
Christoffer Dall Jan. 23, 2018, 3:32 p.m. UTC | #4
On Mon, Jan 22, 2018 at 06:18:54PM +0000, James Morse wrote:
> Hi Christoffer,
> 
> On 19/01/18 19:20, Christoffer Dall wrote:
> > On Mon, Jan 15, 2018 at 07:39:04PM +0000, James Morse wrote:
> >> We expect to have firmware-first handling of RAS SErrors, with errors
> >> notified via an APEI method. For systems without firmware-first, add
> >> some minimal handling to KVM.
> >>
> >> There are two ways KVM can take an SError due to a guest, either may be a
> >> RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO,
> >> or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit.
> >>
> >> For SError that interrupt a guest and are routed to EL2 the existing
> >> behaviour is to inject an impdef SError into the guest.
> >>
> >> Add code to handle RAS SError based on the ESR. For uncontained and
> >> uncategorized errors arm64_is_fatal_ras_serror() will panic(), these
> >> errors compromise the host too. All other error types are contained:
> >> For the fatal errors the vCPU can't make progress, so we inject a virtual
> >> SError. We ignore contained errors where we can make progress as if
> >> we're lucky, we may not hit them again.
> >>
> >> If only some of the CPUs support RAS the guest will see the cpufeature
> >> sanitised version of the id registers, but we may still take RAS SError
> >> on this CPU. Move the SError handling out of handle_exit() into a new
> >> handler that runs before we can be preempted. This allows us to use
> >> this_cpu_has_cap(), via arm64_is_ras_serror().
> > 
> > Would it be possible to optimize this a bit later on by caching
> > this_cpu_has_cap() in vcpu_load() so that we can use a single
> > handle_exit function to process all exits?
> 
> If vcpu_load() prevents pre-emption between the guest-exit exception and the
> this_cpu_has_cap() test then we wouldn't need a separate handle_exit().

It doesn't, but you'd get another vcpu_put() / vcpu_load() if you get
preempted, and you could record anything you need to know about the CPU
that actually ran the guest in vcpu_put().

So it might be possible to call some "process pending serror" function
in vcpu_put().

> 
> But, if we support kernel-first RAS or firmware-first's NOTIFY_SEI we shouldn't
> unmask SError until we've fed the guest-exit:SError into the RAS code. This
> would also need the SError related handle_exit() calls to be separate/earlier.
> (there was some verbiage on this in the cover letter).

Yeah, I sort-of understood where this was going...

> 
> (I started down the 'make handle_exit() non-preemptible', but WF{E,I}'s
> kvm_vcpu_block()->schedule() and kvm_vcpu_on_spin()s use of kvm_vcpu_yield_to()
> put an end to that).

It's not clear to me exactly how that would work, as handle_exit() can
also block on stuff like allocating memory.  I suppose enabling
preemption could be per exit reason, but that might be hard to maintain.

> 
> 
> In terms of caching this_cpu_has_cap() value, is this due to a performance
> concern? It's all called behind 'exception_index == ARM_EXCEPTION_EL1_SERROR',
> so we've already taken an SError out of the guest. Once it's all put together
> we're likely to have a pending signal for user-space.
> 'Corrected' (or at least ignorable) errors are going to be the odd one out, I
> don't think we should worry about these!

The performance concern is having to call another function to check the
return value again in the critical path.  On older implementations this
kind of thing is actually measurable, and there's a tendency to add a
call here and a call there for any new aspect of the architecture, and
it will eventually weigh things down, I believe.  On the other hand,
having a "process some things before we enable preemption" which is your
handle_exit_early() function (could this also have been called
handle_exit_nopreempt()?) is a potentially generally useful thing to
have and a reasonable thing overall.

Anyway, I was just trying to spitball a bit on the topic, no immediate
change required.

Thanks,
-Christoffer
James Morse Jan. 30, 2018, 7:18 p.m. UTC | #5
Hi Christoffer,

On 23/01/18 15:32, Christoffer Dall wrote:
> On Mon, Jan 22, 2018 at 06:18:54PM +0000, James Morse wrote:
>> On 19/01/18 19:20, Christoffer Dall wrote:
>>> On Mon, Jan 15, 2018 at 07:39:04PM +0000, James Morse wrote:
>>>> If only some of the CPUs support RAS the guest will see the cpufeature
>>>> sanitised version of the id registers, but we may still take RAS SError
>>>> on this CPU. Move the SError handling out of handle_exit() into a new
>>>> handler that runs before we can be preempted. This allows us to use
>>>> this_cpu_has_cap(), via arm64_is_ras_serror().
>>>
>>> Would it be possible to optimize this a bit later on by caching
>>> this_cpu_has_cap() in vcpu_load() so that we can use a single
>>> handle_exit function to process all exits?
>>
>> If vcpu_load() prevents pre-emption between the guest-exit exception and the
>> this_cpu_has_cap() test then we wouldn't need a separate handle_exit().
> 
> It doesn't, but you'd get another vcpu_put() / vcpu_load() if you get
> preempted, and you could record anything you need to know about the CPU
> that actually ran the guest in vcpu_put().

Snazzy!

> So it might be possible to call some "process pending serror" function
> in vcpu_put().

Hmm, maybe. When we exit the guest it's because we've had a notification that an error
occurred, but we don't know what/where yet. The case that worries me is we
reschedule() onto some other affected task, and it gets notification of the
error too.

For notifications signalled by an SError I'd like to feed them into the RAS
machinery before we unmask SError on the host, so that the first error is
handled first. Otherwise KVM has to eyeball the SError ESR and guess as to
whether the host is affected by the error, before re-enabling preemption on the
grounds it 'probably only affects this guest'.


>> But, if we support kernel-first RAS or firmware-first's NOTIFY_SEI we shouldn't
>> unmask SError until we've fed the guest-exit:SError into the RAS code. This
>> would also need the SError related handle_exit() calls to be separate/earlier.
>> (there was some verbiage on this in the cover letter).
> 
> Yeah, I sort-of understood where this was going...

(sorry, I assume not everyone reads the cover letter!)


>> (I started down the 'make handle_exit() non-preemptible', but WF{E,I}'s
>> kvm_vcpu_block()->schedule() and kvm_vcpu_on_spin()s use of kvm_vcpu_yield_to()
>> put an end to that).
> 
> It's not clear to me exactly how that would work, as handle_exit() can
> also block on stuff like allocating memory.  

Yes, it was a dead end. I figured two handle_exit()s was a bit ugly; I assumed
you were asking about moving back to a single handle_exit()...


> I suppose enabling
> preemption could be per exit reason, but that might be hard to maintain.


>> In terms of caching this_cpu_has_cap() value, is this due to a performance
>> concern? It's all called behind 'exception_index == ARM_EXCEPTION_EL1_SERROR',
>> so we've already taken an SError out of the guest. Once it's all put together
>> we're likely to have a pending signal for user-space.
>> 'Corrected' (or at least ignorable) errors are going to be the odd one out, I
>> don't think we should worry about these!
> 
> The performance concern is having to call another function to check the
> return value again in the critical path.

My justification for this sort of thing has been that we've taken an SError; we may
panic() the host if it's uncontained. Provided there is no extra cost on the 'no
SError' path, I don't think the 'we've taken an SError' paths need to be fast.


> On older implementations this
> kind of thing is actually measureable, and there's a tendency to add a
> call here and a call there for any new aspect of the architecture, and
> it will eventually weigh things down, I believe.

I'll keep this in mind.


> On the other hand, having a "process some things before we enable preemption"
>  which is your handle_exit_early() function (could this also have been called
> handle_exit_nopreempt() ?)

Yes, and that would have been a better name!


Thanks,

James


> is a potentially generally useful thing to
> have and a reasonable thing overall.

Patch

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index b86fc4162539..acbf9ec7b396 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -238,6 +238,9 @@  int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
 
+static inline void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				     int exception_index) {}
+
 static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 84fcb2a896a1..abcfd164e690 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -347,6 +347,8 @@  void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
+void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
+		       int exception_index);
 
 int kvm_perf_init(void);
 int kvm_perf_teardown(void);
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 304203fa9e33..6a5a5db4292f 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -29,12 +29,19 @@ 
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_psci.h>
 #include <asm/debug-monitors.h>
+#include <asm/traps.h>
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
 typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
 
+static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u32 esr)
+{
+	if (!arm64_is_ras_serror(esr) || arm64_is_fatal_ras_serror(NULL, esr))
+		kvm_inject_vabt(vcpu);
+}
+
 static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	int ret;
@@ -252,7 +259,6 @@  int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	case ARM_EXCEPTION_IRQ:
 		return 1;
 	case ARM_EXCEPTION_EL1_SERROR:
-		kvm_inject_vabt(vcpu);
 		/* We may still need to return for single-step */
 		if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS)
 			&& kvm_arm_handle_step_debug(vcpu, run))
@@ -275,3 +281,13 @@  int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		return 0;
 	}
 }
+
+/* For exit types that need handling before we can be preempted */
+void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run,
+		       int exception_index)
+{
+	exception_index = ARM_EXCEPTION_CODE(exception_index);
+
+	if (exception_index == ARM_EXCEPTION_EL1_SERROR)
+		kvm_handle_guest_serror(vcpu, kvm_vcpu_get_hsr(vcpu));
+}
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 38e81631fc91..15bf026eb182 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -763,6 +763,9 @@  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		guest_exit();
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
+		/* Exit types that need handling before we can be preempted */
+		handle_exit_early(vcpu, run, ret);
+
 		preempt_enable();
 
 		ret = handle_exit(vcpu, run, ret);