
[v8,11/11] KVM: arm64: Handle protected guests at 32 bits

Message ID 20211010145636.1950948-12-tabba@google.com (mailing list archive)
State New, archived
Series KVM: arm64: Fixed features for protected VMs

Commit Message

Fuad Tabba Oct. 10, 2021, 2:56 p.m. UTC
Protected KVM does not support protected AArch32 guests. However,
it is possible for the guest to force itself to run in AArch32,
potentially causing problems. Add an extra check so that if the
hypervisor catches the guest doing that, it can prevent the guest
from running again by resetting vcpu->arch.target and returning
ARM_EXCEPTION_IL.

If this were to happen, the VMM can try to fix it by re-initializing
the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible
for protected VMs.

Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
AArch32 systems")

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/nvhe/switch.c | 34 ++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
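
For context, a rough VMM-side sketch of the recovery path mentioned in the
commit message: once the hypervisor has reset vcpu->arch.target, KVM_RUN
refuses to run the vcpu until it is re-initialized. The helper names and
file descriptors below are illustrative only; KVM_ARM_PREFERRED_TARGET,
KVM_ARM_VCPU_INIT and struct kvm_vcpu_init are the standard KVM UAPI.
Re-initialization must use the same target and features as the original
call, and for a protected VM it is likely not possible at all.

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Illustrative only: initial vcpu setup, keeping the init data around. */
static int vcpu_first_init(int vm_fd, int vcpu_fd, struct kvm_vcpu_init *init)
{
	memset(init, 0, sizeof(*init));
	/* Ask KVM for the preferred target CPU type for this host. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, init) < 0)
		return -1;
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
}

/* Illustrative only: attempt the recovery described in the commit message. */
static int vcpu_try_recover(int vcpu_fd, const struct kvm_vcpu_init *init)
{
	/* Re-issuing the original init resets the vcpu to its initial state. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
}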

Comments

Marc Zyngier Oct. 11, 2021, 1:11 p.m. UTC | #1
On Sun, 10 Oct 2021 15:56:36 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Protected KVM does not support protected AArch32 guests. However,
> it is possible for the guest to force itself to run in AArch32,
> potentially causing problems. Add an extra check so that if the
> hypervisor catches the guest doing that, it can prevent the guest
> from running again by resetting vcpu->arch.target and returning
> ARM_EXCEPTION_IL.
> 
> If this were to happen, the VMM can try to fix it by re-initializing
> the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible
> for protected VMs.
> 
> Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> AArch32 systems")
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/switch.c | 34 ++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index 2c72c31e516e..f25b6353a598 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -232,6 +232,37 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
>  	return hyp_exit_handlers;
>  }
>  
> +/*
> + * Some guests (e.g., protected VMs) are not allowed to run in AArch32.
> + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
> + * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> + * hypervisor spots a guest in such a state, ensure it is handled, and don't
> + * trust the host to spot or fix it.  The check below is based on the one in
> + * kvm_arch_vcpu_ioctl_run().
> + *
> + * Returns false if the guest ran in AArch32 when it shouldn't have, and
> + * thus should exit to the host, or true if the guest run loop can continue.
> + */
> +static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> +	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> +
> +	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
> +		/*
> +		 * As we have caught the guest red-handed, decide that it isn't
> +		 * fit for purpose anymore by making the vcpu invalid. The VMM
> +		 * can try and fix it by re-initializing the vcpu with
> +		 * KVM_ARM_VCPU_INIT, however, this is likely not possible for
> +		 * protected VMs.
> +		 */
> +		vcpu->arch.target = -1;
> +		*exit_code = ARM_EXCEPTION_IL;

Aren't we losing a potential SError here, which the original commit
doesn't need to handle? I'd expect something like:

		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
		*exit_code |= ARM_EXCEPTION_IL;
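
For illustration, a sketch of how handle_aarch32_guest() could look with
that masking folded in — same names as in the patch above, not the final
applied change:

static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);

	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
		/* Invalidate the vcpu; the VMM may retry KVM_ARM_VCPU_INIT. */
		vcpu->arch.target = -1;

		/*
		 * Keep a pending SError flag that __guest_enter() may have
		 * set in the exit code, rather than overwriting it, and
		 * request an illegal-exception-return exit to the host.
		 */
		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
		*exit_code |= ARM_EXCEPTION_IL;
		return false;
	}

	return true;
}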

> +		return false;
> +	}
> +
> +	return true;
> +}
> +
>  /* Switch to the guest for legacy non-VHE systems */
>  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  {
> @@ -294,6 +325,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
>  		/* Jump in the fire! */
>  		exit_code = __guest_enter(vcpu);
>  
> +		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
> +			break;
> +
>  		/* And we're baaack! */
>  	} while (fixup_guest_exit(vcpu, &exit_code));
>  

Thanks,

	M.
Fuad Tabba Oct. 11, 2021, 1:36 p.m. UTC | #2
Hi Marc,

On Mon, Oct 11, 2021 at 2:11 PM Marc Zyngier <maz@kernel.org> wrote:
>
> On Sun, 10 Oct 2021 15:56:36 +0100,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Protected KVM does not support protected AArch32 guests. However,
> > it is possible for the guest to force itself to run in AArch32,
> > potentially causing problems. Add an extra check so that if the
> > hypervisor catches the guest doing that, it can prevent the guest
> > from running again by resetting vcpu->arch.target and returning
> > ARM_EXCEPTION_IL.
> >
> > If this were to happen, the VMM can try to fix it by re-initializing
> > the vcpu with KVM_ARM_VCPU_INIT; however, this is likely not possible
> > for protected VMs.
> >
> > Adapted from commit 22f553842b14 ("KVM: arm64: Handle Asymmetric
> > AArch32 systems")
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/kvm/hyp/nvhe/switch.c | 34 ++++++++++++++++++++++++++++++++
> >  1 file changed, 34 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> > index 2c72c31e516e..f25b6353a598 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> > @@ -232,6 +232,37 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
> >       return hyp_exit_handlers;
> >  }
> >
> > +/*
> > + * Some guests (e.g., protected VMs) are not allowed to run in AArch32.
> > + * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
> > + * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
> > + * hypervisor spots a guest in such a state, ensure it is handled, and don't
> > + * trust the host to spot or fix it.  The check below is based on the one in
> > + * kvm_arch_vcpu_ioctl_run().
> > + *
> > + * Returns false if the guest ran in AArch32 when it shouldn't have, and
> > + * thus should exit to the host, or true if the guest run loop can continue.
> > + */
> > +static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
> > +{
> > +     struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> > +
> > +     if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
> > +             /*
> > +              * As we have caught the guest red-handed, decide that it isn't
> > +              * fit for purpose anymore by making the vcpu invalid. The VMM
> > +              * can try and fix it by re-initializing the vcpu with
> > +              * KVM_ARM_VCPU_INIT, however, this is likely not possible for
> > +              * protected VMs.
> > +              */
> > +             vcpu->arch.target = -1;
> > +             *exit_code = ARM_EXCEPTION_IL;
>
> Aren't we losing a potential SError here, which the original commit
> doesn't need to handle? I'd expect something like:
>
>                 *exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
>                 *exit_code |= ARM_EXCEPTION_IL;

Yes, you're right. That would ensure the SError is preserved.

Thanks,
/fuad


> > +             return false;
> > +     }
> > +
> > +     return true;
> > +}
> > +
> >  /* Switch to the guest for legacy non-VHE systems */
> >  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >  {
> > @@ -294,6 +325,9 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
> >               /* Jump in the fire! */
> >               exit_code = __guest_enter(vcpu);
> >
> > +             if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
> > +                     break;
> > +
> >               /* And we're baaack! */
> >       } while (fixup_guest_exit(vcpu, &exit_code));
> >
>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
Marc Zyngier Oct. 13, 2021, 12:03 p.m. UTC | #3
This is an update on Fuad's series[1].

Instead of going back and forth over a series that has seen a
fair few versions, I've opted for simply writing a set of fixes on
top, hopefully greatly simplifying the handling of most registers, and
moving things around to suit my own taste (just because I can).

I won't be reposting the initial 11 patches, which is why this series
is in reply to patch 11.

Thanks,

	M.

[1] https://lore.kernel.org/r/20211010145636.1950948-1-tabba@google.com

Fuad Tabba (8):
  KVM: arm64: Pass struct kvm to per-EC handlers
  KVM: arm64: Add missing field descriptor for MDCR_EL2
  KVM: arm64: Simplify masking out MTE in feature id reg
  KVM: arm64: Add handlers for protected VM System Registers
  KVM: arm64: Initialize trap registers for protected VMs
  KVM: arm64: Move sanitized copies of CPU features
  KVM: arm64: Trap access to pVM restricted features
  KVM: arm64: Handle protected guests at 32 bits

Marc Zyngier (14):
  KVM: arm64: Move __get_fault_info() and co into their own include file
  KVM: arm64: Don't include switch.h into nvhe/kvm-main.c
  KVM: arm64: Move early handlers to per-EC handlers
  KVM: arm64: Fix early exit ptrauth handling
  KVM: arm64: pkvm: Use a single function to expose all id-regs
  KVM: arm64: pkvm: Make the ERR/ERX*_EL1 registers RAZ/WI
  KVM: arm64: pkvm: Drop AArch32-specific registers
  KVM: arm64: pkvm: Drop sysregs that should never be routed to the host
  KVM: arm64: pkvm: Handle GICv3 traps as required
  KVM: arm64: pkvm: Preserve pending SError on exit from AArch32
  KVM: arm64: pkvm: Consolidate include files
  KVM: arm64: pkvm: Move kvm_handle_pvm_restricted around
  KVM: arm64: pkvm: Pass vpcu instead of kvm to
    kvm_get_exit_handler_array()
  KVM: arm64: pkvm: Give priority to standard traps over pvm handling

 arch/arm64/include/asm/kvm_arm.h              |   1 +
 arch/arm64/include/asm/kvm_asm.h              |   1 +
 arch/arm64/include/asm/kvm_host.h             |   2 +
 arch/arm64/include/asm/kvm_hyp.h              |   5 +
 arch/arm64/kvm/arm.c                          |  13 +
 arch/arm64/kvm/hyp/include/hyp/fault.h        |  75 +++
 arch/arm64/kvm/hyp/include/hyp/switch.h       | 235 ++++-----
 .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 200 +++++++
 .../arm64/kvm/hyp/include/nvhe/trap_handler.h |   2 +
 arch/arm64/kvm/hyp/nvhe/Makefile              |   2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |  11 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |   8 +-
 arch/arm64/kvm/hyp/nvhe/pkvm.c                | 185 +++++++
 arch/arm64/kvm/hyp/nvhe/setup.c               |   3 +
 arch/arm64/kvm/hyp/nvhe/switch.c              |  99 ++++
 arch/arm64/kvm/hyp/nvhe/sys_regs.c            | 487 ++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/switch.c               |  16 +
 arch/arm64/kvm/sys_regs.c                     |  10 +-
 18 files changed, 1200 insertions(+), 155 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/hyp/fault.h
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/pkvm.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
Fuad Tabba Oct. 18, 2021, 9:51 a.m. UTC | #4
Hi Marc,

On Wed, Oct 13, 2021 at 1:04 PM Marc Zyngier <maz@kernel.org> wrote:
>
> This is an update on Fuad's series[1].
>
> Instead of going back and forth over a series that has seen a
> fair few versions, I've opted for simply writing a set of fixes on
> top, hopefully greatly simplifying the handling of most registers, and
> moving things around to suit my own taste (just because I can).
>
> I won't be reposting the initial 11 patches, which is why this series
> is in reply to patch 11.

Thanks for this series. I've reviewed it, built it, and tested it with a
dummy protected VM (since we don't have proper protected VMs yet),
which initializes some of the relevant protected-VM metadata as well
as its control registers. So fwiw:

Reviewed-by: Fuad Tabba <tabba@google.com>

And to whatever extent possible at this stage:
Tested-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad





> Thanks,
>
>         M.
>
> [1] https://lore.kernel.org/r/20211010145636.1950948-1-tabba@google.com
>
> Fuad Tabba (8):
>   KVM: arm64: Pass struct kvm to per-EC handlers
>   KVM: arm64: Add missing field descriptor for MDCR_EL2
>   KVM: arm64: Simplify masking out MTE in feature id reg
>   KVM: arm64: Add handlers for protected VM System Registers
>   KVM: arm64: Initialize trap registers for protected VMs
>   KVM: arm64: Move sanitized copies of CPU features
>   KVM: arm64: Trap access to pVM restricted features
>   KVM: arm64: Handle protected guests at 32 bits
>
> Marc Zyngier (14):
>   KVM: arm64: Move __get_fault_info() and co into their own include file
>   KVM: arm64: Don't include switch.h into nvhe/kvm-main.c
>   KVM: arm64: Move early handlers to per-EC handlers
>   KVM: arm64: Fix early exit ptrauth handling
>   KVM: arm64: pkvm: Use a single function to expose all id-regs
>   KVM: arm64: pkvm: Make the ERR/ERX*_EL1 registers RAZ/WI
>   KVM: arm64: pkvm: Drop AArch32-specific registers
>   KVM: arm64: pkvm: Drop sysregs that should never be routed to the host
>   KVM: arm64: pkvm: Handle GICv3 traps as required
>   KVM: arm64: pkvm: Preserve pending SError on exit from AArch32
>   KVM: arm64: pkvm: Consolidate include files
>   KVM: arm64: pkvm: Move kvm_handle_pvm_restricted around
>   KVM: arm64: pkvm: Pass vpcu instead of kvm to
>     kvm_get_exit_handler_array()
>   KVM: arm64: pkvm: Give priority to standard traps over pvm handling
>
>  arch/arm64/include/asm/kvm_arm.h              |   1 +
>  arch/arm64/include/asm/kvm_asm.h              |   1 +
>  arch/arm64/include/asm/kvm_host.h             |   2 +
>  arch/arm64/include/asm/kvm_hyp.h              |   5 +
>  arch/arm64/kvm/arm.c                          |  13 +
>  arch/arm64/kvm/hyp/include/hyp/fault.h        |  75 +++
>  arch/arm64/kvm/hyp/include/hyp/switch.h       | 235 ++++-----
>  .../arm64/kvm/hyp/include/nvhe/fixed_config.h | 200 +++++++
>  .../arm64/kvm/hyp/include/nvhe/trap_handler.h |   2 +
>  arch/arm64/kvm/hyp/nvhe/Makefile              |   2 +-
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c            |  11 +-
>  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |   8 +-
>  arch/arm64/kvm/hyp/nvhe/pkvm.c                | 185 +++++++
>  arch/arm64/kvm/hyp/nvhe/setup.c               |   3 +
>  arch/arm64/kvm/hyp/nvhe/switch.c              |  99 ++++
>  arch/arm64/kvm/hyp/nvhe/sys_regs.c            | 487 ++++++++++++++++++
>  arch/arm64/kvm/hyp/vhe/switch.c               |  16 +
>  arch/arm64/kvm/sys_regs.c                     |  10 +-
>  18 files changed, 1200 insertions(+), 155 deletions(-)
>  create mode 100644 arch/arm64/kvm/hyp/include/hyp/fault.h
>  create mode 100644 arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/pkvm.c
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/sys_regs.c
>
> --
> 2.30.2
>
Andrew Jones Oct. 18, 2021, 10:45 a.m. UTC | #5
On Mon, Oct 18, 2021 at 10:51:54AM +0100, Fuad Tabba wrote:
> Hi Marc,
> 
> On Wed, Oct 13, 2021 at 1:04 PM Marc Zyngier <maz@kernel.org> wrote:
> >
> > This is an update on Fuad's series[1].
> >
> > Instead of going back and forth over a series that has seen a
> > fair few versions, I've opted for simply writing a set of fixes on
> > top, hopefully greatly simplifying the handling of most registers, and
> > moving things around to suit my own taste (just because I can).
> >
> > I won't be reposting the initial 11 patches, which is why this series
> > is in reply to patch 11.
> 
> Thanks for this series. I've reviewed, built it, and tested it with a
> dummy protected VM (since we don't have proper protected VMs yet),
> which initializes some of the relevant protected VMs metadata as well
> as its control registers. So fwiw:
> 
> Reviewed-by: Fuad Tabba <tabba@google.com>
> 
> And to whatever extent possible at this stage:
> Tested-by: Fuad Tabba <tabba@google.com>
>

Hi Fuad,

Out of curiosity, when testing pKVM, what VMM do you use? Also, can you
describe what a "dummy pVM" is? Is it a just pVM which is not actually
protected? How similar is a pVM to a typical VIRTIO-using VM? Actually,
maybe I should just ask if there are instructions for playing with pKVM
somewhere that I could get a pointer to.

Thanks,
drew
Fuad Tabba Oct. 18, 2021, 12:33 p.m. UTC | #6
Hi,

On Mon, Oct 18, 2021 at 11:45 AM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Oct 18, 2021 at 10:51:54AM +0100, Fuad Tabba wrote:
> > Hi Marc,
> >
> > On Wed, Oct 13, 2021 at 1:04 PM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > This is an update on Fuad's series[1].
> > >
> > > Instead of going back and forth over a series that has seen a
> > > fair few versions, I've opted for simply writing a set of fixes on
> > > top, hopefully greatly simplifying the handling of most registers, and
> > > moving things around to suit my own taste (just because I can).
> > >
> > > I won't be reposting the initial 11 patches, which is why this series
> > > is in reply to patch 11.
> >
> > Thanks for this series. I've reviewed, built it, and tested it with a
> > dummy protected VM (since we don't have proper protected VMs yet),
> > which initializes some of the relevant protected VMs metadata as well
> > as its control registers. So fwiw:
> >
> > Reviewed-by: Fuad Tabba <tabba@google.com>
> >
> > And to whatever extent possible at this stage:
> > Tested-by: Fuad Tabba <tabba@google.com>
> >
>
> Hi Fuad,
>
> Out of curiosity, when testing pKVM, what VMM do you use? Also, can you
> describe what a "dummy pVM" is? Is it a just pVM which is not actually
> protected? How similar is a pVM to a typical VIRTIO-using VM? Actually,
> maybe I should just ask if there are instructions for playing with pKVM
> somewhere that I could get a pointer to.

Considering the WIP state of pKVM, my setup is hacky and not that
stable. I use QEMU, along with Will's pKVM user ABI patches [*] and a
couple of hacks added on top to run a normal VM with the protected
codepath applied to it, to be able to do some testing and sanity
checking. There isn't really any proper way of playing with protected
VMs yet.

Thanks,
/fuad

[*] https://lore.kernel.org/kvmarm/20210603183347.1695-1-will@kernel.org/

> Thanks,
> drew
>
Marc Zyngier Oct. 18, 2021, 4:37 p.m. UTC | #7
On Wed, 13 Oct 2021 13:03:35 +0100, Marc Zyngier wrote:
> This is an update on Fuad's series[1].
> 
> Instead of going back and forth over a series that has seen a
> fair few versions, I've opted for simply writing a set of fixes on
> top, hopefully greatly simplifying the handling of most registers, and
> moving things around to suit my own taste (just because I can).
> 
> [...]

Applied to next, thanks!

[12/22] KVM: arm64: Fix early exit ptrauth handling
        commit: 8a049862c38f0c78b0e01ab5d36db1bffc832675
[13/22] KVM: arm64: pkvm: Use a single function to expose all id-regs
        commit: ce75916749b8cb5ec795f1157a5c426f6765a48c
[14/22] KVM: arm64: pkvm: Make the ERR/ERX*_EL1 registers RAZ/WI
        commit: 8ffb41888334c1247bd9b4d6ff6c092a90e8d0b8
[15/22] KVM: arm64: pkvm: Drop AArch32-specific registers
        commit: 3c90cb15e2e66bcc526d25133747b2af747f6cd8
[16/22] KVM: arm64: pkvm: Drop sysregs that should never be routed to the host
        commit: f3d5ccabab20c1be5838831f460f320a12e5e2c9
[17/22] KVM: arm64: pkvm: Handle GICv3 traps as required
        commit: cbca19738472be8156d854663ed724b01255c932
[18/22] KVM: arm64: pkvm: Preserve pending SError on exit from AArch32
        commit: 271b7286058da636ab6f5f47722e098ca3a0478b
[19/22] KVM: arm64: pkvm: Consolidate include files
        commit: 3061725d162cad0589b012fc6413c9dd0da8f02a
[20/22] KVM: arm64: pkvm: Move kvm_handle_pvm_restricted around
        commit: 746bdeadc53b0d58fddea6442591f5ec3eeabe7d
[21/22] KVM: arm64: pkvm: Pass vpcu instead of kvm to kvm_get_exit_handler_array()
        commit: 0c7639cc838263b6e38b3af76755d574f15cdf41
[22/22] KVM: arm64: pkvm: Give priority to standard traps over pvm handling
        commit: 07305590114af81817148d181f1eb0af294e40d6

Cheers,

	M.

Patch

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 2c72c31e516e..f25b6353a598 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -232,6 +232,37 @@  static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm *kvm)
 	return hyp_exit_handlers;
 }
 
+/*
+ * Some guests (e.g., protected VMs) are not allowed to run in AArch32.
+ * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
+ * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
+ * hypervisor spots a guest in such a state, ensure it is handled, and don't
+ * trust the host to spot or fix it.  The check below is based on the one in
+ * kvm_arch_vcpu_ioctl_run().
+ *
+ * Returns false if the guest ran in AArch32 when it shouldn't have, and
+ * thus should exit to the host, or true if the guest run loop can continue.
+ */
+static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+
+	if (kvm_vm_is_protected(kvm) && vcpu_mode_is_32bit(vcpu)) {
+		/*
+		 * As we have caught the guest red-handed, decide that it isn't
+		 * fit for purpose anymore by making the vcpu invalid. The VMM
+		 * can try and fix it by re-initializing the vcpu with
+		 * KVM_ARM_VCPU_INIT, however, this is likely not possible for
+		 * protected VMs.
+		 */
+		vcpu->arch.target = -1;
+		*exit_code = ARM_EXCEPTION_IL;
+		return false;
+	}
+
+	return true;
+}
+
 /* Switch to the guest for legacy non-VHE systems */
 int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 {
@@ -294,6 +325,9 @@  int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);
 
+		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
+			break;
+
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));