
[v4,16/19] xen/arm: Introduce a macro to synchronize SError

Message ID 1491383361-22886-17-git-send-email-Wei.Chen@arm.com (mailing list archive)
State New, archived

Commit Message

Wei Chen April 5, 2017, 9:09 a.m. UTC
In previous patches, we have provided the ability to synchronize
SErrors in the exception entries. But we haven't synchronized SErrors
while returning to the guest or while doing a context switch.

So we still have two risks:
1. Slipping a hypervisor SError to the guest. For example, the
   hypervisor triggers an SError while returning to the guest, but the
   SError may only be delivered after we have entered the guest. With
   the "DIVERSE" option, this SError would be routed to the guest and
   panic the guest, when in fact we should crash the whole system,
   because it is a hypervisor SError.
2. Slipping a previous guest's SError to the next guest. With the
   "FORWARD" option, if the hypervisor triggers an SError while context
   switching, the SError may only be delivered after switching to the
   next vCPU. In this case, the SError will be forwarded to the next
   vCPU and may panic the wrong guest.

So we have to introduce this macro to synchronize SErrors while
returning to the guest and while doing a context switch. In this
macro, we use an ASSERT to make sure aborts are unmasked: we unmasked
aborts in the exception entries, but we don't know whether someone
will mask them again in the future.

We also add a compiler barrier (the "memory" clobber) to this macro
to prevent the compiler from reordering memory accesses around our
asm volatile code.

Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/include/asm-arm/processor.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)
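
As a usage sketch (hypothetical: the real call sites and capability
names are added by later patches in this series; ctxt_switch_from()
and SKIP_CTXT_SWITCH_SERROR_SYNC below are illustrative assumptions),
the macro is meant to be placed just before control leaves the
current context:

    /*
     * Illustrative sketch only: the call site and the capability name
     * are assumptions, not part of this patch.
     */
    static void ctxt_switch_from(struct vcpu *p)
    {
        /* ... save the outgoing vCPU's state ... */

        /*
         * Take any SError triggered by the hypervisor while saving
         * state now, so it cannot be delivered after the switch and
         * be blamed on the wrong guest.
         */
        SYNCHRONIZE_SERROR(SKIP_CTXT_SWITCH_SERROR_SYNC);
    }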

Comments

Julien Grall April 5, 2017, 11:15 a.m. UTC | #1
Hi Wei,

On 05/04/17 10:09, Wei Chen wrote:
> [...]
>
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index bb24bee..0ed6cac 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -723,6 +723,19 @@ void abort_guest_exit_end(void);
>      ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
>  )
>
> +/*
> + * Synchronize SErrors unless the feature is selected.
> + * This relies on SErrors being currently unmasked.
> + */
> +#define SYNCHRONIZE_SERROR(feat)                                  \
> +    do {                                                          \
> +        ASSERT(!cpus_have_cap(feat) || local_abort_is_enabled()); \
> +        ASSERT(local_abort_is_enabled());                         \

Only one of the ASSERTs is needed here; the second one already implies
the first. I am easy on which one to keep.

> +        asm volatile(ALTERNATIVE("dsb sy; isb",                   \
> +                                 "nop; nop", feat)                \
> +                                 : : : "memory");                 \
> +    } while (0)
> +
>  #endif /* __ASSEMBLY__ */
>  #endif /* __ASM_ARM_PROCESSOR_H */
>  /*
>

Cheers,

Patch

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index bb24bee..0ed6cac 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -723,6 +723,19 @@ void abort_guest_exit_end(void);
     ( (unsigned long)abort_guest_exit_end == (r)->pc ) \
 )
 
+/*
+ * Synchronize SErrors unless the feature is selected.
+ * This relies on SErrors being currently unmasked.
+ */
+#define SYNCHRONIZE_SERROR(feat)                                  \
+    do {                                                          \
+        ASSERT(!cpus_have_cap(feat) || local_abort_is_enabled()); \
+        ASSERT(local_abort_is_enabled());                         \
+        asm volatile(ALTERNATIVE("dsb sy; isb",                   \
+                                 "nop; nop", feat)                \
+                                 : : : "memory");                 \
+    } while (0)
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_ARM_PROCESSOR_H */
 /*
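
For reference, the synchronization sequence the macro emits when the
"skip" capability is not set, written out as a standalone helper (a
minimal sketch, not code from this patch):

    /*
     * "dsb sy" drains all outstanding memory transactions, so any
     * pending asynchronous abort they raise becomes pending now;
     * "isb" then context-synchronizes, making the SError be taken
     * here rather than at some later, unrelated instruction. The
     * "memory" clobber is the compiler barrier mentioned in the
     * commit message: it stops the compiler from moving memory
     * accesses across the asm block.
     */
    static inline void synchronize_serror(void)
    {
        asm volatile("dsb sy; isb" : : : "memory");
    }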