diff mbox series

[2/2] xen/arm: Add defensive barrier in get_cycles for Arm64

Message ID 20210105071946.1971229-2-wei.chen@arm.com (mailing list archive)
State Superseded
Headers show
Series [1/2] xen/arm: Correct the coding style of get_cycles | expand

Commit Message

Wei Chen Jan. 5, 2021, 7:19 a.m. UTC
As the dicsussion [1] in mailing list. We'd better to have
a barrier after reading CNTPCT in get_cycles. If there is
not any barrier there. When get_cycles being used in some
seqlock critical context in the future, the seqlock can be
speculated potentially.

In order to reduce the impact of new barrier, we perfer to
use enforce order instead of ISB [2].

Currently, enforce order is not applied to arm32 as this is
not done in Linux at the date of this patch. If this is done
in Linux it will need to be also done in Xen.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg00181.html
[2] https://lkml.org/lkml/2020/3/13/645

Signed-off-by: Wei Chen <wei.chen@arm.com>
---
 xen/include/asm-arm/time.h | 43 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)
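
[Editorial sketch] The "enforce order" technique referenced above (from the Linux commit in [2]) works by feeding the counter value through an address-dependency chain ending in a dummy load, so the counter read inherits the ordering properties of a memory read and is ordered by subsequent memory barriers without paying for a full ISB. A minimal, compilable illustration is below; `demo_get_cycles` and the stand-in counter value are hypothetical, not Xen code, and on non-AArch64 builds the macro degrades to a no-op (as the patch does for arm32):

```c
#include <stdint.h>

/*
 * "eor %0, %1, %1" always produces zero, but the result formally depends
 * on the counter value, so "sp + 0" is a valid stack address whose
 * computation depends on the read.  The dummy "ldr xzr, [%0]" then makes
 * the counter read behave like a memory load for ordering purposes.
 */
#if defined(__aarch64__)
#define read_cntpct_enforce_ordering(val) do { \
    uint64_t tmp, _val = (val);                \
                                               \
    asm volatile(                              \
    "eor %0, %1, %1\n"                         \
    "add %0, sp, %0\n"                         \
    "ldr xzr, [%0]"                            \
    : "=r" (tmp) : "r" (_val));                \
} while (0)
#else
/* Other architectures: no-op, mirroring the patch's arm32 fallback. */
#define read_cntpct_enforce_ordering(val) do {} while (0)
#endif

static uint64_t demo_get_cycles(void)
{
    /* Stand-in for a CNTPCT read; the macro only consumes the value. */
    uint64_t cycles = 123456789ULL;

    read_cntpct_enforce_ordering(cycles);
    return cycles;
}
```

The macro never changes the value; it only inserts the dependent dummy load, which is far cheaper than an ISB on most microarchitectures.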

Comments

Julien Grall Jan. 5, 2021, 11:17 a.m. UTC | #1
Hi Wei,

On 05/01/2021 07:19, Wei Chen wrote:
> As the dicsussion [1] in mailing list. We'd better to have

I would say "Per the discussion [1] on the ...". I would also suggest to 
replace the "." with ",".

> a barrier after reading CNTPCT in get_cycles. If there is
> not any barrier there. When get_cycles being used in some
> seqlock critical context in the future, the seqlock can be
> speculated potentially.

This comment seems a little off because we don't have seqlock on Xen. I 
think it would be best if you re-use the Linux commit as it would be 
clear that this is a backport.

Something like:

"Import commit .... from Linux:

<entire commit message indented by one>

While we are not aware of such use in Xen, it would be best to add the 
barrier to avoid any suprise."
"

> 
> In order to reduce the impact of new barrier, we perfer to
> use enforce order instead of ISB [2].
> 
> Currently, enforce order is not applied to arm32 as this is
> not done in Linux at the date of this patch. If this is done
> in Linux it will need to be also done in Xen.
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg00181.html
> [2] https://lkml.org/lkml/2020/3/13/645
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> ---
>   xen/include/asm-arm/time.h | 43 ++++++++++++++++++++++++++++++++++++--
>   1 file changed, 41 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
> index 5c4529ebb5..3ef4e7cd3d 100644
> --- a/xen/include/asm-arm/time.h
> +++ b/xen/include/asm-arm/time.h
> @@ -11,9 +11,26 @@
>   
>   typedef uint64_t cycles_t;
>   
> -static inline cycles_t get_cycles(void)
> +/*
> + * Ensure that reads of the counter are treated the same as memory reads
> + * for the purposes of ordering by subsequent memory barriers.
> + */

The comment is before the #ifdef which suggests the ordering is required 
for Arm as well. I would suggest to either mention that oddity or move 
the comment after the #ifdef.

> +#if defined(CONFIG_ARM_64)
> +#define read_cntpct_enforce_ordering(val) do { \
> +    u64 tmp, _val = (val);                     \

Please use uint64_t here.

> +                                               \
> +    asm volatile(                              \
> +    "eor %0, %1, %1\n"                         \
> +    "add %0, sp, %0\n"                         \
> +    "ldr xzr, [%0]"                            \
> +    : "=r" (tmp) : "r" (_val));                \
> +} while (0)
> +#else
> +#define read_cntpct_enforce_ordering(val) do {} while (0)
> +#endif
> +
> +static inline cycles_t read_cntpct_stable(void)
>   {
> -    isb();
>       /*
>        * ARM_WORKAROUND_858921: Cortex-A73 (all versions) counter read
>        * can return a wrong value when the counter crosses a 32bit boundary.
> @@ -36,6 +53,28 @@ static inline cycles_t get_cycles(void)
>       }
>   }
>   
> +static inline cycles_t get_cycles(void)
> +{
> +    cycles_t cnt;
> +
> +    isb();
> +    cnt = read_cntpct_stable();
> +
> +    /*
> +     * If there is not any barrier here. When get_cycles being used in
> +     * some seqlock critical context in the future, the seqlock can be
> +     * speculated potentially.
> +     *
> +     * To prevent seqlock from being speculated silently, we add a barrier
> +     * here defensively. Normally, we just need an ISB here is enough, but
> +     * considering the minimum performance cost. We prefer to use enforce
> +     * order here.
> +     */

We don't use seqlock in Xen, so this comment looks quite confusing.. I 
think the comment on top of reach_cntpct_enforce_ordering() is 
sufficient, so I would drop this one. The alternative is to find a way 
to make the comment more Xen focused.

Although, I don't have a good suggestion so far.

> +    read_cntpct_enforce_ordering(cnt);
> +
> +    return cnt;
> +}
> +
>   /* List of timer's IRQ */
>   enum timer_ppi
>   {
> 

Cheers,
Wei Chen Jan. 5, 2021, 1:45 p.m. UTC | #2
Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Jan. 5, 2021 19:17
> To: Wei Chen <Wei.Chen@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Penny Zheng
> <Penny.Zheng@arm.com>; Jiamei Xie <Jiamei.Xie@arm.com>; nd
> <nd@arm.com>
> Subject: Re: [PATCH 2/2] xen/arm: Add defensive barrier in get_cycles for Arm64
> 
> Hi Wei,
> 
> On 05/01/2021 07:19, Wei Chen wrote:
> > As the dicsussion [1] in mailing list. We'd better to have
> 
> I would say "Per the discussion [1] on the ...". I would also suggest to
> replace the "." with ",".
> 
> > a barrier after reading CNTPCT in get_cycles. If there is
> > not any barrier there. When get_cycles being used in some
> > seqlock critical context in the future, the seqlock can be
> > speculated potentially.
> 
> This comment seems a little off because we don't have seqlock on Xen. I
> think it would be best if you re-use the Linux commit as it would be
> clear that this is a backport.
> 
> Something like:
> 
> "Import commit .... from Linux:
> 
> <entire commit message indented by one>
> 
> While we are not aware of such use in Xen, it would be best to add the
> barrier to avoid any suprise."
> "
> 

Yes, that would be better. I will add it in next version. Thanks.

> >
> > In order to reduce the impact of new barrier, we perfer to
> > use enforce order instead of ISB [2].
> >
> > Currently, enforce order is not applied to arm32 as this is
> > not done in Linux at the date of this patch. If this is done
> > in Linux it will need to be also done in Xen.
> >
> > [1] https://lists.xenproject.org/archives/html/xen-devel/2020-
> 12/msg00181.html
> > [2] https://lkml.org/lkml/2020/3/13/645
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > ---
> >   xen/include/asm-arm/time.h | 43
> ++++++++++++++++++++++++++++++++++++--
> >   1 file changed, 41 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
> > index 5c4529ebb5..3ef4e7cd3d 100644
> > --- a/xen/include/asm-arm/time.h
> > +++ b/xen/include/asm-arm/time.h
> > @@ -11,9 +11,26 @@
> >
> >   typedef uint64_t cycles_t;
> >
> > -static inline cycles_t get_cycles(void)
> > +/*
> > + * Ensure that reads of the counter are treated the same as memory reads
> > + * for the purposes of ordering by subsequent memory barriers.
> > + */
> 
> The comment is before the #ifdef which suggests the ordering is required
> for Arm as well. I would suggest to either mention that oddity or move
> the comment after the #ifdef.
> 
> > +#if defined(CONFIG_ARM_64)
> > +#define read_cntpct_enforce_ordering(val) do { \
> > +    u64 tmp, _val = (val);                     \
> 
> Please use uint64_t here.

Got it.

> 
> > +                                               \
> > +    asm volatile(                              \
> > +    "eor %0, %1, %1\n"                         \
> > +    "add %0, sp, %0\n"                         \
> > +    "ldr xzr, [%0]"                            \
> > +    : "=r" (tmp) : "r" (_val));                \
> > +} while (0)
> > +#else
> > +#define read_cntpct_enforce_ordering(val) do {} while (0)
> > +#endif
> > +
> > +static inline cycles_t read_cntpct_stable(void)
> >   {
> > -    isb();
> >       /*
> >        * ARM_WORKAROUND_858921: Cortex-A73 (all versions) counter read
> >        * can return a wrong value when the counter crosses a 32bit boundary.
> > @@ -36,6 +53,28 @@ static inline cycles_t get_cycles(void)
> >       }
> >   }
> >
> > +static inline cycles_t get_cycles(void)
> > +{
> > +    cycles_t cnt;
> > +
> > +    isb();
> > +    cnt = read_cntpct_stable();
> > +
> > +    /*
> > +     * If there is not any barrier here. When get_cycles being used in
> > +     * some seqlock critical context in the future, the seqlock can be
> > +     * speculated potentially.
> > +     *
> > +     * To prevent seqlock from being speculated silently, we add a barrier
> > +     * here defensively. Normally, we just need an ISB here is enough, but
> > +     * considering the minimum performance cost. We prefer to use enforce
> > +     * order here.
> > +     */
> 
> We don't use seqlock in Xen, so this comment looks quite confusing.. I
> think the comment on top of reach_cntpct_enforce_ordering() is
> sufficient, so I would drop this one. The alternative is to find a way
> to make the comment more Xen focused.
> 
> Although, I don't have a good suggestion so far.
> 

Ok, I will drop it.

> > +    read_cntpct_enforce_ordering(cnt);
> > +
> > +    return cnt;
> > +}
> > +
> >   /* List of timer's IRQ */
> >   enum timer_ppi
> >   {
> >
> 
> Cheers,
> 
> --
> Julien Grall

Patch

diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
index 5c4529ebb5..3ef4e7cd3d 100644
--- a/xen/include/asm-arm/time.h
+++ b/xen/include/asm-arm/time.h
@@ -11,9 +11,26 @@ 
 
 typedef uint64_t cycles_t;
 
-static inline cycles_t get_cycles(void)
+/*
+ * Ensure that reads of the counter are treated the same as memory reads
+ * for the purposes of ordering by subsequent memory barriers.
+ */
+#if defined(CONFIG_ARM_64)
+#define read_cntpct_enforce_ordering(val) do { \
+    u64 tmp, _val = (val);                     \
+                                               \
+    asm volatile(                              \
+    "eor %0, %1, %1\n"                         \
+    "add %0, sp, %0\n"                         \
+    "ldr xzr, [%0]"                            \
+    : "=r" (tmp) : "r" (_val));                \
+} while (0)
+#else
+#define read_cntpct_enforce_ordering(val) do {} while (0)
+#endif
+
+static inline cycles_t read_cntpct_stable(void)
 {
-    isb();
     /*
      * ARM_WORKAROUND_858921: Cortex-A73 (all versions) counter read
      * can return a wrong value when the counter crosses a 32bit boundary.
@@ -36,6 +53,28 @@  static inline cycles_t get_cycles(void)
     }
 }
 
+static inline cycles_t get_cycles(void)
+{
+    cycles_t cnt;
+
+    isb();
+    cnt = read_cntpct_stable();
+
+    /*
+     * If there is not any barrier here. When get_cycles being used in
+     * some seqlock critical context in the future, the seqlock can be
+     * speculated potentially.
+     *
+     * To prevent seqlock from being speculated silently, we add a barrier
+     * here defensively. Normally, we just need an ISB here is enough, but
+     * considering the minimum performance cost. We prefer to use enforce
+     * order here.
+     */
+    read_cntpct_enforce_ordering(cnt);
+
+    return cnt;
+}
+
 /* List of timer's IRQ */
 enum timer_ppi
 {
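
[Editorial sketch] The hunk above only shows the comment for ARM_WORKAROUND_858921; the workaround itself (as done in Linux's arch_timer driver, which this code mirrors) reads the counter twice back-to-back and, if bit 32 differs between the two samples, trusts the first one, since a differing bit 32 means the reads straddled the 32-bit boundary where the erratum can corrupt a sample. A self-contained sketch with a simulated counter (the `fake_reads` stub and function names are illustrative, not Xen code):

```c
#include <stdint.h>

/*
 * Stub for a raw CNTPCT read; real code would use a sysreg access.
 * The two values simulate back-to-back reads straddling a 32-bit
 * boundary, the case Cortex-A73 erratum 858921 can corrupt.
 */
static uint64_t fake_reads[2] = { 0xFFFFFFFFULL, 0x100000000ULL };
static int fake_idx;

static uint64_t read_cntpct_raw(void)
{
    return fake_reads[fake_idx++ & 1];
}

/*
 * Read twice; if bit 32 flipped between the samples, the reads
 * straddled the boundary, so return the first (pre-rollover) sample.
 */
static uint64_t demo_read_cntpct_stable(void)
{
    uint64_t old = read_cntpct_raw();
    uint64_t new = read_cntpct_raw();

    return (((old ^ new) >> 32) & 1) ? old : new;
}
```

With the simulated values above, bit 32 differs between the samples, so the first read (0xFFFFFFFF) is returned.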