xen/wait: Describe RSB safety

Message ID: 20220805103840.23796-1-andrew.cooper3@citrix.com
State: New, archived
Series: xen/wait: Describe RSB safety

Commit Message

Andrew Cooper Aug. 5, 2022, 10:38 a.m. UTC
It turns out that we do in fact have RSB safety here, but not for obvious
reasons.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/common/wait.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
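
For readers coming to the thread cold: the RSB (Return Stack Buffer) is a small hardware LIFO used to predict RET targets; CALL pushes an entry and RET pops one. The toy model below is purely illustrative (neither real hardware behaviour nor Xen code) and exists only to make "underflow" concrete.

    /*
     * Toy model of a Return Stack Buffer.  Purely illustrative: CALL
     * pushes a return address, RET pops one.  Executing more RETs than
     * CALLs drains the buffer; the next RET then "underflows", and the
     * CPU must fall back to other, attacker-trainable predictors -- the
     * hazard the patch's comment reasons about.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RSB_ENTRIES 16                /* typical depths are 16-32 */

    static uint64_t rsb[RSB_ENTRIES];
    static unsigned int rsb_depth;

    static void model_call(uint64_t ret_addr)
    {
        /*
         * A full buffer just means entries get lost (real hardware wraps
         * around); either way, no misprediction hazard is created.
         */
        if ( rsb_depth < RSB_ENTRIES )
            rsb[rsb_depth++] = ret_addr;
    }

    static bool model_ret(uint64_t *prediction)
    {
        if ( rsb_depth == 0 )
            return false;                 /* underflow: nothing to predict with */
        *prediction = rsb[--rsb_depth];
        return true;
    }

    int main(void)
    {
        uint64_t target;

        model_call(0x1000);               /* one CALL... */
        model_ret(&target);               /* ...balanced by one RET */

        /* A second RET with no matching CALL underflows the buffer. */
        printf("prediction available? %s\n",
               model_ret(&target) ? "yes" : "no (RSB underflow)");
        return 0;
    }

Overfilling the buffer merely loses stale entries, while underflow hands prediction to structures an attacker can train; that asymmetry is what the patch's comment means by imbalance "in the safe direction".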

Comments

Jan Beulich Aug. 5, 2022, 10:51 a.m. UTC | #1
On 05.08.2022 12:38, Andrew Cooper wrote:
> It turns out that we do in fact have RSB safety here, but not for obvious
> reasons.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
preferably with ...

> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -210,6 +210,26 @@ void check_wakeup_from_wait(void)
>      }
>  
>      /*
> +     * We are about to jump into a deeper call tree.  In principle, this risks
> +     * executing more RET than CALL instructions, and underflowing the RSB.
> +     *
> +     * However, we are pinned to the same CPU as previously.  Therefore,
> +     * either:
> +     *
> +     *   1) We've scheduled another vCPU in the meantime, and the context
> +     *      switch path has (by default) issued IPBP which flushes the RSB, or

... IBPB used here and ...

> +     *   2) We're still in the same context.  Returning back to the deeper
> +     *      call tree is resuming the execution path we left, and remains
> +     *      balanced as far as that logic is concerned.
> +     *
> +     *      In fact, the path though the scheduler will execute more CALL than

... (nit) "through" used here.

> +     *      RET instructions, making the RSB unbalanced in the safe direction.
> +     *
> +     * Therefore, no actions are necessary here to maintain RSB safety.
> +     */
> +
> +    /*
>       * Hand-rolled longjmp().
>       *
>       * check_wakeup_from_wait() is always called with a shallow stack,
Andrew Cooper Aug. 5, 2022, 11:10 a.m. UTC | #2
On 05/08/2022 11:51, Jan Beulich wrote:
> On 05.08.2022 12:38, Andrew Cooper wrote:
>> It turns out that we do in fact have RSB safety here, but not for obvious
>> reasons.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> preferably with ...
>
>> --- a/xen/common/wait.c
>> +++ b/xen/common/wait.c
>> @@ -210,6 +210,26 @@ void check_wakeup_from_wait(void)
>>      }
>>  
>>      /*
>> +     * We are about to jump into a deeper call tree.  In principle, this risks
>> +     * executing more RET than CALL instructions, and underflowing the RSB.
>> +     *
>> +     * However, we are pinned to the same CPU as previously.  Therefore,
>> +     * either:
>> +     *
>> +     *   1) We've scheduled another vCPU in the meantime, and the context
>> +     *      switch path has (by default) issued IPBP which flushes the RSB, or
> ... IBPB used here and ...
>
>> +     *   2) We're still in the same context.  Returning back to the deeper
>> +     *      call tree is resuming the execution path we left, and remains
>> +     *      balanced as far as that logic is concerned.
>> +     *
>> +     *      In fact, the path though the scheduler will execute more CALL than
> ... (nit) "through" used here.

Wow I failed at writing...  Fixed.

~Andrew

>
>> +     *      RET instructions, making the RSB unbalanced in the safe direction.
>> +     *
>> +     * Therefore, no actions are necessary here to maintain RSB safety.
>> +     */
>> +
>> +    /*
>>       * Hand-rolled longjmp().
>>       *
>>       * check_wakeup_from_wait() is always called with a shallow stack,
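
On point (1) of the comment under discussion: MSR_PRED_CMD (0x49) and PRED_CMD_IBPB are the architectural names for the IBPB interface, but the helper and guard in the sketch below are simplified assumptions for illustration, not Xen's actual context-switch logic, which carries additional conditions.

    /*
     * Sketch of case (1): when a different vCPU has run in the meantime,
     * the context-switch path (by default) issues IBPB via the PRED_CMD
     * MSR, which also flushes the RSB.  Ring-0 only, x86-64/GCC only.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_PRED_CMD   0x00000049
    #define PRED_CMD_IBPB  (1u << 0)

    static inline void wrmsr64(uint32_t msr, uint64_t val)
    {
        asm volatile ( "wrmsr" :: "c" (msr),
                       "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)) );
    }

    static void ctxt_switch_speculation_barrier(bool opt_ibpb)
    {
        if ( opt_ibpb )                   /* enabled by default */
            wrmsr64(MSR_PRED_CMD, PRED_CMD_IBPB);
    }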

Patch

diff --git a/xen/common/wait.c b/xen/common/wait.c
index e45345ede704..1a3b348a383a 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -210,6 +210,26 @@ void check_wakeup_from_wait(void)
     }
 
     /*
+     * We are about to jump into a deeper call tree.  In principle, this risks
+     * executing more RET than CALL instructions, and underflowing the RSB.
+     *
+     * However, we are pinned to the same CPU as previously.  Therefore,
+     * either:
+     *
+     *   1) We've scheduled another vCPU in the meantime, and the context
+     *      switch path has (by default) issued IPBP which flushes the RSB, or
+     *
+     *   2) We're still in the same context.  Returning back to the deeper
+     *      call tree is resuming the execution path we left, and remains
+     *      balanced as far as that logic is concerned.
+     *
+     *      In fact, the path though the scheduler will execute more CALL than
+     *      RET instructions, making the RSB unbalanced in the safe direction.
+     *
+     * Therefore, no actions are necessary here to maintain RSB safety.
+     */
+
+    /*
      * Hand-rolled longjmp().
      *
      * check_wakeup_from_wait() is always called with a shallow stack,
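
For readers without the rest of wait.c to hand, a reconstruction of the "hand-rolled longjmp()" shape may help explain why this is a "jump into a deeper call tree" at all. The field names (esp, stack, resume_ip) and the fixed-size buffer below are assumptions for the sketch, not the real structures.

    /*
     * Illustrative reconstruction (x86-64/GCC only, NOT the verbatim
     * wait.c implementation): copy the previously saved, deeper stack
     * image back into place, switch %rsp onto it, and jump to the saved
     * resume point.
     */
    #include <stddef.h>
    #include <string.h>

    struct waitqueue_vcpu {
        void *esp;                /* stack pointer saved when blocking */
        void *resume_ip;          /* where to resume (hypothetical field) */
        char  stack[4096];        /* copy of the stack contents */
    };

    static void __attribute__((noreturn))
    hand_rolled_longjmp(struct waitqueue_vcpu *wqv, void *stack_top)
    {
        size_t bytes = (char *)stack_top - (char *)wqv->esp;

        /*
         * The real code performs the copy and the stack switch together
         * in assembly, so that no live C frame sits on memory being
         * overwritten; two separate steps are only safe in a sketch.
         */
        memcpy(wqv->esp, wqv->stack, bytes);

        asm volatile ( "mov %0, %%rsp\n\t"  /* switch onto restored stack */
                       "jmp *%1"            /* resume the interrupted path */
                       :: "r" (wqv->esp), "r" (wqv->resume_ip)
                       : "memory" );
        __builtin_unreachable();
    }

Every frame in the restored image will eventually RET without this execution having issued the matching CALLs, which is exactly the imbalance the new comment argues is safe here.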