[L1TF,MDS,GT,v1,2/3] common/grant_table: harden bound accesses

Message ID 1558424746-24059-3-git-send-email-nmanthey@amazon.de
State New, archived
Series [L1TF,MDS,GT,v1,1/3] common/grant_table: harden helpers

Commit Message

Norbert Manthey May 21, 2019, 7:45 a.m. UTC
Guests can issue grant table operations and provide guest-controlled
data to them. This data is used as an index for memory loads after
bounds checks have been done. To avoid speculative out-of-bounds
accesses, we use the array_index_nospec macro where applicable, or the
block_speculation macro otherwise. Note that block_speculation is
always used in the calls to shared_entry_header and nr_grant_entries,
so that no additional protection is required once these functions have
been called.
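
To illustrate the two patterns (a sketch only; the real macros live in
xen/include/xen/nospec.h and are architecture-specific, and idx,
nr_ents, table and val are made-up names):

    /*
     * Portable approximation of the branch-free mask behind
     * array_index_nospec(): all ones if idx < size, all zeroes
     * otherwise. Like the generic implementation it relies on an
     * arithmetic right shift and on idx/size being below LONG_MAX.
     */
    static inline unsigned long mask_nospec(unsigned long idx,
                                            unsigned long size)
    {
        return ~(long)(idx | (size - 1 - idx)) >>
               (sizeof(long) * 8 - 1);
    }

    if ( idx >= nr_ents )
        return -EINVAL;

    /* Pattern 1: clamp the index, so a mispredicted pass of the
     * check above cannot form an out-of-bounds address; in real
     * code this is array_index_nospec(idx, nr_ents). */
    val = table[idx & mask_nospec(idx, nr_ents)];

    /* Pattern 2: serialise speculation entirely (an lfence on x86),
     * e.g. when one fence shall cover several checks. */
    block_speculation();
    val = table[idx];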

Speculative execution is not blocked in case one of the following
properties is true:
 - the path cannot be triggered by the guest
 - the path does not return to the guest
 - the path does not result in an out-of-bounds access
 - the path cannot be executed repeatedly
Only a path for which none of these properties holds can actually be
used to leak contiguous chunks of memory. Therefore, we only add the
penalty of protective mechanisms where a potential speculative
out-of-bounds access meets none of the above properties.

This commit addresses only out-of-bounds accesses whose index is
directly controlled by the guest and bounds-checked beforehand.
Potential out-of-bounds accesses caused by speculatively evaluating
the version of the current table are not addressed in this commit.

This is part of the speculative hardening effort.

Signed-off-by: Norbert Manthey <nmanthey@amazon.de>
---

Notes:
  v1:  adapt the comments for shared_entry_header to show that the calls
       'also' block speculative execution

 xen/common/grant_table.c | 43 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 34 insertions(+), 9 deletions(-)

Comments

Jan Beulich May 23, 2019, 2:17 p.m. UTC | #1
>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
> Guests can issue grant table operations and provide guest controlled
> data to them. This data is used as index for memory loads after bound
> checks have been done. To avoid speculative out-of-bound accesses, we
> use the array_index_nospec macro where applicable, or the macro
> block_speculation. Note, that the block_speculation is always used in

s/always/already/ ?

> the calls to shared_entry_header and nr_grant_entries, so that no
> additional protection is required once these functions have been
> called.

Isn't this too broad a statement? There's some protection, but not
for just anything that follows.

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -988,9 +988,10 @@ map_grant_ref(
>          PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>                   op->ref, rgt->domain->domain_id);
>  
> -    act = active_entry_acquire(rgt, op->ref);
> +    /* This call also ensures the above check cannot be passed speculatively */
>      shah = shared_entry_header(rgt, op->ref);
>      status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
> +    act = active_entry_acquire(rgt, op->ref);

I know we've been there before, but what guarantees that the
compiler won't reload op->ref from memory for either of the
latter two accesses? In fact afaict it always will, due to the
memory clobber in alternative().

> @@ -3863,6 +3883,9 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>              return -EINVAL;
>      }
>  
> +    /* Make sure idx is bounded wrt nr_status_frames */
> +    block_speculation();
> +
>      *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>      return 0;
>  }

Why don't you use array_index_nospec() here? And how come
speculation into gnttab_grow_table() is fine a few lines above?
And what about the similar code in gnttab_get_shared_frame_mfn()?

Jan
Norbert Manthey May 24, 2019, 9:54 a.m. UTC | #2
On 5/23/19 16:17, Jan Beulich wrote:
>>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
>> Guests can issue grant table operations and provide guest controlled
>> data to them. This data is used as index for memory loads after bound
>> checks have been done. To avoid speculative out-of-bound accesses, we
>> use the array_index_nospec macro where applicable, or the macro
>> block_speculation. Note, that the block_speculation is always used in
> s/always/already/ ?
They both work, but 'always' underlines that a caller can rely on the
fact that this will happen on all execution paths of that function.
Hence, I would like to stick to 'always' here.
>
>> the calls to shared_entry_header and nr_grant_entries, so that no
>> additional protection is required once these functions have been
>> called.
> Isn't this too broad a statement? There's some protection, but not
> for just anything that follows.
You are right, the given protection is that any bounds check that could
have been bypassed speculatively is enforced after calling one of the
two functions. I will rephrase the commit message accordingly.
>
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -988,9 +988,10 @@ map_grant_ref(
>>          PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>>                   op->ref, rgt->domain->domain_id);
>>  
>> -    act = active_entry_acquire(rgt, op->ref);
>> +    /* This call also ensures the above check cannot be passed speculatively */
>>      shah = shared_entry_header(rgt, op->ref);
>>      status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
>> +    act = active_entry_acquire(rgt, op->ref);
> I know we've been there before, but what guarantees that the
> compiler won't reload op->ref from memory for either of the
> latter two accesses? In fact afaict it always will, due to the
> memory clobber in alternative().
The compiler can reload op->ref from memory; that is fine here, as the
bounds check happens above, and the shared_entry_header call comes with
an lfence() by now, so that we will not continue executing speculatively
with op->ref being out-of-bounds, independently of whether it comes from
memory or from a register.
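
For context, a simplified sketch of what patch 1/3 makes this helper
look like, per the commit message (the exact code is in
xen/common/grant_table.c):

    static grant_entry_header_t *
    shared_entry_header(struct grant_table *t, grant_ref_t ref)
    {
        /* Fence added by patch 1/3: speculation cannot pass this
         * point, so the caller's bounds check on 'ref' is resolved
         * before the dependent accesses below are issued. */
        block_speculation();

        if ( t->gt_version == 1 )
            return (grant_entry_header_t *)&shared_entry_v1(t, ref);
        else
            return &shared_entry_v2(t, ref).hdr;
    }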
>
>> @@ -3863,6 +3883,9 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>>              return -EINVAL;
>>      }
>>  
>> +    /* Make sure idx is bounded wrt nr_status_frames */
>> +    block_speculation();
>> +
>>      *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>>      return 0;
>>  }
> Why don't you use array_index_nospec() here? And how come
There is no specific reason. I will swap.
> speculation into gnttab_grow_table() is fine a few lines above?
I do not see a reason why it would be bad to enter that function
speculatively. There are no accesses that would have to be protected by
extra checks, afaict. Otherwise, that function should be protected on
its own.
> And what about the similar code in gnttab_get_shared_frame_mfn()?
I will add an array_index_nospec there as well.
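For gnttab_get_status_frame_mfn() the swap would be roughly (a sketch
of the planned v2 change, not the code in this v1):

    *mfn = _mfn(virt_to_mfn(
               gt->status[array_index_nospec(idx, nr_status_frames(gt))]));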
>
> Jan
Jan Beulich May 24, 2019, 11:10 a.m. UTC | #3
>>> On 24.05.19 at 11:54, <nmanthey@amazon.de> wrote:
> On 5/23/19 16:17, Jan Beulich wrote:
>>>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
>>> Guests can issue grant table operations and provide guest controlled
>>> data to them. This data is used as index for memory loads after bound
>>> checks have been done. To avoid speculative out-of-bound accesses, we
>>> use the array_index_nospec macro where applicable, or the macro
>>> block_speculation. Note, that the block_speculation is always used in
>> s/always/already/ ?
> They both work, but the 'always' underlines that a caller can rely on
> the fact that this will happen on all execution path of that function.
> Hence, I like to stick to 'always' here.

Well, I'm not a native speaker, but to me "always" doesn't express
what you want to express here. What you mean I'd word as "... is used
on all paths of ..."

>>> the calls to shared_entry_header and nr_grant_entries, so that no
>>> additional protection is required once these functions have been
>>> called.

As an aside, your use of "in the calls to" also looks misleading to me,
because the fences sit in the functions, not at the call sites.

>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -988,9 +988,10 @@ map_grant_ref(
>>>          PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>>>                   op->ref, rgt->domain->domain_id);
>>>  
>>> -    act = active_entry_acquire(rgt, op->ref);
>>> +    /* This call also ensures the above check cannot be passed speculatively */
>>>      shah = shared_entry_header(rgt, op->ref);
>>>      status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
>>> +    act = active_entry_acquire(rgt, op->ref);
>> I know we've been there before, but what guarantees that the
>> compiler won't reload op->ref from memory for either of the
>> latter two accesses? In fact afaict it always will, due to the
>> memory clobber in alternative().
> The compiler can reload op->ref from memory, that is fine here, as the
> bound check happens above, and the shared_entry call comes with an
> lfence() by now, so that we will not continue executing speculatively
> with op->ref being out-of-bounds, independently of whether it's from
> memory or registers.

I don't buy this argumentation: In particular if the cache line got
flushed for whatever reason, the load may take almost arbitrarily
long, opening up a large speculation window again using the
destination register of the load (whatever - not bounds checked -
value it ends up holding).

>>> @@ -3863,6 +3883,9 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>>>              return -EINVAL;
>>>      }
>>>  
>>> +    /* Make sure idx is bounded wrt nr_status_frames */
>>> +    block_speculation();
>>> +
>>>      *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>>>      return 0;
>>>  }
>> Why don't you use array_index_nospec() here? And how come
> There is no specific reason. I will swap.
>> speculation into gnttab_grow_table() is fine a few lines above?
> I do not see a reason why it would be bad to enter that function
> speculatively. There are no accesses that would have to be protected by
> extra checks, afaict. Otherwise, that function should be protected by
> its own.

Which in fact happens, but only in patch 3. This may be worth saying
explicitly here.

Jan
Norbert Manthey July 8, 2019, 12:58 p.m. UTC | #4
Sorry for the late reply. I will try to pick up where we left the
discussion last time.

On 5/24/19 13:10, Jan Beulich wrote:
>>>> On 24.05.19 at 11:54, <nmanthey@amazon.de> wrote:
>> On 5/23/19 16:17, Jan Beulich wrote:
>>>>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
>>>> Guests can issue grant table operations and provide guest controlled
>>>> data to them. This data is used as index for memory loads after bound
>>>> checks have been done. To avoid speculative out-of-bound accesses, we
>>>> use the array_index_nospec macro where applicable, or the macro
>>>> block_speculation. Note, that the block_speculation is always used in
>>> s/always/already/ ?
>> They both work, but the 'always' underlines that a caller can rely on
>> the fact that this will happen on all execution path of that function.
>> Hence, I like to stick to 'always' here.
> Well, I'm not a native speaker, but to me "always" doesn't express
> what you want to express here. What you mean I'd word "... is used
> on all paths of ..."
I will rephrase accordingly.
>
>>>> the calls to shared_entry_header and nr_grant_entries, so that no
>>>> additional protection is required once these functions have been
>>>> called.
> As an aside, your use of "in the calls to" looks also misleading to me,
> because the fences sit in the functions, not at the call sites.
Will fix.
>
>>>> --- a/xen/common/grant_table.c
>>>> +++ b/xen/common/grant_table.c
>>>> @@ -988,9 +988,10 @@ map_grant_ref(
>>>>          PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>>>>                   op->ref, rgt->domain->domain_id);
>>>>  
>>>> -    act = active_entry_acquire(rgt, op->ref);
>>>> +    /* This call also ensures the above check cannot be passed speculatively */
>>>>      shah = shared_entry_header(rgt, op->ref);
>>>>      status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
>>>> +    act = active_entry_acquire(rgt, op->ref);
>>> I know we've been there before, but what guarantees that the
>>> compiler won't reload op->ref from memory for either of the
>>> latter two accesses? In fact afaict it always will, due to the
>>> memory clobber in alternative().
>> The compiler can reload op->ref from memory, that is fine here, as the
>> bound check happens above, and the shared_entry call comes with an
>> lfence() by now, so that we will not continue executing speculatively
>> with op->ref being out-of-bounds, independently of whether it's from
>> memory or registers.
> I don't buy this argumentation: In particular if the cache line got
> flushed for whatever reason, the load may take almost arbitrarily
> long, opening up a large speculation window again using the
> destination register of the load (whatever - not bounds checked -
> value that ends up holding).
I agree, the given protection does not force the CPU to pick up the
fixed value. As you already noticed, the presented fix might not work in
all cases, but it is among the suitable solutions we have today to
prevent simple user-controlled out-of-bounds accesses during
speculation. Forcing an attacker to rely on a stale register value
during speculation makes a potential out-of-bounds access much more
difficult. Hence, the proposed solution looks good enough to me.
>
>>>> @@ -3863,6 +3883,9 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>>>>              return -EINVAL;
>>>>      }
>>>>  
>>>> +    /* Make sure idx is bounded wrt nr_status_frames */
>>>> +    block_speculation();
>>>> +
>>>>      *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>>>>      return 0;
>>>>  }
>>> Why don't you use array_index_nospec() here? And how come
>> There is no specific reason. I will swap.
>>> speculation into gnttab_grow_table() is fine a few lines above?
>> I do not see a reason why it would be bad to enter that function
>> speculatively. There are no accesses that would have to be protected by
>> extra checks, afaict. Otherwise, that function should be protected by
>> its own.
> Which in fact happens, but only in patch 3. This may be worth saying
> explicitly here.

Do you want me to explicitly mention this in the commit message, or add
a comment here, which I would have to drop again in patch 3? For now,
I'd just leave it as is, as the version-based fixes are handled in the
other commit.

Best,
Norbert
Jan Beulich July 10, 2019, 3:04 a.m. UTC | #5
On 08.07.2019 14:58, Norbert Manthey wrote:
> On 5/24/19 13:10, Jan Beulich wrote:
>>>>> On 24.05.19 at 11:54, <nmanthey@amazon.de> wrote:
>>> On 5/23/19 16:17, Jan Beulich wrote:
>>>>>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
>>>>> --- a/xen/common/grant_table.c
>>>>> +++ b/xen/common/grant_table.c
>>>>> @@ -988,9 +988,10 @@ map_grant_ref(
>>>>>           PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>>>>>                    op->ref, rgt->domain->domain_id);
>>>>>   
>>>>> -    act = active_entry_acquire(rgt, op->ref);
>>>>> +    /* This call also ensures the above check cannot be passed speculatively */
>>>>>       shah = shared_entry_header(rgt, op->ref);
>>>>>       status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
>>>>> +    act = active_entry_acquire(rgt, op->ref);
>>>> I know we've been there before, but what guarantees that the
>>>> compiler won't reload op->ref from memory for either of the
>>>> latter two accesses? In fact afaict it always will, due to the
>>>> memory clobber in alternative().
>>> The compiler can reload op->ref from memory, that is fine here, as the
>>> bound check happens above, and the shared_entry call comes with an
>>> lfence() by now, so that we will not continue executing speculatively
>>> with op->ref being out-of-bounds, independently of whether it's from
>>> memory or registers.
>> I don't buy this argumentation: In particular if the cache line got
>> flushed for whatever reason, the load may take almost arbitrarily
>> long, opening up a large speculation window again using the
>> destination register of the load (whatever - not bounds checked -
>> value that ends up holding).
> I agree, the given protection does not force the CPU to pick up the
> fixed value. As you already noticed, the presented fix might not work in
> all cases, but is among the suitable solutions we have today to prevent
> simple user controlled out-of-bound accesses during speculation. Relying
> on the stale value of the register that might be used during speculation
> makes a potential out-of-bound access much more difficult. Hence, the
> proposed solution looks good enough to me.

But using a local variable further reduces the risk afaict: Either
the compiler puts it into a register, in which case we're entirely
fine. Or it puts it on the stack, which I assume is more likely to
stay in cache than a reference to some other data structure (iirc
also on the stack, but not in the current frame).
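
Concretely, the suggestion would look roughly like this in
map_grant_ref() (a sketch mirroring the hunk quoted above, not
committed code):

    grant_ref_t ref = op->ref;   /* latch the guest value once */

    if ( unlikely(ref >= nr_grant_entries(rgt)) )
        PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
                 ref, rgt->domain->domain_id);

    /* This call also fences speculation past the check above. */
    shah = shared_entry_header(rgt, ref);
    status = rgt->gt_version == 1 ? &shah->flags
                                  : &status_entry(rgt, ref);
    act = active_entry_acquire(rgt, ref);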

>>>>> @@ -3863,6 +3883,9 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>>>>>               return -EINVAL;
>>>>>       }
>>>>>   
>>>>> +    /* Make sure idx is bounded wrt nr_status_frames */
>>>>> +    block_speculation();
>>>>> +
>>>>>       *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>>>>>       return 0;
>>>>>   }
>>>> Why don't you use array_index_nospec() here? And how come
>>> There is no specific reason. I will swap.
>>>> speculation into gnttab_grow_table() is fine a few lines above?
>>> I do not see a reason why it would be bad to enter that function
>>> speculatively. There are no accesses that would have to be protected by
>>> extra checks, afaict. Otherwise, that function should be protected by
>>> its own.
>> Which in fact happens, but only in patch 3. This may be worth saying
>> explicitly here.
> 
> Do you want me to explicitly mention this in the commit message, or add
> a comment here, which I have to drop in patch 3 again? For now, I'd just
> leave it as is, as the version based fixes are handled in the other commit.

A commit message remark would both help understand things now and
in the future, and at the same time avoid me or someone else re-raising
the question next time round, not least because of the noticeable gaps
between versions.

Jan
Norbert Manthey July 10, 2019, 8:54 a.m. UTC | #6
On 7/10/19 05:04, Jan Beulich wrote:
> On 08.07.2019 14:58, Norbert Manthey wrote:
>> On 5/24/19 13:10, Jan Beulich wrote:
>>>>>> On 24.05.19 at 11:54, <nmanthey@amazon.de> wrote:
>>>> On 5/23/19 16:17, Jan Beulich wrote:
>>>>>>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
>>>>>> --- a/xen/common/grant_table.c
>>>>>> +++ b/xen/common/grant_table.c
>>>>>> @@ -988,9 +988,10 @@ map_grant_ref(
>>>>>>           PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>>>>>>                    op->ref, rgt->domain->domain_id);
>>>>>>   
>>>>>> -    act = active_entry_acquire(rgt, op->ref);
>>>>>> +    /* This call also ensures the above check cannot be passed speculatively */
>>>>>>       shah = shared_entry_header(rgt, op->ref);
>>>>>>       status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
>>>>>> +    act = active_entry_acquire(rgt, op->ref);
>>>>> I know we've been there before, but what guarantees that the
>>>>> compiler won't reload op->ref from memory for either of the
>>>>> latter two accesses? In fact afaict it always will, due to the
>>>>> memory clobber in alternative().
>>>> The compiler can reload op->ref from memory, that is fine here, as the
>>>> bound check happens above, and the shared_entry call comes with an
>>>> lfence() by now, so that we will not continue executing speculatively
>>>> with op->ref being out-of-bounds, independently of whether it's from
>>>> memory or registers.
>>> I don't buy this argumentation: In particular if the cache line got
>>> flushed for whatever reason, the load may take almost arbitrarily
>>> long, opening up a large speculation window again using the
>>> destination register of the load (whatever - not bounds checked -
>>> value that ends up holding).
>> I agree, the given protection does not force the CPU to pick up the
>> fixed value. As you already noticed, the presented fix might not work in
>> all cases, but is among the suitable solutions we have today to prevent
>> simple user controlled out-of-bound accesses during speculation. Relying
>> on the stale value of the register that might be used during speculation
>> makes a potential out-of-bound access much more difficult. Hence, the
>> proposed solution looks good enough to me.
> But using a local variable further reduces the risk afaict: Either
> the compiler puts it into a register, in which case we're entirely
> fine. Or it puts it on the stack, which I assume is more likely to
> stay in cache than a reference to some other data structure (iirc
> also on the stack, but not in the current frame).
If you want me to introduce a local variable, I can do that. I remember
we had discussions around that as well.
>
>>>>>> @@ -3863,6 +3883,9 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>>>>>>               return -EINVAL;
>>>>>>       }
>>>>>>   
>>>>>> +    /* Make sure idx is bounded wrt nr_status_frames */
>>>>>> +    block_speculation();
>>>>>> +
>>>>>>       *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>>>>>>       return 0;
>>>>>>   }
>>>>> Why don't you use array_index_nospec() here? And how come
>>>> There is no specific reason. I will swap.
>>>>> speculation into gnttab_grow_table() is fine a few lines above?
>>>> I do not see a reason why it would be bad to enter that function
>>>> speculatively. There are no accesses that would have to be protected by
>>>> extra checks, afaict. Otherwise, that function should be protected by
>>>> its own.
>>> Which in fact happens, but only in patch 3. This may be worth saying
>>> explicitly here.
>> Do you want me to explicitly mention this in the commit message, or add
>> a comment here, which I have to drop in patch 3 again? For now, I'd just
>> leave it as is, as the version based fixes are handled in the other commit.
> A commit message remark would both help understand things now and
> in the future, and at the same time avoid me or someone else re-
> raising the question next time round, not the least because of the
> noticable gaps between versions.

I will extend the commit message accordingly.

Best,
Norbert
Jan Beulich July 10, 2019, 11:59 a.m. UTC | #7
On 10.07.2019 10:54, Norbert Manthey wrote:
> On 7/10/19 05:04, Jan Beulich wrote:
>> On 08.07.2019 14:58, Norbert Manthey wrote:
>>> On 5/24/19 13:10, Jan Beulich wrote:
>>>>>>> On 24.05.19 at 11:54, <nmanthey@amazon.de> wrote:
>>>>> On 5/23/19 16:17, Jan Beulich wrote:
>>>>>>>>> On 21.05.19 at 09:45, <nmanthey@amazon.de> wrote:
>>>>>>> --- a/xen/common/grant_table.c
>>>>>>> +++ b/xen/common/grant_table.c
>>>>>>> @@ -988,9 +988,10 @@ map_grant_ref(
>>>>>>>            PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
>>>>>>>                     op->ref, rgt->domain->domain_id);
>>>>>>>    
>>>>>>> -    act = active_entry_acquire(rgt, op->ref);
>>>>>>> +    /* This call also ensures the above check cannot be passed speculatively */
>>>>>>>        shah = shared_entry_header(rgt, op->ref);
>>>>>>>        status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
>>>>>>> +    act = active_entry_acquire(rgt, op->ref);
>>>>>> I know we've been there before, but what guarantees that the
>>>>>> compiler won't reload op->ref from memory for either of the
>>>>>> latter two accesses? In fact afaict it always will, due to the
>>>>>> memory clobber in alternative().
>>>>> The compiler can reload op->ref from memory, that is fine here, as the
>>>>> bound check happens above, and the shared_entry call comes with an
>>>>> lfence() by now, so that we will not continue executing speculatively
>>>>> with op->ref being out-of-bounds, independently of whether it's from
>>>>> memory or registers.
>>>> I don't buy this argumentation: In particular if the cache line got
>>>> flushed for whatever reason, the load may take almost arbitrarily
>>>> long, opening up a large speculation window again using the
>>>> destination register of the load (whatever - not bounds checked -
>>>> value that ends up holding).
>>> I agree, the given protection does not force the CPU to pick up the
>>> fixed value. As you already noticed, the presented fix might not work in
>>> all cases, but is among the suitable solutions we have today to prevent
>>> simple user controlled out-of-bound accesses during speculation. Relying
>>> on the stale value of the register that might be used during speculation
>>> makes a potential out-of-bound access much more difficult. Hence, the
>>> proposed solution looks good enough to me.
>> But using a local variable further reduces the risk afaict: Either
>> the compiler puts it into a register, in which case we're entirely
>> fine. Or it puts it on the stack, which I assume is more likely to
>> stay in cache than a reference to some other data structure (iirc
>> also on the stack, but not in the current frame).
> If you want me to introduce a local variable, I can do that. I remember
> we had discussions around that as well.

I think this would be (at least slightly) better, yes.

Jan

Patch

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -988,9 +988,10 @@  map_grant_ref(
         PIN_FAIL(unlock_out, GNTST_bad_gntref, "Bad ref %#x for d%d\n",
                  op->ref, rgt->domain->domain_id);
 
-    act = active_entry_acquire(rgt, op->ref);
+    /* This call also ensures the above check cannot be passed speculatively */
     shah = shared_entry_header(rgt, op->ref);
     status = rgt->gt_version == 1 ? &shah->flags : &status_entry(rgt, op->ref);
+    act = active_entry_acquire(rgt, op->ref);
 
     /* If already pinned, check the active domid and avoid refcnt overflow. */
     if ( act->pin &&
@@ -1346,6 +1347,9 @@  unmap_common(
         goto unlock_out;
     }
 
+    /* Make sure the above bound check cannot be bypassed speculatively */
+    block_speculation();
+
     act = active_entry_acquire(rgt, op->ref);
 
     /*
@@ -1443,7 +1447,7 @@  unmap_common_complete(struct gnttab_unmap_common *op)
     struct page_info *pg;
     uint16_t *status;
 
-    if ( !op->done )
+    if ( evaluate_nospec(!op->done) )
     {
         /* unmap_common() didn't do anything - nothing to complete. */
         return;
@@ -2051,6 +2055,7 @@  gnttab_prepare_for_transfer(
         goto fail;
     }
 
+    /* This call also ensures the above check cannot be passed speculatively */
     sha = shared_entry_header(rgt, ref);
 
     scombo.word = *(u32 *)&sha->flags;
@@ -2248,7 +2253,12 @@  gnttab_transfer(
         spin_unlock(&e->page_alloc_lock);
         okay = gnttab_prepare_for_transfer(e, d, gop.ref);
 
-        if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
+        /*
+         * Make sure the reference bound check in gnttab_prepare_for_transfer
+         * is respected and speculative execution is blocked accordingly
+         */
+        if ( unlikely(!evaluate_nospec(okay)) ||
+            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
         {
             bool drop_dom_ref;
 
@@ -2435,8 +2445,10 @@  acquire_grant_for_copy(
         PIN_FAIL(gt_unlock_out, GNTST_bad_gntref,
                  "Bad grant reference %#x\n", gref);
 
-    act = active_entry_acquire(rgt, gref);
+    /* This call also ensures the above check cannot be passed speculatively */
     shah = shared_entry_header(rgt, gref);
+    act = active_entry_acquire(rgt, gref);
+
     if ( rgt->gt_version == 1 )
     {
         sha2 = NULL;
@@ -2853,6 +2865,9 @@  static int gnttab_copy_buf(const struct gnttab_copy *op,
                  op->dest.offset, dest->ptr.offset,
                  op->len, dest->len);
 
+    /* Make sure the above checks are not bypassed speculatively */
+    block_speculation();
+
     memcpy(dest->virt + op->dest.offset, src->virt + op->source.offset,
            op->len);
     gnttab_mark_dirty(dest->domain, dest->mfn);
@@ -2972,7 +2987,7 @@  gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
     struct grant_table *gt = currd->grant_table;
     grant_entry_v1_t reserved_entries[GNTTAB_NR_RESERVED_ENTRIES];
     int res;
-    unsigned int i;
+    unsigned int i, nr_ents;
 
     if ( copy_from_guest(&op, uop, 1) )
         return -EFAULT;
@@ -2996,7 +3011,8 @@  gnttab_set_version(XEN_GUEST_HANDLE_PARAM(gnttab_set_version_t) uop)
      * are allowed to be in use (xenstore/xenconsole keeps them mapped).
      * (You need to change the version number for e.g. kexec.)
      */
-    for ( i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_grant_entries(gt); i++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( i = GNTTAB_NR_RESERVED_ENTRIES; i < nr_ents; i++ )
     {
         if ( read_atomic(&_active_entry(gt, i).pin) != 0 )
         {
@@ -3238,6 +3254,9 @@  swap_grant_ref(grant_ref_t ref_a, grant_ref_t ref_b)
     if ( unlikely(ref_b >= nr_grant_entries(d->grant_table)))
         PIN_FAIL(out, GNTST_bad_gntref, "Bad ref-b %#x\n", ref_b);
 
+    /* Make sure the above checks are not bypassed speculatively */
+    block_speculation();
+
     /* Swapping the same ref is a no-op. */
     if ( ref_a == ref_b )
         goto out;
@@ -3707,13 +3726,14 @@  void grant_table_warn_active_grants(struct domain *d)
     struct grant_table *gt = d->grant_table;
     struct active_grant_entry *act;
     grant_ref_t ref;
-    unsigned int nr_active = 0;
+    unsigned int nr_active = 0, nr_ents;
 
 #define WARN_GRANT_MAX 10
 
     grant_read_lock(gt);
 
-    for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( ref = 0; ref != nr_ents; ref++ )
     {
         act = active_entry_acquire(gt, ref);
         if ( !act->pin )
@@ -3863,6 +3883,9 @@  static int gnttab_get_status_frame_mfn(struct domain *d,
             return -EINVAL;
     }
 
+    /* Make sure idx is bounded wrt nr_status_frames */
+    block_speculation();
+
     *mfn = _mfn(virt_to_mfn(gt->status[idx]));
     return 0;
 }
@@ -3962,6 +3985,7 @@  static void gnttab_usage_print(struct domain *rd)
     int first = 1;
     grant_ref_t ref;
     struct grant_table *gt = rd->grant_table;
+    unsigned int nr_ents;
 
     printk("      -------- active --------       -------- shared --------\n");
     printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
@@ -3974,7 +3998,8 @@  static void gnttab_usage_print(struct domain *rd)
            nr_grant_frames(gt), gt->max_grant_frames,
            nr_maptrack_frames(gt), gt->max_maptrack_frames);
 
-    for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
+    nr_ents = nr_grant_entries(gt);
+    for ( ref = 0; ref != nr_ents; ref++ )
     {
         struct active_grant_entry *act;
         struct grant_entry_header *sha;