
[1/3] x86/HVM: limit writes to incoming TSS during task switch

Message ID: 58345C450200007800120CD1@prv-mh.provo.novell.com
State: New, archived

Commit Message

Jan Beulich Nov. 22, 2016, 1:55 p.m. UTC
The only field modified (and even that conditionally) is the back link.
Write only that field, and only when it actually has been written to.

Take the opportunity and also ditch the pointless initializer from the
"tss" local variable.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2890,7 +2890,7 @@ void hvm_task_switch(
         u32 cr3, eip, eflags, eax, ecx, edx, ebx, esp, ebp, esi, edi;
         u16 es, _3, cs, _4, ss, _5, ds, _6, fs, _7, gs, _8, ldt, _9;
         u16 trace, iomap;
-    } tss = { 0 };
+    } tss;
 
     hvm_get_segment_register(v, x86_seg_gdtr, &gdt);
     hvm_get_segment_register(v, x86_seg_tr, &prev_tr);
@@ -3010,12 +3010,6 @@ void hvm_task_switch(
     regs->esi    = tss.esi;
     regs->edi    = tss.edi;
 
-    if ( (taskswitch_reason == TSW_call_or_int) )
-    {
-        regs->eflags |= X86_EFLAGS_NT;
-        tss.back_link = prev_tr.sel;
-    }
-
     exn_raised = 0;
     if ( hvm_load_segment_selector(x86_seg_es, tss.es, tss.eflags) ||
          hvm_load_segment_selector(x86_seg_cs, tss.cs, tss.eflags) ||
@@ -3025,12 +3019,18 @@ void hvm_task_switch(
          hvm_load_segment_selector(x86_seg_gs, tss.gs, tss.eflags) )
         exn_raised = 1;
 
-    rc = hvm_copy_to_guest_virt(
-        tr.base, &tss, sizeof(tss), PFEC_page_present);
-    if ( rc == HVMCOPY_bad_gva_to_gfn )
-        exn_raised = 1;
-    else if ( rc != HVMCOPY_okay )
-        goto out;
+    if ( taskswitch_reason == TSW_call_or_int )
+    {
+        regs->eflags |= X86_EFLAGS_NT;
+        tss.back_link = prev_tr.sel;
+
+        rc = hvm_copy_to_guest_virt(tr.base + offsetof(typeof(tss), back_link),
+                                    &tss.back_link, sizeof(tss.back_link), 0);
+        if ( rc == HVMCOPY_bad_gva_to_gfn )
+            exn_raised = 1;
+        else if ( rc != HVMCOPY_okay )
+            goto out;
+    }
 
     if ( (tss.trace & 1) && !exn_raised )
         hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);

Comments

Andrew Cooper Nov. 22, 2016, 4:32 p.m. UTC | #1
On 22/11/16 13:55, Jan Beulich wrote:
> The only field modified (and even that conditionally) is the back link.
> Write only that field, and only when it actually has been written to.
>
> Take the opportunity and also ditch the pointless initializer from the
> "tss" local variable.

It would help to point out that tss is unconditionally filled completely
from guest memory.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

As for the mechanical adjustments here, Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>

However, is the position of the backlink write actually correct?  I'd
have thought that all accesses to the old tss happen before switching cr3.

I can't find a useful description of the order of events in a task
switch in either manual.

~Andrew
Jan Beulich Nov. 23, 2016, 8:27 a.m. UTC | #2
>>> On 22.11.16 at 17:32, <andrew.cooper3@citrix.com> wrote:
> On 22/11/16 13:55, Jan Beulich wrote:
>> The only field modified (and even that conditionally) is the back link.
>> Write only that field, and only when it actually has been written to.
>>
>> Take the opportunity and also ditch the pointless initializer from the
>> "tss" local variable.
> 
> It would help to point out that tss is unconditionally filled completely
> from guest memory.
> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> As for the mechanical adjustments here, Reviewed-by: Andrew Cooper
> <andrew.cooper3@citrix.com>
> 
> However, is the position of the backlink write actually correct?  I'd
> have thought that all accesses to the old tss happen before switching cr3.

But the backlink gets written into the incoming TSS. And I think it
is being assumed anyway that both TSSes (just like the GDT) are
visible through either CR3, especially since the incoming TSS is
necessarily read in the old address space context.

Jan
Andrew Cooper Nov. 23, 2016, 10:59 a.m. UTC | #3
On 23/11/16 08:27, Jan Beulich wrote:
>>>> On 22.11.16 at 17:32, <andrew.cooper3@citrix.com> wrote:
>> On 22/11/16 13:55, Jan Beulich wrote:
>>> The only field modified (and even that conditionally) is the back link.
>>> Write only that field, and only when it actually has been written to.
>>>
>>> Take the opportunity and also ditch the pointless initializer from the
>>> "tss" local variable.
>> It would help to point out that tss is unconditionally filled completely
>> from guest memory.
>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> As for the mechanical adjustments here, Reviewed-by: Andrew Cooper
>> <andrew.cooper3@citrix.com>
>>
>> However, is the position of the backlink write actually correct?  I'd
>> have thought that all accesses to the old tss happen before switching cr3.
> But the backlink gets written into the incoming TSS.

Ah - of course it does.  I was getting confused by the repeated use of
the tss structure.  Sorry for the noise.

~Andrew