ioreq: cope with server disappearing while I/O is pending

Message ID 20201005140817.1339-1-paul@xen.org (mailing list archive)
State New, archived

Commit Message

Paul Durrant Oct. 5, 2020, 2:08 p.m. UTC
From: Paul Durrant <pdurrant@amazon.com>

Currently, in the event of an ioreq server being destroyed while I/O is
pending in the attached emulator, it is possible that hvm_wait_for_io() will
dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
shared page after it has been freed.
This will only occur if the emulator (which is necessarily running in a
service domain with some degree of privilege) does not complete pending I/O
during tear-down and is not directly exploitable by a guest domain.

This patch adds a call to get_pending_vcpu() into the condition of the
wait_on_xen_event_channel() macro to verify the continued existence of the
ioreq server. Should it disappear, the guest domain will be crashed.

NOTE: take the opportunity to modify the text on one gdprintk() for
      consistency with others.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/ioreq.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)
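
For context, the fix relies on get_pending_vcpu() returning NULL once the
ioreq server (and with it its hvm_ioreq_vcpu entries) has been torn down.
A simplified sketch of what that helper does follows; the details here are
approximate and the authoritative version lives in xen/arch/x86/hvm/ioreq.c:

    static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
                                                   struct hvm_ioreq_server **srvp)
    {
        struct domain *d = v->domain;
        struct hvm_ioreq_server *s;
        unsigned int id;

        FOR_EACH_IOREQ_SERVER(d, id, s)
        {
            struct hvm_ioreq_vcpu *sv;

            list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
            {
                /* Only a still-registered vcpu with I/O outstanding matches. */
                if ( sv->vcpu == v && sv->pending )
                {
                    if ( srvp )
                        *srvp = s;
                    return sv;
                }
            }
        }

        return NULL;
    }

Hence, if the server is destroyed while the vcpu is blocked, the re-acquired
pointer is NULL and the new error path crashes the guest instead of touching
freed memory.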

Comments

Andrew Cooper Oct. 5, 2020, 2:30 p.m. UTC | #1
On 05/10/2020 15:08, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Currently, in the event of an ioreq server being destroyed while I/O is
> pending in the attached emulator, it is possible that hvm_wait_for_io() will
> dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> shared page after it has been freed.

It's not legitimate for the shared page to be freed while Xen is still
using it.

Furthermore, this patch only covers the most obvious place for the bug
to manifest.  It doesn't fix them all, as an ioreq server destroy can
race with an in-progress emulation and still suffer a UAF.


An extra ref needs holding on the shared page between acquire_resource
and domain destruction, to cover Xen's use of the page.

Similarly, I don't think it is legitimate for hvm_ioreq_vcpu to be freed
while potentially in use.  These need to stick around until domain
destruction as well, I think.
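
To illustrate the shape of that suggestion (a sketch only: the helper names
and call sites below are hypothetical, not taken from the Xen tree), the page
backing the shared ioreq structure would carry a reference owned by Xen
itself, dropped at domain destruction rather than at ioreq server teardown:

    /* Hypothetical sketch -- names and call sites are illustrative only. */
    static void ioreq_server_pin_shared_page(struct domain *d,
                                             struct hvm_ioreq_page *iorp)
    {
        /*
         * Extra reference covering Xen's own reads (e.g. from
         * hvm_wait_for_io()), independent of whatever reference the
         * emulator's acquire_resource mapping holds.
         */
        if ( !get_page(iorp->page, d) )
            domain_crash(d);
    }

    static void ioreq_server_unpin_shared_page(struct hvm_ioreq_page *iorp)
    {
        /* Dropped from domain destruction, not from ioreq server teardown. */
        put_page(iorp->page);
    }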

> This will only occur if the emulator (which is necessarily running in a
> service domain with some degree of privilege) does not complete pending I/O
> during tear-down and is not directly exploitable by a guest domain.
>
> This patch adds a call to get_pending_vcpu() into the condition of the
> wait_on_xen_event_channel() macro to verify the continued existence of the
> ioreq server. Should it disappear, the guest domain will be crashed.
>
> NOTE: take the opportunity to modify the text on one gdprintk() for
>       consistency with others.

I know this is tangential, but all these gdprintk()'s need to change to
gprintk()'s, so release builds provide at least some hint as to why the
domain has been crashed.
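
For illustration, the message added by this patch would then become something
like the following (gdprintk() is compiled out of non-debug builds, whereas
gprintk() is retained, rate-limited, in release builds):

    gprintk(XENLOG_ERR, "HVM ioreq server has disappeared\n");
    domain_crash(v->domain);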

(And I also really need to finish off my domain_crash() patch to force
this everywhere.)

~Andrew
Durrant, Paul Oct. 5, 2020, 2:39 p.m. UTC | #2
> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 05 October 2020 15:30
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Julien Grall <julien@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH] ioreq: cope with server disappearing while I/O is pending
> 
> On 05/10/2020 15:08, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently, in the event of an ioreq server being destroyed while I/O is
> > pending in the attached emulator, it is possible that hvm_wait_for_io() will
> > dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> > shared page after it has been freed.
> 
> It's not legitimate for the shared page to be freed while Xen is still
> using it.
> 
> Furthermore, this patch only covers the most obvious place for the bug
> to manifest.  It doesn't fix them all, as an ioreq server destroy can
> race with an in-progress emulation and still suffer a UAF.
> 
> 
> An extra ref needs holding on the shared page between acquire_resource
> and domain destruction, to cover Xen's use of the page.
> 

Yes, that's true.

> Similarly, I don't think it is legitimate for hvm_ioreq_vcpu to be freed
> while potentially in use.  These need to stick around until domain
> destruction as well, I think.
> 

That would cover the problem with the sv pointer. I guess it would be possible to mark the struct stale and then free it when 'pending' transitions to false.
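
A standalone sketch of that "mark stale, free on completion" idea (field and
function names here are made up for illustration, not from the Xen tree):

    #include <stdbool.h>
    #include <stdlib.h>

    struct ioreq_vcpu_state {
        bool pending;   /* I/O still outstanding in the emulator */
        bool stale;     /* owning ioreq server already destroyed */
    };

    /* Called from ioreq server teardown. */
    static void teardown_vcpu_state(struct ioreq_vcpu_state *sv)
    {
        if ( sv->pending )
            sv->stale = true;   /* defer the free until the I/O drains */
        else
            free(sv);
    }

    /* Called when the outstanding I/O finally completes (or is abandoned). */
    static void complete_vcpu_io(struct ioreq_vcpu_state *sv)
    {
        sv->pending = false;
        if ( sv->stale )
            free(sv);
    }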

> > This will only occur if the emulator (which is necessarily running in a
> > service domain with some degree of privilege) does not complete pending I/O
> > during tear-down and is not directly exploitable by a guest domain.
> >
> > This patch adds a call to get_pending_vcpu() into the condition of the
> > wait_on_xen_event_channel() macro to verify the continued existence of the
> > ioreq server. Should it disappear, the guest domain will be crashed.
> >
> > NOTE: take the opportunity to modify the text on one gdprintk() for
> >       consistency with others.
> 
> I know this is tangential, but all these gdprintk()'s need to change to
> gprintk()'s, so release builds provide at least some hint as to why the
> domain has been crashed.
> 

Yes, that's also a good point.

I guess this patch will probably need to become a series.

  Paul

> (And I also really need to finish off my domain_crash() patch to force
> this everywhere.)
> 
> ~Andrew
Julien Grall Oct. 5, 2020, 2:42 p.m. UTC | #3
Hi Paul,

On 05/10/2020 15:08, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently, in the event of an ioreq server being destroyed while I/O is
> pending in the attached emulator, it is possible that hvm_wait_for_io() will
> dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> shared page after it has been freed.

So the IOREQ code will call domain_pause() before destroying the ioreq server.

A vCPU can only be descheduled in hvm_wait_for_io() from 
wait_on_xen_event_channel(). AFAIK, we would schedule a new vCPU (or 
idle) when this happens.

On x86, the schedule() function will not return after context switch. 
Therefore...

> This will only occur if the emulator (which is necessarily running in a
> service domain with some degree of privilege) does not complete pending I/O
> during tear-down and is not directly exploitable by a guest domain.

... I am not sure how this can happen on x86. Do you mind providing an 
example?

Note that on Arm, the schedule() function will return after context 
switch. So the problem you describe is there from an arch-agnostic PoV.

Cheers,
Durrant, Paul Oct. 5, 2020, 3:12 p.m. UTC | #4
> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 05 October 2020 15:42
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH] ioreq: cope with server disappearing while I/O is pending
> 
> Hi Paul,
> 
> On 05/10/2020 15:08, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently, in the event of an ioreq server being destroyed while I/O is
> > pending in the attached emulator, it is possible that hvm_wait_for_io() will
> > dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> > shared page after it has been freed.
> 
> So the IOREQ code will call domain_pause() before destroying the ioreq server.
> 
> A vCPU can only be descheduled in hvm_wait_for_io() from
> wait_on_xen_event_channel(). AFAIK, we would schedule a new vCPU (or
> idle) when this happens.
> 
> On x86, the schedule() function will not return after context switch.
> Therefore...
> 
> > This will only occur if the emulator (which is necessarily running in a
> > service domain with some degree of privilege) does not complete pending I/O
> > during tear-down and is not directly exploitable by a guest domain.
> 
> ... I am not sure how this can happen on x86. Do you mind providing an
> example?

Maybe I'm missing something, but I can't see anything that would prevent wait_on_xen_event_channel() returning after the ioreq server has been destroyed. The domain is only paused whilst destruction is in progress, but both 'sv' and 'p' will be on-stack and thus, as soon as the domain is unpaused again, could be subject to UAF.
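
To spell the window out (a reconstruction of the sequence under discussion,
not a quotation of the code):

    vCPU in hvm_wait_for_io():               toolstack/emulator:

      sv and p point into the ioreq
      server's allocations
      wait_on_xen_event_channel()
        -> vCPU blocks
                                             destroy the ioreq server:
                                               domain_pause()
                                               ... free sv / shared page ...
                                               domain_unpause()
      vCPU wakes, re-evaluates the wait
      condition, reads p->state              <-- potential use-after-free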

  Paul

> 
> Note that on Arm, the schedule() function will return after context
> switch. So the problem you describe is there from an arch-agnostic PoV.
> 
> Cheers,
> 
> --
> Julien Grall

Patch

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df87f..e8b97cd30c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -115,6 +115,7 @@  bool hvm_io_pending(struct vcpu *v)
 
 static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 {
+    struct vcpu *v = sv->vcpu;
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
     uint64_t data = ~0;
@@ -132,7 +133,7 @@  static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
             gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
                      prev_state, state);
             sv->pending = false;
-            domain_crash(sv->vcpu->domain);
+            domain_crash(v->domain);
             return false; /* bail */
         }
 
@@ -145,23 +146,36 @@  static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
+            /*
+             * NOTE: The ioreq server may have been destroyed whilst the
+             *       vcpu was blocked so re-acquire the pointer to
+             *       hvm_ioreq_vcpu to check this condition.
+             */
+            wait_on_xen_event_channel(
+                sv->ioreq_evtchn,
+                ({ sv = get_pending_vcpu(v, NULL);
+                   state = sv ? p->state : STATE_IOREQ_NONE;
+                   smp_rmb();
+                   state != prev_state; }));
+            if ( !sv )
+            {
+                gdprintk(XENLOG_ERR, "HVM ioreq server has disappeared\n");
+                domain_crash(v->domain);
+                return false; /* bail */
+            }
             continue;
 
         default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state %u\n", state);
             sv->pending = false;
-            domain_crash(sv->vcpu->domain);
+            domain_crash(v->domain);
             return false; /* bail */
         }
 
         break;
     }
 
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    p = &v->arch.hvm.hvm_io.io_req;
     if ( hvm_ioreq_needs_completion(p) )
         p->data = data;