
[1/3] xen: fix quad word bufioreq handling

Message ID 58356E480200007800121296@prv-mh.provo.novell.com (mailing list archive)
State New, archived

Commit Message

Jan Beulich Nov. 23, 2016, 9:24 a.m. UTC
We should not consume the second slot if it didn't get written yet.
Normal writers - i.e. Xen - would not update write_pointer between the
two writes, but the page may get fiddled with by the guest itself, and
we're better off entering an infinite loop in that case.

Reported-by: yanghongke <yanghongke@huawei.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: Alternatively we could call e.g. hw_error() instead.

Comments

Paul Durrant Nov. 23, 2016, 9:48 a.m. UTC | #1
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 23 November 2016 09:24
> To: qemu-devel@nongnu.org
> Cc: Anthony Perard <anthony.perard@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> devel <xen-devel@lists.xenproject.org>
> Subject: [PATCH 1/3] xen: fix quad word bufioreq handling
> 
> We should not consume the second slot if it didn't get written yet.
> Normal writers - i.e. Xen - would not update write_pointer between the
> two writes, but the page may get fiddled with by the guest itself, and
> we're better off entering an infinite loop in that case.
> 

Xen would never put QEMU in this situation, and the guest can't actually modify the page whilst it's in use, since activation of the IOREQ server removes the page from the guest's p2m, so the premise of the patch is not correct.

  Paul

> Reported-by: yanghongke <yanghongke@huawei.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: Alternatively we could call e.g. hw_error() instead.
> 
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -1021,6 +1021,9 @@ static int handle_buffered_iopage(XenIOS
>          xen_rmb();
>          qw = (req.size == 8);
>          if (qw) {
> +            if (rdptr + 1 == wrptr) {
> +                break;
> +            }
>              buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
>                                             IOREQ_BUFFER_SLOT_NUM];
>              req.data |= ((uint64_t)buf_req->data) << 32;
> 
>
Jan Beulich Nov. 23, 2016, 10:36 a.m. UTC | #2
>>> On 23.11.16 at 10:48, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 23 November 2016 09:24
>> 
>> We should not consume the second slot if it didn't get written yet.
>> Normal writers - i.e. Xen - would not update write_pointer between the
>> two writes, but the page may get fiddled with by the guest itself, and
>> we're better off entering an infinite loop in that case.
>> 
> 
> Xen would never put QEMU in this situation and the guest can't actually 
> modify the page whilst it's in use, since activation of the IOREQ server 
> removes the page from the guest's p2m so the premise of the patch is not 
> correct.

Is that the case even for pre-ioreq-server Xen versions? The issue
here was reported together with what became XSA-197, and it's
not been assigned its own XSA just because there are other ways
for a guest to place high load on its qemu process (and there are
ways to deal with such high load situations).

Jan
Paul Durrant Nov. 23, 2016, 10:45 a.m. UTC | #3
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 23 November 2016 10:36
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Anthony Perard <anthony.perard@citrix.com>; Stefano Stabellini
> <sstabellini@kernel.org>; xen-devel <xen-devel@lists.xenproject.org>;
> qemu-devel@nongnu.org
> Subject: RE: [PATCH 1/3] xen: fix quad word bufioreq handling
> 
> >>> On 23.11.16 at 10:48, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 23 November 2016 09:24
> >>
> >> We should not consume the second slot if it didn't get written yet.
> >> Normal writers - i.e. Xen - would not update write_pointer between the
> >> two writes, but the page may get fiddled with by the guest itself, and
> >> we're better off entering an infinite loop in that case.
> >>
> >
> > Xen would never put QEMU in this situation and the guest can't actually
> > modify the page whilst it's in use, since activation of the IOREQ server
> > removes the page from the guest's p2m so the premise of the patch is not
> > correct.
> 
> Is that the case even for pre-ioreq-server Xen versions? The issue
> here was reported together with what became XSA-197, and it's
> not been assigned its own XSA just because there are other ways
> for a guest to place high load on its qemu process (and there are
> ways to deal with such high load situations).
> 

No, if QEMU is using a default ioreq server (i.e. the legacy way of doing things) then it's vulnerable to the guest messing with the rings, and I'd forgotten that migrated-in guests from old QEMUs also end up using the default server, so I guess this is a worthwhile check to make... although maybe it's best to just bail if the check fails, since it would indicate a malicious guest.

  Paul

> Jan
Jan Beulich Nov. 23, 2016, 11:28 a.m. UTC | #4
>>> On 23.11.16 at 11:45, <Paul.Durrant@citrix.com> wrote:
> No, if QEMU is using a default ioreq server (i.e. the legacy way of doing 
> things) then it's vulnerable to the guest messing with the rings and I'd 
> forgotten that migrated-in guests from old QEMUs also end up using the default 
> server, so I guess this is a worthwhile check to make... although maybe it's 
> best to just bail if the check fails, since it would indicate a malicious 
> guest.

Okay, that's basically the TBD note I have in the patch; I'll wait for
at least one of the qemu maintainers to voice their preference.

Jan
Stefano Stabellini Nov. 23, 2016, 6:01 p.m. UTC | #5
On Wed, 23 Nov 2016, Jan Beulich wrote:
> >>> On 23.11.16 at 11:45, <Paul.Durrant@citrix.com> wrote:
> > No, if QEMU is using a default ioreq server (i.e. the legacy way of doing 
> > things) then it's vulnerable to the guest messing with the rings and I'd 
> > forgotten that migrated-in guests from old QEMUs also end up using the default 
> > server, so I guess this is a worthwhile check to make... although maybe it's 
> > best to just bail if the check fails, since it would indicate a malicious 
> > guest.
> 
> Okay, that's basically the TBD note I have in the patch; I'll wait for
> at least one of the qemu maintainers to voice their preference.
 
I think we should just print an error and destroy_hvm_domain(false) or
hw_error if the check fails.

Patch

--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -1021,6 +1021,9 @@  static int handle_buffered_iopage(XenIOS
         xen_rmb();
         qw = (req.size == 8);
         if (qw) {
+            if (rdptr + 1 == wrptr) {
+                break;
+            }
             buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
                                            IOREQ_BUFFER_SLOT_NUM];
             req.data |= ((uint64_t)buf_req->data) << 32;