[1/1] io/channel-websock: handle continuous reads without any data

Message ID 1514453778-5363-1-git-send-email-den@openvz.org (mailing list archive)
State New, archived

Commit Message

Denis V. Lunev Dec. 28, 2017, 9:36 a.m. UTC
From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>

According to the current implementation of the websocket protocol in QEMU,
qio_channel_websock_handshake_io tries to read the handshake from the
channel to start communication over a TCP socket. But this subroutine
doesn't cover the scenario where the socket is closed during the handshake.
Therefore, if G_IO_IN is caught and qio_channel_websock_handshake_read
returns zero, an error has to be set and the connection has to be closed.

Such behaviour causes 100% CPU load in the main QEMU loop, because the
main loop poll keeps receiving and handling G_IO_IN events from the
websocket.

Steps to reproduce the 100% CPU load:
1) start qemu with the simplest configuration
    $ qemu -vnc [::1]:1,websocket=7500
2) open any vnc listener (which doesn't follow the websocket protocol)
    $ vncviewer :7500
3) kill the listener
4) the qemu main thread eats 100% CPU

Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Daniel P. Berrange <berrange@redhat.com>
---
 io/channel-websock.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
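
For background on the failure mode: a GLib watch callback that returns
TRUE without draining (or giving up on) the G_IO_IN condition is simply
re-dispatched on the next main loop iteration. Once the peer closes the
socket, poll() reports G_IO_IN forever while every read returns 0 bytes,
so the loop spins. A minimal standalone sketch of the pattern (plain
GLib, not QEMU code; the callback name is illustrative):

    #include <glib.h>

    /* Registered via g_io_add_watch(chan, G_IO_IN, on_readable, NULL). */
    static gboolean on_readable(GIOChannel *chan, GIOCondition cond,
                                gpointer data)
    {
        gchar buf[4096];
        gsize nread = 0;

        g_io_channel_read_chars(chan, buf, sizeof(buf), &nread, NULL);
        if (nread == 0) {
            /* EOF: poll() will keep flagging G_IO_IN, so returning TRUE
             * here reproduces the 100% CPU spin described above, while
             * returning FALSE removes the source and stops it. */
            return FALSE;
        }
        return TRUE; /* more data may follow; keep the watch armed */
    }

The pre-patch handshake code takes the "return TRUE on 0 new bytes" path,
which is what pins the main thread.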

Comments

Edgar Kaziakhmedov Jan. 10, 2018, 9:55 a.m. UTC | #1
ping


On 12/28/2017 12:36 PM, Denis V. Lunev wrote:
> From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>
> According to the current implementation of the websocket protocol in QEMU,
> qio_channel_websock_handshake_io tries to read the handshake from the
> channel to start communication over a TCP socket. But this subroutine
> doesn't cover the scenario where the socket is closed during the handshake.
> Therefore, if G_IO_IN is caught and qio_channel_websock_handshake_read
> returns zero, an error has to be set and the connection has to be closed.
>
> Such behaviour causes 100% CPU load in the main QEMU loop, because the
> main loop poll keeps receiving and handling G_IO_IN events from the
> websocket.
>
> Steps to reproduce the 100% CPU load:
> 1) start qemu with the simplest configuration
>      $ qemu -vnc [::1]:1,websocket=7500
> 2) open any vnc listener (which doesn't follow the websocket protocol)
>      $ vncviewer :7500
> 3) kill the listener
> 4) the qemu main thread eats 100% CPU
>
> Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Daniel P. Berrange <berrange@redhat.com>
> ---
>   io/channel-websock.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/io/channel-websock.c b/io/channel-websock.c
> index 87ebdeb..71f046a 100644
> --- a/io/channel-websock.c
> +++ b/io/channel-websock.c
> @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
>           return FALSE;
>       }
>       if (ret == 0) {
> -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
> -        /* need more data still */
> -        return TRUE;
> +        /* If G_IO_IN happened and there is no data to read, close connection */
> +        error_setg(&err, "connection was closed");
> +        qio_task_set_error(task, err);
> +        qio_task_complete(task);
> +        return FALSE;
>       }
>   
>       if (err) {
Daniel P. Berrangé Jan. 10, 2018, 12:33 p.m. UTC | #2
On Thu, Dec 28, 2017 at 12:36:18PM +0300, Denis V. Lunev wrote:
> From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> 
> According to the current implementation of the websocket protocol in QEMU,
> qio_channel_websock_handshake_io tries to read the handshake from the
> channel to start communication over a TCP socket. But this subroutine
> doesn't cover the scenario where the socket is closed during the handshake.
> Therefore, if G_IO_IN is caught and qio_channel_websock_handshake_read
> returns zero, an error has to be set and the connection has to be closed.
> 
> Such behaviour causes 100% CPU load in the main QEMU loop, because the
> main loop poll keeps receiving and handling G_IO_IN events from the
> websocket.
> 
> Steps to reproduce the 100% CPU load:
> 1) start qemu with the simplest configuration
>     $ qemu -vnc [::1]:1,websocket=7500
> 2) open any vnc listener (which doesn't follow the websocket protocol)
>     $ vncviewer :7500
> 3) kill the listener
> 4) the qemu main thread eats 100% CPU
> 
> Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Daniel P. Berrange <berrange@redhat.com>
> ---
>  io/channel-websock.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/io/channel-websock.c b/io/channel-websock.c
> index 87ebdeb..71f046a 100644
> --- a/io/channel-websock.c
> +++ b/io/channel-websock.c
> @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
>          return FALSE;
>      }
>      if (ret == 0) {
> -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
> -        /* need more data still */
> -        return TRUE;
> +        /* If G_IO_IN happened and there is no data to read, close connection */
> +        error_setg(&err, "connection was closed");
> +        qio_task_set_error(task, err);
> +        qio_task_complete(task);
> +        return FALSE;

This is bogus - 'ret == 0' from qio_channel_websock_handshake_read()
does *not* imply end of file.

It says we have read some data, but not yet seen the end of the HTTP
headers.

If you want to detect EOF, you need to check the qio_channel_read()
function return value directly in that method.

Regards,
Daniel
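
To restate the return-value contract Daniel is describing (inferred from
the code under discussion, so treat it as a sketch rather than reference
documentation):

    /*
     * qio_channel_websock_handshake_read() return values, pre-patch:
     *
     *   -1  I/O error on the underlying channel (errp set)
     *    1  HTTP headers complete, or request too large (an error
     *       response has been queued)
     *    0  headers incomplete so far: caller should wait for more data
     *
     * EOF never reaches the caller: when qio_channel_read() returns 0
     * inside handshake_read(), the headers are merely "still incomplete",
     * the function returns 0, and the G_IO_IN watch stays armed, giving
     * the busy loop reported in this patch.
     */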
Edgar Kaziakhmedov Jan. 10, 2018, 1:31 p.m. UTC | #3
On 01/10/2018 03:33 PM, Daniel P. Berrange wrote:
> On Thu, Dec 28, 2017 at 12:36:18PM +0300, Denis V. Lunev wrote:
>> From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>>
>> According to the current implementation of the websocket protocol in QEMU,
>> qio_channel_websock_handshake_io tries to read the handshake from the
>> channel to start communication over a TCP socket. But this subroutine
>> doesn't cover the scenario where the socket is closed during the handshake.
>> Therefore, if G_IO_IN is caught and qio_channel_websock_handshake_read
>> returns zero, an error has to be set and the connection has to be closed.
>>
>> Such behaviour causes 100% CPU load in the main QEMU loop, because the
>> main loop poll keeps receiving and handling G_IO_IN events from the
>> websocket.
>>
>> Steps to reproduce the 100% CPU load:
>> 1) start qemu with the simplest configuration
>>      $ qemu -vnc [::1]:1,websocket=7500
>> 2) open any vnc listener (which doesn't follow the websocket protocol)
>>      $ vncviewer :7500
>> 3) kill the listener
>> 4) the qemu main thread eats 100% CPU
>>
>> Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>> CC: Daniel P. Berrange <berrange@redhat.com>
>> ---
>>   io/channel-websock.c | 8 +++++---
>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/io/channel-websock.c b/io/channel-websock.c
>> index 87ebdeb..71f046a 100644
>> --- a/io/channel-websock.c
>> +++ b/io/channel-websock.c
>> @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
>>           return FALSE;
>>       }
>>       if (ret == 0) {
>> -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
>> -        /* need more data still */
>> -        return TRUE;
>> +        /* If G_IO_IN happened and there is no data to read, close connection */
>> +        error_setg(&err, "connection was closed");
>> +        qio_task_set_error(task, err);
>> +        qio_task_complete(task);
>> +        return FALSE;
> This is bogus - 'ret == 0' from qio_channel_websock_handshake_read()
> does *not* imply end of file.
>
> It says we have read some data, but not yet seen the end of the HTTP
> headers.
Yes, good catch.
>
> If you want to detect EOF, you need to check the qio_channel_read()
> function return value directly in that method.
Directly in the qio_channel_websock_handshake_read method, I suppose,
i.e. where qio_channel_read is called in that routine.
>
> Regards,
> Daniel
Daniel P. Berrangé Jan. 10, 2018, 1:34 p.m. UTC | #4
On Wed, Jan 10, 2018 at 04:31:21PM +0300, Edgar Kaziakhmedov wrote:
> 
> 
> On 01/10/2018 03:33 PM, Daniel P. Berrange wrote:
> > On Thu, Dec 28, 2017 at 12:36:18PM +0300, Denis V. Lunev wrote:
> > > From: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> > > 
> > > According to the current implementation of the websocket protocol in QEMU,
> > > qio_channel_websock_handshake_io tries to read the handshake from the
> > > channel to start communication over a TCP socket. But this subroutine
> > > doesn't cover the scenario where the socket is closed during the handshake.
> > > Therefore, if G_IO_IN is caught and qio_channel_websock_handshake_read
> > > returns zero, an error has to be set and the connection has to be closed.
> > > 
> > > Such behaviour causes 100% CPU load in the main QEMU loop, because the
> > > main loop poll keeps receiving and handling G_IO_IN events from the
> > > websocket.
> > > 
> > > Steps to reproduce the 100% CPU load:
> > > 1) start qemu with the simplest configuration
> > >      $ qemu -vnc [::1]:1,websocket=7500
> > > 2) open any vnc listener (which doesn't follow the websocket protocol)
> > >      $ vncviewer :7500
> > > 3) kill the listener
> > > 4) the qemu main thread eats 100% CPU
> > > 
> > > Signed-off-by: Edgar Kaziakhmedov <edgar.kaziakhmedov@virtuozzo.com>
> > > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > > CC: Daniel P. Berrange <berrange@redhat.com>
> > > ---
> > >   io/channel-websock.c | 8 +++++---
> > >   1 file changed, 5 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/io/channel-websock.c b/io/channel-websock.c
> > > index 87ebdeb..71f046a 100644
> > > --- a/io/channel-websock.c
> > > +++ b/io/channel-websock.c
> > > @@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
> > >           return FALSE;
> > >       }
> > >       if (ret == 0) {
> > > -        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
> > > -        /* need more data still */
> > > -        return TRUE;
> > > +        /* If G_IO_IN happened and there is no data to read, close connection */
> > > +        error_setg(&err, "connection was closed");
> > > +        qio_task_set_error(task, err);
> > > +        qio_task_complete(task);
> > > +        return FALSE;
> > This is bogus - 'ret == 0' from qio_channel_websock_handshake_read()
> > does *not* imply end of file.
> > 
> > It says we have read some data, but not yet seen the end of the HTTP
> > headers.
> Yes, good catch.
> > 
> > If you want to detect EOF, you need to check the qio_channel_read()
> > function return value directly in that method.
> Directly in the qio_channel_websock_handshake_read method, I suppose,
> i.e. where qio_channel_read is called in that routine.

In this code:


    if (!handshake_end) {
        if (ioc->encinput.offset >= 4096) {
            qio_channel_websock_handshake_send_res_err(
                ioc, QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_TOO_LARGE);
            error_setg(errp,
                       "End of headers not found in first 4096 bytes");
            return 1;
        } else {
            return 0;
        }
    }


An extra "else if" branch needs to be added:

        } else if (ret == 0) {
            qio_channel_websock_handshake_send_res_err(
                ioc, QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_EOF);
            error_setg(errp,
                       "End of headers not found before connection closed");
            return -1;
        }


Regards,
Daniel
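
Folded into the function quoted above, the flow would look roughly like
this (a sketch assuming the surrounding code matches that snippet;
QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_EOF comes from Daniel's proposal and
would need to be introduced alongside it):

    /* Tail of qio_channel_websock_handshake_read(): ret holds the value
     * qio_channel_read() returned for this invocation, where 0 means
     * EOF on the underlying channel. */
    handshake_end = g_strstr_len((char *)ioc->encinput.buffer,
                                 ioc->encinput.offset,
                                 QIO_CHANNEL_WEBSOCK_HANDSHAKE_END);
    if (!handshake_end) {
        if (ioc->encinput.offset >= 4096) {
            qio_channel_websock_handshake_send_res_err(
                ioc, QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_TOO_LARGE);
            error_setg(errp,
                       "End of headers not found in first 4096 bytes");
            return 1;
        } else if (ret == 0) {
            /* Peer closed before completing the handshake: fail with an
             * error instead of waiting for further G_IO_IN events. */
            qio_channel_websock_handshake_send_res_err(
                ioc, QIO_CHANNEL_WEBSOCK_HANDSHAKE_RES_EOF);
            error_setg(errp,
                       "End of headers not found before connection closed");
            return -1;
        } else {
            return 0; /* genuinely need more data: keep the watch */
        }
    }

With a -1 return here, qio_channel_websock_handshake_io() takes its
existing ret < 0 path (the "return FALSE" context at the top of the hunk),
completing the task with the error, so the caller's ret == 0 handling can
stay as it is. Sending the error response is arguably moot once the peer
has gone away, so an implementation might drop that call.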

Patch

diff --git a/io/channel-websock.c b/io/channel-websock.c
index 87ebdeb..71f046a 100644
--- a/io/channel-websock.c
+++ b/io/channel-websock.c
@@ -581,9 +581,11 @@ static gboolean qio_channel_websock_handshake_io(QIOChannel *ioc,
         return FALSE;
     }
     if (ret == 0) {
-        trace_qio_channel_websock_handshake_pending(ioc, G_IO_IN);
-        /* need more data still */
-        return TRUE;
+        /* If G_IO_IN happened and there is no data to read, close connection */
+        error_setg(&err, "connection was closed");
+        qio_task_set_error(task, err);
+        qio_task_complete(task);
+        return FALSE;
     }
 
     if (err) {