[v4,3/7] nbd/server: CVE-2024-7409: Change default max-connections to 100

Message ID 20240807174943.771624-12-eblake@redhat.com (mailing list archive)
State New, archived
Series CVE-2024-7409

Commit Message

Eric Blake Aug. 7, 2024, 5:43 p.m. UTC
Allowing an unlimited number of clients to any web service is a recipe
for a rudimentary denial of service attack: the client merely needs to
open lots of sockets without closing them, until qemu no longer has
any more fds available to allocate.

For qemu-nbd, we default to allowing only 1 connection unless more are
explicitly asked for (-e or --shared); this was historically picked as
a nice default (without an explicit -t, a non-persistent qemu-nbd goes
away after a client disconnects, without needing any additional
follow-up commands), and we are not going to change that interface now
(besides, someday we want to point people towards qemu-storage-daemon
instead of qemu-nbd).
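
(Illustration only, not part of this patch, with disk.qcow2 as a placeholder
image: opting in to more than the default single client with qemu-nbd looks
roughly like this.)

  # stay running across client disconnects and allow up to 8 concurrent clients
  qemu-nbd --persistent --shared=8 --format=qcow2 disk.qcow2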

But for qemu proper, the QMP nbd-server-start command has historically
had a default of unlimited number of connections, in part because
unlike qemu-nbd it is inherently persistent.  Allowing multiple client
sockets is particularly useful for clients that can take advantage of
MULTI_CONN (creating parallel sockets to increase throughput),
although known clients that do so (such as libnbd's nbdcopy) typically
use only 8 or 16 connections (the benefits of scaling diminish once
more sockets are competing for kernel attention).  Picking a number
large enough for typical use cases, but not unlimited, makes it
slightly harder for a malicious client to perform a denial of service
merely by opening lots of connections without progressing through the
handshake.
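
(Again for illustration only: the host, port, and cap of 16 below are
arbitrary placeholders. A management application that wants a specific limit
rather than relying on the new default can keep passing max-connections
explicitly.)

  { "execute": "nbd-server-start",
    "arguments": { "addr": { "type": "inet",
                             "data": { "host": "127.0.0.1", "port": "10809" } },
                   "max-connections": 16 } }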

This change does not eliminate CVE-2024-7409 on its own, but reduces
the chance for fd exhaustion or unlimited memory usage as an attack
surface.  On the other hand, by itself, it makes it more obvious that
with a finite limit, we have the problem of an unauthenticated client
holding 100 fds open as a way to block out a legitimate client from
being able to connect; thus, later patches will further add timeouts
to reject clients that are not making progress.

Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
---
 qapi/block-export.json         | 4 ++--
 include/block/nbd.h            | 7 +++++++
 block/monitor/block-hmp-cmds.c | 3 ++-
 blockdev-nbd.c                 | 8 ++++++++
 4 files changed, 19 insertions(+), 3 deletions(-)

Comments

Daniel P. Berrangé Aug. 7, 2024, 6:24 p.m. UTC | #1
On Wed, Aug 07, 2024 at 12:43:29PM -0500, Eric Blake wrote:
> Allowing an unlimited number of clients to any web service is a recipe
> for a rudimentary denial of service attack: the client merely needs to
> open lots of sockets without closing them, until qemu no longer has
> any more fds available to allocate.
> 
> For qemu-nbd, we default to allowing only 1 connection unless more are
> explicitly asked for (-e or --shared); this was historically picked as
> a nice default (without an explicit -t, a non-persistent qemu-nbd goes
> away after a client disconnects, without needing any additional
> follow-up commands), and we are not going to change that interface now
> (besides, someday we want to point people towards qemu-storage-daemon
> instead of qemu-nbd).
> 
> But for qemu proper, the QMP nbd-server-start command has historically
> had a default of unlimited number of connections, in part because
> unlike qemu-nbd it is inherently persistent.  Allowing multiple client
> sockets is particularly useful for clients that can take advantage of
> MULTI_CONN (creating parallel sockets to increase throughput),
> although known clients that do so (such as libnbd's nbdcopy) typically
> use only 8 or 16 connections (the benefits of scaling diminish once
> more sockets are competing for kernel attention).  Picking a number
> large enough for typical use cases, but not unlimited, makes it
> slightly harder for a malicious client to perform a denial of service
> merely by opening lots of connections without progressing through the
> handshake.
> 
> This change does not eliminate CVE-2024-7409 on its own, but reduces
> the chance for fd exhaustion or unlimited memory usage as an attack
> surface.  On the other hand, by itself, it makes it more obvious that
> with a finite limit, we have the problem of an unauthenticated client
> holding 100 fds open as a way to block out a legitimate client from
> being able to connect; thus, later patches will further add timeouts
> to reject clients that are not making progress.
> 
> Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
>  qapi/block-export.json         | 4 ++--
>  include/block/nbd.h            | 7 +++++++
>  block/monitor/block-hmp-cmds.c | 3 ++-
>  blockdev-nbd.c                 | 8 ++++++++
>  4 files changed, 19 insertions(+), 3 deletions(-)
> 
> diff --git a/qapi/block-export.json b/qapi/block-export.json
> index 665d5fd0262..ce33fe378df 100644
> --- a/qapi/block-export.json
> +++ b/qapi/block-export.json
> @@ -28,7 +28,7 @@
>  # @max-connections: The maximum number of connections to allow at the
>  #     same time, 0 for unlimited.  Setting this to 1 also stops the
>  #     server from advertising multiple client support (since 5.2;
> -#     default: 0)
> +#     default: 100)
>  #
>  # Since: 4.2
>  ##
> @@ -63,7 +63,7 @@
>  # @max-connections: The maximum number of connections to allow at the
>  #     same time, 0 for unlimited.  Setting this to 1 also stops the
>  #     server from advertising multiple client support (since 5.2;
> -#     default: 0).
> +#     default: 100).
>  #
>  # Errors:
>  #     - if the server is already running

This is considered a backwards compatibility break.
An intentional one.

Our justification is that we believe it is the least worst option
to mitigate the DoS vector. Let's explore the reasoning for that
belief...

An alternative would be to deprecate the absence of 'max-connections',
and after the deprecation period, make it into a mandatory
parameter, forcing apps to make a decision. This would be strictly
compliant with our QAPI change policy.

How does that differ in impact from changing the defaults...

  * Changed default
     - Small risk of breaking app if needing > 100 concurrent conns
     - Immediately closes the DoS hole for all apps using QEMU

  * Deprecation & eventual change to mandatory
     - Zero risk of breaking apps today
     - Forces all apps to be changed to pass max-connections
       in 3 releases time
     - Does NOT close the DoS hole until the 3rd future release
       from now.

If we took the deprecation route, arguably any app which isn't
already setting max-connections would need to have a CVE filed
against it *today*, and thus effectively apps would be forced
into immediately setting max-connections, despite our deprecation
grace period supposedly giving them 3 releases to adjust.

If we took the changed default route and broke an app, it would
again have to be changed to set max-connections to a value that
satisfies its use case.


So....

If we take the deprecation route, we create work for /all/ app
developers using NBD to update their code now to fix the CVE.

If we take the changed defaults route, and our 100 value choice
is sufficient, then no apps need changing.

If we take the changed defaults route, and our 100 value choice
is NOT sufficient, then most apps still don't need changing, but
perhaps 1 or 2 do need changing.

The unpleasant bit is that if 100 is insufficient, an app
maintainer or user might not become aware of this until they
have been in production for 12 months and suddenly hit a
rare load spike.


Overall though, changing the defaults still looks likely to
be the least disruptive option for fixing the DoS, providing
our choice of 100 is a good one.

I think system emulators are likely OK given common use
cases for NBD in context of a running VM (migration and
backup related).

I've got a nagging concern someone could be doing something
adventurous with QSD though. eg using it to serve content
(perhaps readonly) to a huge pool of clients?

Probably that's just a risk we have to take, as any app
relying on the max-connections default is vulnerable.

There are no good answers AFAICT. Only bad answers. This
one feels least bad to me.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
Eric Blake Aug. 7, 2024, 9:23 p.m. UTC | #2
On Wed, Aug 07, 2024 at 07:24:56PM GMT, Daniel P. Berrangé wrote:
> On Wed, Aug 07, 2024 at 12:43:29PM -0500, Eric Blake wrote:
> > Allowing an unlimited number of clients to any web service is a recipe
> > for a rudimentary denial of service attack: the client merely needs to
> > open lots of sockets without closing them, until qemu no longer has
> > any more fds available to allocate.
> > 
> > For qemu-nbd, we default to allowing only 1 connection unless more are
> > explicitly asked for (-e or --shared); this was historically picked as
> > a nice default (without an explicit -t, a non-persistent qemu-nbd goes
> > away after a client disconnects, without needing any additional
> > follow-up commands), and we are not going to change that interface now
> > (besides, someday we want to point people towards qemu-storage-daemon
> > instead of qemu-nbd).
> > 
> > But for qemu proper, the QMP nbd-server-start command has historically
> > had a default of unlimited number of connections, in part because
> > unlike qemu-nbd it is inherently persistent.  Allowing multiple client
> > sockets is particularly useful for clients that can take advantage of
> > MULTI_CONN (creating parallel sockets to increase throughput),
> > although known clients that do so (such as libnbd's nbdcopy) typically
> > use only 8 or 16 connections (the benefits of scaling diminish once
> > more sockets are competing for kernel attention).  Picking a number
> > large enough for typical use cases, but not unlimited, makes it
> > slightly harder for a malicious client to perform a denial of service
> > merely by opening lots of connections without progressing through the
> > handshake.
> > 
> > This change does not eliminate CVE-2024-7409 on its own, but reduces
> > the chance for fd exhaustion or unlimited memory usage as an attack
> > surface.  On the other hand, by itself, it makes it more obvious that
> > with a finite limit, we have the problem of an unauthenticated client
> > holding 100 fds open as a way to block out a legitimate client from
> > being able to connect; thus, later patches will further add timeouts
> > to reject clients that are not making progress.
> > 
> > Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
> > Signed-off-by: Eric Blake <eblake@redhat.com>
> > ---
> >  qapi/block-export.json         | 4 ++--
> >  include/block/nbd.h            | 7 +++++++
> >  block/monitor/block-hmp-cmds.c | 3 ++-
> >  blockdev-nbd.c                 | 8 ++++++++
> >  4 files changed, 19 insertions(+), 3 deletions(-)
> > 
> > diff --git a/qapi/block-export.json b/qapi/block-export.json
> > index 665d5fd0262..ce33fe378df 100644
> > --- a/qapi/block-export.json
> > +++ b/qapi/block-export.json
> > @@ -28,7 +28,7 @@
> >  # @max-connections: The maximum number of connections to allow at the
> >  #     same time, 0 for unlimited.  Setting this to 1 also stops the
> >  #     server from advertising multiple client support (since 5.2;
> > -#     default: 0)
> > +#     default: 100)
> >  #
> >  # Since: 4.2
> >  ##
> > @@ -63,7 +63,7 @@
> >  # @max-connections: The maximum number of connections to allow at the
> >  #     same time, 0 for unlimited.  Setting this to 1 also stops the
> >  #     server from advertising multiple client support (since 5.2;
> > -#     default: 0).
> > +#     default: 100).
> >  #
> >  # Errors:
> >  #     - if the server is already running
> 
> This is considered a backwards compatibility break.
> An intentional one.
> 
> Our justification is that we believe it is the least worst option
> to mitigate the DoS vector. Let's explore the reasoning for that
> belief...

I'll probably crib quite a bit of this analysis into the commit message.

> 
> An alternative would be to deprecate the absence of 'max-connections',
> and after the deprecation period, make it in into a mandatory
> parameter, forcing apps to make a decision. This would be strictly
> compliant with our QAPI change policy.
> 
> How does that differ in impact from changing the defaults...
> 
>   * Changed default
>      - Small risk of breaking app if needing > 100 concurrent conns
>      - Immediately closes the DoS hole for all apps using QEMU

Reduces, rather than closes.  QEMU is no longer vulnerable to running
out of fds (likely) or memory (less likely, since fds are more
limited); but once there is a cap, the malicious client can still tie
up 100 fds and block any other connections from proceeding.
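
Purely to illustrate that residual hole (and assuming a bash shell with
/dev/tcp support plus a server listening on $HOST:10809, both placeholders
here), something as dumb as the following parks the whole quota without
ever progressing a handshake:

  # each redirection opens one TCP socket and holds it idle for an hour
  for i in $(seq 100); do
      sleep 3600 > "/dev/tcp/$HOST/10809" &
  done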

Presumably, though, in the bigger picture, if you have difficulties
connecting your legitimate client (because a client has tied up max
connections), your investigation will probably lead you to block the
bad client by firewall; so our reduced DoS hole is easier to recover
from than the original DoS of no resources.
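
For instance (sketch only, assuming an iptables-based host firewall, with
192.0.2.1 as the offending client and 10809 as the NBD port):

  iptables -A INPUT -s 192.0.2.1 -p tcp --dport 10809 -j REJECT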

> 
>   * Deprecation & eventual change to mandatory
>      - Zero risk of breaking apps today
>      - Forces all apps to be changed to pass max-connections
>        in 3 releases time
>      - Does NOT close the DoS hole until the 3rd future release
>        from now.
> 
> If we took the deprecation route, arguably any app which isn't
> already setting max-connections would need to have a CVE filed
> against it *today*, and thus effectively apps would be forced
> into immediately setting max-connections, despite our deprecation
> grace period supposedly giving them 3 releases to adjust.

libvirt is in this boat: a grep of libvirt.git shows it is not yet
passing max-connections (of any value), in part because until only
recently, it was still targetting older qemu that did not support
max-connections at all (and it's easier to skip a parameter globally
than to implement a feature detection bit to parse the QAPI
introspection to see which releases support it).
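
Roughly, and only as a sketch of what such feature detection would involve
(not actual libvirt code), the probe looks like:

  { "execute": "query-qmp-schema" }
  # in the returned array, find
  #   { "name": "nbd-server-start", "meta-type": "command", "arg-type": "<T>" }
  # then locate the SchemaInfo whose "name" is "<T>" and check whether its
  # "members" list has an entry named "max-connections"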

> 
> If we took the changed default route and broke an app, it would
> again have to be changed to set max-connections to a value that
> satisfies its use case.
> 
> 
> So....
> 
> If we take the deprecation route, we create work for /all/ app
> developers using NBD to update their code now to fix the CVE.
> 
> If we take the changed defaults route, and our 100 value choice
> is sufficient, then no apps need changing.
> 
> If we take the changed defaults route, and our 100 value choice
> is NOT sufficient, then most apps still don't need changing, but
> perhaps 1 or 2 do need changing.

Are we in agreement that 100 is generally sufficient?  Again, my
benchmark here is nbdcopy, which defaults to 4 sockets when the server
advertises MULTI_CONN; I don't know if Rich has tried testing it with
more than 16 connections, but his benchmarks also show that doubling
connections has diminishing gains.
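
(For reference, and assuming a reachable export at nbd://example/export,
pushing nbdcopy past its default is just:)

  $ nbdcopy --connections=16 nbd://example/export local.img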

I also know that Change Block Tracking takes advantage of multiple
sockets (one reading the qemu:dirty-bitmap:XXX data, the other(s)
using that data to drive which portions of the disk they read); but
again, I couldn't quickly find any CBT implementation that was using
100 or more connections.
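
(As a concrete sketch of that pattern, where the bitmap name, export URI,
and output are placeholders, one connection can pull the dirty map with
libnbd's nbdinfo while other connections read the data:)

  $ nbdinfo --map=qemu:dirty-bitmap:bitmap0 nbd://example/export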

> 
> The unpleasant bit is that if 100 is insufficient, an app
> maintainer or user might not become aware of this until they
> have been in production for 12 months and suddenly hit a
> rare load spike.
> 
> 
> Overall though, changing the defaults still looks likely to
> be the least disruptive option for fixing the DoS, providing
> our choice of 100 is a good one.
> 
> I think system emulators are likely OK given common use
> cases for NBD in context of a running VM (migration and
> backup related).
> 
> I've got a nagging concern someone could be doing something
> adventurous with QSD though. eg using it to serve content
> (perhaps readonly) to a huge pool of clients?

Yeah, I could see how a large fanout of disk-images all backed by a
single read-only base image served over NBD could result in lots of
legitimate client connections.

In the KubeSAN project, we are achieving multiple-node read-write
access of a shared VG by having secondary nodes connect via nbd.ko
(configured for 8 sockets per client node) to a qemu-nbd server on the
primary node - but a quick grep of that source shows that it has an
explicit 'qemu-nbd --shared=0' for unlimited, so that one is
unimpacted by this change (not to mention this change is to qemu QMP
nbd-server-start, not to qemu-nbd --shared=N).

> 
> Probably that's just a risk we have to take, as any app
> relying on the max-connections default is vulnerable.
> 
> There are no good answers AFAICT. Only bad answers. This
> one feels least bad to me.
> 
> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
> 
> 
> With regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>

Patch

diff --git a/qapi/block-export.json b/qapi/block-export.json
index 665d5fd0262..ce33fe378df 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -28,7 +28,7 @@ 
 # @max-connections: The maximum number of connections to allow at the
 #     same time, 0 for unlimited.  Setting this to 1 also stops the
 #     server from advertising multiple client support (since 5.2;
-#     default: 0)
+#     default: 100)
 #
 # Since: 4.2
 ##
@@ -63,7 +63,7 @@ 
 # @max-connections: The maximum number of connections to allow at the
 #     same time, 0 for unlimited.  Setting this to 1 also stops the
 #     server from advertising multiple client support (since 5.2;
-#     default: 0).
+#     default: 100).
 #
 # Errors:
 #     - if the server is already running
diff --git a/include/block/nbd.h b/include/block/nbd.h
index 5fe14786414..fd5044359dc 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -39,6 +39,13 @@  extern const BlockExportDriver blk_exp_nbd;
  */
 #define NBD_DEFAULT_HANDSHAKE_LIMIT 10

+/*
+ * NBD_DEFAULT_MAX_CONNECTIONS: Number of client sockets to allow at
+ * once; must be large enough to allow a MULTI_CONN-aware client like
+ * nbdcopy to create its typical number of 8-16 sockets.
+ */
+#define NBD_DEFAULT_MAX_CONNECTIONS 100
+
 /* Handshake phase structs - this struct is passed on the wire */

 typedef struct NBDOption {
diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index d954bec6f1e..bdf2eb50b68 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -402,7 +402,8 @@  void hmp_nbd_server_start(Monitor *mon, const QDict *qdict)
         goto exit;
     }

-    nbd_server_start(addr, NULL, NULL, 0, &local_err);
+    nbd_server_start(addr, NULL, NULL, NBD_DEFAULT_MAX_CONNECTIONS,
+                     &local_err);
     qapi_free_SocketAddress(addr);
     if (local_err != NULL) {
         goto exit;
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index 11f878b6db3..19c57897819 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -170,6 +170,10 @@  void nbd_server_start(SocketAddress *addr, const char *tls_creds,

 void nbd_server_start_options(NbdServerOptions *arg, Error **errp)
 {
+    if (!arg->has_max_connections) {
+        arg->max_connections = NBD_DEFAULT_MAX_CONNECTIONS;
+    }
+
     nbd_server_start(arg->addr, arg->tls_creds, arg->tls_authz,
                      arg->max_connections, errp);
 }
@@ -182,6 +186,10 @@  void qmp_nbd_server_start(SocketAddressLegacy *addr,
 {
     SocketAddress *addr_flat = socket_address_flatten(addr);

+    if (!has_max_connections) {
+        max_connections = NBD_DEFAULT_MAX_CONNECTIONS;
+    }
+
     nbd_server_start(addr_flat, tls_creds, tls_authz, max_connections, errp);
     qapi_free_SocketAddress(addr_flat);
 }