
[RFC,4/5] migration/socket: Close the listener at the end

Message ID: 20210408191159.133644-5-dgilbert@redhat.com
State: New, archived
Series: mptcp support

Commit Message

Dr. David Alan Gilbert April 8, 2021, 7:11 p.m. UTC
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Delay closing the listener until the cleanup hook at the end; mptcp
needs the listener to stay open while the other paths come in.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 migration/multifd.c |  5 +++++
 migration/socket.c  | 24 ++++++++++++++++++------
 2 files changed, 23 insertions(+), 6 deletions(-)
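
For context, the transport_cleanup hook registered by this patch is presumably invoked from the incoming-migration teardown path added earlier in this series; the following is a hedged sketch of that assumed call site (not part of this patch, names per MigrationIncomingState):

    /* Sketch only: assumed call site in migration.c, added by an earlier
     * patch in this series -- not shown in this patch. */
    void migration_incoming_state_destroy(void)
    {
        MigrationIncomingState *mis = migration_incoming_get_current();

        /* ... existing teardown of the incoming migration state ... */

        if (mis->transport_cleanup) {
            /* For socket/mptcp migration this runs
             * socket_incoming_migration_end(), which finally disconnects
             * and unrefs the listener that was kept open until now. */
            mis->transport_cleanup(mis->transport_data);
        }
    }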

Comments

Daniel P. Berrangé April 9, 2021, 9:10 a.m. UTC | #1
On Thu, Apr 08, 2021 at 08:11:58PM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> Delay closing the listener until the cleanup hook at the end; mptcp
> needs the listener to stay open while the other paths come in.

So you're saying that when the 'accept(2)' call returns, we are only
guaranteed to have 1 single path accepted, and the other paths
will be accepted by the kernel asynchronously ? Hence we need to
keep listening, even though we're not going to call accept(2) again
ourselves ?

> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  migration/multifd.c |  5 +++++
>  migration/socket.c  | 24 ++++++++++++++++++------
>  2 files changed, 23 insertions(+), 6 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
Paolo Abeni April 9, 2021, 9:20 a.m. UTC | #2
Hello,

On Fri, 2021-04-09 at 10:10 +0100, Daniel P. Berrangé wrote:
> On Thu, Apr 08, 2021 at 08:11:58PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > 
> > Delay closing the listener until the cleanup hook at the end; mptcp
> > needs the listener to stay open while the other paths come in.
> 
> So you're saying that when the 'accept(2)' call returns, we are only
> guaranteed to have 1 single path accepted, 

When accept() returns, it's guaranteed that the first path (actually a
subflow) has been created. Other subflows may already be available, or
they may be created later.

> and the other paths
> will be accepted by the kernel asynchronously ? Hence we need to
> keep listening, even though we're not going to call accept(2) again
> ourselves ?

Exactly, the other subflows will be created by the kernel as needed
(according to the configuration and the subsequent MPTCP handshakes) and
will _not_ be exposed directly to user-space as additional fds. The
fd returned by accept() refers to the main MPTCP socket (that is, the
"aggregated" entity), and not to some specific subflow.

Please let me know if the above clarifies things.

Thanks!

Paolo
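
To illustrate the behaviour Paolo describes outside of QEMU, here is a minimal, self-contained sketch of an MPTCP listener (assuming a Linux kernel built with CONFIG_MPTCP; IPPROTO_MPTCP is the only difference from a plain TCP listener, and the port number is arbitrary):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262   /* value from linux/in.h, for older headers */
    #endif

    int main(void)
    {
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_addr.s_addr = htonl(INADDR_ANY),
            .sin_port = htons(4444),            /* arbitrary example port */
        };

        /* The only change from plain TCP: ask for an MPTCP socket. */
        int lsock = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
        if (lsock < 0 ||
            bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lsock, 1) < 0) {
            perror("listener setup");
            return 1;
        }

        /*
         * accept() returns once the *first* subflow is established.  The
         * returned fd is the aggregated MPTCP socket, not a specific
         * subflow; further subflows are joined by the kernel without any
         * additional accept() calls.  Per this patch, the listener must
         * therefore stay open while those other paths come in.
         */
        int conn = accept(lsock, NULL, NULL);
        if (conn < 0) {
            perror("accept");
            return 1;
        }

        /* ... use conn; keep lsock open until the transfer is done ... */
        close(conn);
        close(lsock);
        return 0;
    }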

Patch

diff --git a/migration/multifd.c b/migration/multifd.c
index a6677c45c8..cebd9029b9 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1165,6 +1165,11 @@  bool multifd_recv_all_channels_created(void)
         return true;
     }
 
+    if (!multifd_recv_state) {
+        /* Called before any connections created */
+        return false;
+    }
+
     return thread_count == qatomic_read(&multifd_recv_state->count);
 }
 
diff --git a/migration/socket.c b/migration/socket.c
index 6016642e04..05705a32d8 100644
--- a/migration/socket.c
+++ b/migration/socket.c
@@ -126,22 +126,31 @@  static void socket_accept_incoming_migration(QIONetListener *listener,
 {
     trace_migration_socket_incoming_accepted();
 
-    qio_channel_set_name(QIO_CHANNEL(cioc), "migration-socket-incoming");
-    migration_channel_process_incoming(QIO_CHANNEL(cioc));
-
     if (migration_has_all_channels()) {
-        /* Close listening socket as its no longer needed */
-        qio_net_listener_disconnect(listener);
-        object_unref(OBJECT(listener));
+        error_report("%s: Extra incoming migration connection; ignoring",
+                     __func__);
+        return;
     }
+
+    qio_channel_set_name(QIO_CHANNEL(cioc), "migration-socket-incoming");
+    migration_channel_process_incoming(QIO_CHANNEL(cioc));
 }
 
+static void
+socket_incoming_migration_end(void *opaque)
+{
+    QIONetListener *listener = opaque;
+
+    qio_net_listener_disconnect(listener);
+    object_unref(OBJECT(listener));
+}
 
 static void
 socket_start_incoming_migration_internal(SocketAddress *saddr,
                                          Error **errp)
 {
     QIONetListener *listener = qio_net_listener_new();
+    MigrationIncomingState *mis = migration_incoming_get_current();
     size_t i;
     int num = 1;
 
@@ -156,6 +165,9 @@  socket_start_incoming_migration_internal(SocketAddress *saddr,
         return;
     }
 
+    mis->transport_data = listener;
+    mis->transport_cleanup = socket_incoming_migration_end;
+
     qio_net_listener_set_client_func_full(listener,
                                           socket_accept_incoming_migration,
                                           NULL, NULL,