diff mbox series

migration/multifd: Don't fsync when closing QIOChannelFile

Message ID 20240305174332.2553-1-farosas@suse.de (mailing list archive)
State New, archived
Series migration/multifd: Don't fsync when closing QIOChannelFile

Commit Message

Fabiano Rosas March 5, 2024, 5:43 p.m. UTC
Commit bc38feddeb ("io: fsync before closing a file channel") added an
fsync/fdatasync at the closing point of the QIOChannelFile to ensure
integrity of the migration stream in case of a QEMU crash.

The decision to do the sync at qio_channel_close() was not ideal,
since that function runs in the main thread and the fsync can cause
QEMU to hang for several minutes, depending on the migration size and
disk speed.

To fix the hang, remove the fsync from qio_channel_file_close().

At the moment, the migration code is the only user of this fsync, and
we are taking the tradeoff of not having a sync at all, leaving that
responsibility to the upper layers.

Fixes: bc38feddeb ("io: fsync before closing a file channel")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 docs/devel/migration/main.rst |  3 ++-
 io/channel-file.c             |  5 -----
 migration/multifd.c           | 13 -------------
 3 files changed, 2 insertions(+), 19 deletions(-)
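For reference, with the fsync removed, a management application that needs the durability guarantee can flush the file itself once migration completes. A minimal sketch (not part of this patch; `sync_migration_file` is a hypothetical helper, error handling kept minimal for brevity):

```c
/*
 * Sketch: flush a completed migration file to the storage device
 * from the management side, now that QEMU no longer does it.
 * Hypothetical helper, not part of this patch.
 */
#include <fcntl.h>
#include <unistd.h>

static int sync_migration_file(const char *path)
{
    int fd = open(path, O_RDONLY);

    if (fd < 0) {
        return -1;
    }
    /* Flush file data (and the metadata needed to read it back) */
    if (fdatasync(fd) < 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```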

Comments

Daniel P. Berrangé March 5, 2024, 5:49 p.m. UTC | #1
On Tue, Mar 05, 2024 at 02:43:32PM -0300, Fabiano Rosas wrote:
> Commit bc38feddeb ("io: fsync before closing a file channel") added an
> fsync/fdatasync at the closing point of the QIOChannelFile to ensure
> integrity of the migration stream in case of a QEMU crash.
> 
> The decision to do the sync at qio_channel_close() was not ideal,
> since that function runs in the main thread and the fsync can cause
> QEMU to hang for several minutes, depending on the migration size and
> disk speed.
> 
> To fix the hang, remove the fsync from qio_channel_file_close().
> 
> At the moment, the migration code is the only user of this fsync, and
> we are taking the tradeoff of not having a sync at all, leaving that
> responsibility to the upper layers.
> 
> Fixes: bc38feddeb ("io: fsync before closing a file channel")
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
>  docs/devel/migration/main.rst |  3 ++-
>  io/channel-file.c             |  5 -----
>  migration/multifd.c           | 13 -------------
>  3 files changed, 2 insertions(+), 19 deletions(-)
> 
> diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
> index 8024275d6d..54385a23e5 100644
> --- a/docs/devel/migration/main.rst
> +++ b/docs/devel/migration/main.rst
> @@ -44,7 +44,8 @@ over any transport.
>  - file migration: do the migration using a file that is passed to QEMU
>    by path. A file offset option is supported to allow a management
>    application to add its own metadata to the start of the file without
> -  QEMU interference.
> +  QEMU interference. Note that QEMU does not flush cached file
> +  data/metadata at the end of migration.
>  
>  In addition, support is included for migration using RDMA, which
>  transports the page data using ``RDMA``, where the hardware takes care of
> diff --git a/io/channel-file.c b/io/channel-file.c
> index d4706fa592..a6ad7770c6 100644
> --- a/io/channel-file.c
> +++ b/io/channel-file.c
> @@ -242,11 +242,6 @@ static int qio_channel_file_close(QIOChannel *ioc,
>  {
>      QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
>  
> -    if (qemu_fdatasync(fioc->fd) < 0) {
> -        error_setg_errno(errp, errno,
> -                         "Unable to synchronize file data with storage device");
> -        return -1;
> -    }
>      if (qemu_close(fioc->fd) < 0) {
>          error_setg_errno(errp, errno,
>                           "Unable to close file");

Up to here:

   Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


> diff --git a/migration/multifd.c b/migration/multifd.c
> index d4a44da559..2edcd5104e 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -709,19 +709,6 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
>  {
>      if (p->c) {
>          migration_ioc_unregister_yank(p->c);
> -        /*
> -         * An explicit close() on the channel here is normally not
> -         * required, but can be helpful for "file:" iochannels, where it
> -         * will include fdatasync() to make sure the data is flushed to the
> -         * disk backend.
> -         *
> -         * The object_unref() cannot guarantee that because: (1) finalize()
> -         * of the iochannel is only triggered on the last reference, and
> -         * it's not guaranteed that we always hold the last refcount when
> -         * reaching here, and, (2) even if finalize() is invoked, it only
> -         * does a close(fd) without data flush.
> -         */
> -        qio_channel_close(p->c, &error_abort);
>          object_unref(OBJECT(p->c));
>          p->c = NULL;
>      }

I don't think you should be removing this. Calling qio_channel_close()
remains recommended best practice, even with fdatasync() removed, as
it provides a strong guarantee that the FD is released which you don't
get if you rely on the ref count being correctly decremented in all
code paths.
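The difference can be illustrated with a toy refcounted channel (hypothetical code, not QEMU's actual QIOChannel/Object implementation):

```c
/*
 * Toy illustration (not actual QEMU code): an explicit close()
 * releases the fd deterministically, while relying on unref only
 * closes it when the *last* reference is dropped.
 */
#include <stdlib.h>
#include <unistd.h>

struct toy_chan {
    int fd;
    int refs;
};

static void toy_chan_unref(struct toy_chan *c)
{
    if (--c->refs == 0) {
        if (c->fd >= 0) {
            close(c->fd);   /* fd released only on the last unref */
        }
        free(c);
    }
}

static void toy_chan_close(struct toy_chan *c)
{
    if (c->fd >= 0) {
        close(c->fd);       /* fd released now, whatever the refcount */
        c->fd = -1;
    }
}
```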

With regards,
Daniel
Peter Xu March 6, 2024, 12:52 a.m. UTC | #2
On Tue, Mar 05, 2024 at 05:49:33PM +0000, Daniel P. Berrangé wrote:
> I don't think you should be removing this. Calling qio_channel_close()
> remains recommended best practice, even with fdatasync() removed, as
> it provides a strong guarantee that the FD is released which you don't
> get if you rely on the ref count being correctly decremented in all
> code paths.

Hmm, I'm confused about why ioc->fd is more special than the ioc
itself when leaked.  It'll be a bug anyway if we leak either of them?
Leaking fds may also help us find such issues more easily (e.g. by
seeing stale fds under /proc).  From that POV I tend to agree with the
original proposal.

Now that we've removed the data sync, IIUC the management application
can always flush the cache whether or not the fd is closed in QEMU,
even if it's leaked.  So I don't yet see any side effect of leaking
the fd that would make a difference compared to leaking the ioc?

Thanks,
Daniel P. Berrangé March 6, 2024, 9:25 a.m. UTC | #3
On Wed, Mar 06, 2024 at 08:52:41AM +0800, Peter Xu wrote:
> On Tue, Mar 05, 2024 at 05:49:33PM +0000, Daniel P. Berrangé wrote:
> > I don't think you should be removing this. Calling qio_channel_close()
> > remains recommended best practice, even with fdatasync() removed, as
> > it provides a strong guarantee that the FD is released which you don't
> > get if you rely on the ref count being correctly decremented in all
> > code paths.
> 
> Hmm, I'm confused about why ioc->fd is more special than the ioc
> itself when leaked.  It'll be a bug anyway if we leak either of them?
> Leaking fds may also help us find such issues more easily (e.g. by
> seeing stale fds under /proc).  From that POV I tend to agree with the
> original proposal.

Closing the FD would cause any registered I/O handlers callbacks to
get POLLNVAL and may well trigger cleanup that will prevent the leak.
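This behaviour can be seen with a plain poll() call (a self-contained sketch, not QEMU code):

```c
/*
 * Sketch: poll() flags POLLNVAL for a file descriptor that was
 * closed while still registered in the poll set, which is how
 * still-registered handlers find out about the close.
 */
#include <poll.h>
#include <unistd.h>

/* Returns 1 if the closed fd is reported with POLLNVAL. */
static int closed_fd_gets_pollnval(void)
{
    int fds[2];
    struct pollfd p;
    int n;

    if (pipe(fds) < 0) {
        return -1;
    }
    close(fds[0]);              /* close the read end... */
    p.fd = fds[0];              /* ...but keep polling it */
    p.events = POLLIN;
    p.revents = 0;
    n = poll(&p, 1, 0);
    close(fds[1]);
    return (n == 1 && (p.revents & POLLNVAL)) ? 1 : 0;
}
```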

With regards,
Daniel
Peter Xu March 6, 2024, 9:53 a.m. UTC | #4
On Wed, Mar 06, 2024 at 09:25:24AM +0000, Daniel P. Berrangé wrote:
> On Wed, Mar 06, 2024 at 08:52:41AM +0800, Peter Xu wrote:
> > On Tue, Mar 05, 2024 at 05:49:33PM +0000, Daniel P. Berrangé wrote:
> > > I don't think you should be removing this. Calling qio_channel_close()
> > > remains recommended best practice, even with fdatasync() removed, as
> > > it provides a strong guarantee that the FD is released which you don't
> > > get if you rely on the ref count being correctly decremented in all
> > > code paths.
> > 
> > Hmm, I'm confused about why ioc->fd is more special than the ioc
> > itself when leaked.  It'll be a bug anyway if we leak either of them?
> > Leaking fds may also help us find such issues more easily (e.g. by
> > seeing stale fds under /proc).  From that POV I tend to agree with the
> > original proposal.
> 
> Closing the FD would cause any registered I/O handlers callbacks to
> get POLLNVAL and may well trigger cleanup that will prevent the leak.

It should no longer be possible to have such handler callbacks when we
reach here, am I right?  AFAIU that's been the case since commit
9221e3c6a2 ("migration/multifd: Cleanup TLS iochannel referencing").

Would it be possible to assert that fact instead (either "there's no
handler callback", or "we're the last reference", which implies no
handlers), rather than doing an explicit close()?  (And if we do the
latter, we'd better explain the POLLNVAL bits.)

Thanks,

Patch

diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst
index 8024275d6d..54385a23e5 100644
--- a/docs/devel/migration/main.rst
+++ b/docs/devel/migration/main.rst
@@ -44,7 +44,8 @@  over any transport.
 - file migration: do the migration using a file that is passed to QEMU
   by path. A file offset option is supported to allow a management
   application to add its own metadata to the start of the file without
-  QEMU interference.
+  QEMU interference. Note that QEMU does not flush cached file
+  data/metadata at the end of migration.
 
 In addition, support is included for migration using RDMA, which
 transports the page data using ``RDMA``, where the hardware takes care of
diff --git a/io/channel-file.c b/io/channel-file.c
index d4706fa592..a6ad7770c6 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -242,11 +242,6 @@  static int qio_channel_file_close(QIOChannel *ioc,
 {
     QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
 
-    if (qemu_fdatasync(fioc->fd) < 0) {
-        error_setg_errno(errp, errno,
-                         "Unable to synchronize file data with storage device");
-        return -1;
-    }
     if (qemu_close(fioc->fd) < 0) {
         error_setg_errno(errp, errno,
                          "Unable to close file");
diff --git a/migration/multifd.c b/migration/multifd.c
index d4a44da559..2edcd5104e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -709,19 +709,6 @@  static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
 {
     if (p->c) {
         migration_ioc_unregister_yank(p->c);
-        /*
-         * An explicit close() on the channel here is normally not
-         * required, but can be helpful for "file:" iochannels, where it
-         * will include fdatasync() to make sure the data is flushed to the
-         * disk backend.
-         *
-         * The object_unref() cannot guarantee that because: (1) finalize()
-         * of the iochannel is only triggered on the last reference, and
-         * it's not guaranteed that we always hold the last refcount when
-         * reaching here, and, (2) even if finalize() is invoked, it only
-         * does a close(fd) without data flush.
-         */
-        qio_channel_close(p->c, &error_abort);
         object_unref(OBJECT(p->c));
         p->c = NULL;
     }