| Message ID | 20230616152737.23545-4-bmeng@tinylab.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large |
On 6/16/23 17:27, Bin Meng wrote:
> When opening /proc/self/fd fails, current codes just return directly,
> but we can fall back to close fds one by one.
>
> Signed-off-by: Bin Meng <bmeng@tinylab.org>
> ---
>
> (no changes since v1)
>
>  util/async-teardown.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/util/async-teardown.c b/util/async-teardown.c
> index 3ab19c8740..7e0177a8da 100644
> --- a/util/async-teardown.c
> +++ b/util/async-teardown.c
> @@ -48,7 +48,11 @@ static void close_all_open_fd(void)
>
>      dir = opendir("/proc/self/fd");
>      if (!dir) {
> -        /* If /proc is not mounted, there is nothing that can be done. */
> +        /* If /proc is not mounted, close fds one by one. */
> +        int open_max = sysconf(_SC_OPEN_MAX), i;
> +        for (i = 0; i < open_max; i++) {
> +            close(i);
> +        }
>          return;
>      }
>      /* Avoid closing the directory. */

Do we really need to make the 1M close calls?  The process is on its
way to exit anyway...

r~