Patchwork multipath-tools: release uxsocket and resource when cancel thread

Submitter Wuchongyun
Date Nov. 29, 2017, 9:35 a.m.
Message ID <CEB9978CF3252343BE3C67AC9F0086A3414FA363@H3CMLB14-EX.srv.huawei-3com.com>
Permalink /patch/10081699/
State Not Applicable, archived
Delegated to: christophe varoqui

Comments

Wuchongyun - Nov. 29, 2017, 9:35 a.m.
Hi,

Issue description:
When multipathd initializes and calls ux_socket_listen to create the unix domain socket, bind can return -1 with errno 98,
and ux_socket_listen then returns NULL. After that, multipathd cannot receive any multipathd commands,
so new multipath creation can no longer be completed!

We found that the uxlsnr thread's cleanup function neither closes the socket nor releases the clients when
the thread is cancelled, leaving the domain socket to be released by the system. In special environments, such as when the machine's
load is very heavy, the system may not have closed the old domain socket yet, so the attempt to create and bind the new domain
socket can fail with errno 98 (Address already in use).

We also ran some experiments:
In uxsock_cleanup, if we close ux_sock first and then immediately call ux_socket_listen to create a new ux_sock,
initialization succeeds. If we do not close ux_sock, the ux_socket_listen call returns -1 with errno = 98.
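The experiment above can be reproduced in isolation with a short test program (a sketch; the socket path below is hypothetical, and note that multipathd's real socket is, as I recall, in the abstract namespace, where close() alone releases the name, while a filesystem socket additionally needs unlink()):

```c
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Hypothetical path for the demo, not the one multipathd uses. */
#define DEMO_SOCK_PATH "/tmp/uxlsnr-demo.sock"

static int bind_unix(const char *path)
{
	struct sockaddr_un addr;
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	memset(&addr, 0, sizeof(addr));
	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* Returns 0 when both halves of the experiment behave as described:
 * a second bind fails with EADDRINUSE (98 on Linux) while the first
 * socket is still open, and succeeds again once it is closed and
 * the path unlinked. */
static int demo(void)
{
	int first, second, third;

	unlink(DEMO_SOCK_PATH);
	first = bind_unix(DEMO_SOCK_PATH);
	if (first < 0)
		return -1;

	second = bind_unix(DEMO_SOCK_PATH);
	if (second >= 0 || errno != EADDRINUSE)
		return -1;

	close(first);
	unlink(DEMO_SOCK_PATH);
	third = bind_unix(DEMO_SOCK_PATH);
	if (third < 0)
		return -1;

	close(third);
	unlink(DEMO_SOCK_PATH);
	return 0;
}
```

Call demo() from main() to run it; it exercises the same bind/EADDRINUSE/rebind cycle the experiment describes.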

So we believe that closing the uxsocket and releasing the clients when the thread is cancelled lets a newly
started multipathd thread create the uxsocket successfully and receive multipathd commands properly.
This patch also fixes the clients' memory leak.

Thanks,
Chongyun

Signed-off-by: wu chongyun <wu.chongyun@h3c.com>
---
multipathd/uxlsnr.c |   16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
Martin Wilck - Jan. 13, 2018, 7:50 p.m.
On Wed, 2017-11-29 at 09:35 +0000, Wuchongyun wrote:
> Hi ,
> 
> Issue description: 
> when multipathd initilaze and call uxsock_listen to create unix
> domain socket, but return -1 and the errno is 98 
> and then the uxsock_listen return null. After multipathd startup we
> can't receive any multipathd commands to 
> finish the new multipath creation anymore!
> 
> We found that uxlsnr thread's cleanup function not close the sockets
> and also not release the clients when 
> cancel thread, the domain socket will be release by the system. In
> any special environment like themachine's 
> load is very heavy, the system may not close the old domain socket
> when we try to create and bind the domain
> socket may return 98 (Address already in use).
> 
> And also we make some experiments:
> In uxsock_cleanup if we close the ux_sock first and then immdediately
> call ux_socket_listen to create new ux_sock
> and initialization will be OK; If we don't close the ux_sock and call
> ux_socket_listen will return -1 and errno = 98.
> 
> So we believe that close uxsocket and release clients when cancel
> thread might making new starting multipathd 
> thread can create uxsocket successfully, and might receiving
> multipathd commands properly.
> And also this path can fix clients' memory leak.

I think this is correct. But I have some remarks, see below.

> 
> Thanks,
> Chongyun
> 
> Signed-off-by: wu chongyun <wu.chongyun@h3c.com>
> ---
> multipathd/uxlsnr.c |   16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/multipathd/uxlsnr.c b/multipathd/uxlsnr.c
> index 98ac25a..de6950b 100644
> --- a/multipathd/uxlsnr.c
> +++ b/multipathd/uxlsnr.c
> @@ -139,6 +139,20 @@ void check_timeout(struct timespec start_time,
> char *inbuf,
> 
>  void uxsock_cleanup(void *arg)
> {
> +       struct client *client_loop;
> +       struct client *client_tmp;
> +       int ux_sock = (int)arg;
> +
> +       /*
> +       * no need to protect clients, 
> +       * because all operations to clients are only from one
> thread(uxlsnr)
> +       */
> +       list_for_each_entry_safe(client_loop, client_tmp, &clients,
> node) {
> +                dead_client(client_loop);

This takes and releases the client_lock in every loop iteration.
I'd rather take the lock, loop over all clients freeing the resources,
and release.
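That suggestion could look roughly like this (a sketch using a simplified singly-linked stand-in for the clients list, not multipath-tools' list.h; the field and lock names mirror the patch, but the helpers here are hypothetical):

```c
#include <pthread.h>
#include <stdlib.h>

/* Simplified stand-in for the clients list; names mirror the patch,
 * but this is not multipath-tools' list.h implementation. */
struct client {
	int fd;
	struct client *next;
};

static struct client *clients;
static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;

static void add_client(int fd)
{
	struct client *c = malloc(sizeof(*c));

	if (!c)
		return;
	c->fd = fd;
	c->next = clients;
	clients = c;
}

/* Take client_lock once, free every client, then release it, instead
 * of calling dead_client() per iteration (which locks and unlocks the
 * client_lock on every pass through the loop). */
static void free_all_clients(void)
{
	struct client *c, *tmp;

	pthread_mutex_lock(&client_lock);
	for (c = clients; c; c = tmp) {
		tmp = c->next;
		/* close(c->fd) would go here in the real daemon */
		free(c);
	}
	clients = NULL;
	pthread_mutex_unlock(&client_lock);
}
```

Since the cleanup handler runs as the uxlsnr thread exits, holding the lock for the whole sweep is cheap and avoids repeated lock traffic.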

> +       }
> +
> +       close(ux_sock);
> +
>        cli_exit();
>        free_polls();
> }
> @@ -162,7 +176,7 @@ void * uxsock_listen(uxsock_trigger_fn
> uxsock_trigger, void * trigger_data)
>                 return NULL;
>        }
> 
> -        pthread_cleanup_push(uxsock_cleanup, NULL);
> +       pthread_cleanup_push(uxsock_cleanup, (void*)ux_sock);

This patch has whitespace issues.

Regards,
Martin

Patch

diff --git a/multipathd/uxlsnr.c b/multipathd/uxlsnr.c
index 98ac25a..de6950b 100644
--- a/multipathd/uxlsnr.c
+++ b/multipathd/uxlsnr.c
@@ -139,6 +139,20 @@  void check_timeout(struct timespec start_time, char *inbuf,

 void uxsock_cleanup(void *arg)
{
+       struct client *client_loop;
+       struct client *client_tmp;
+       int ux_sock = (int)arg;
+
+       /*
+       * no need to protect clients, 
+       * because all operations to clients are only from one thread(uxlsnr)
+       */
+       list_for_each_entry_safe(client_loop, client_tmp, &clients, node) {
+                dead_client(client_loop);
+       }
+
+       close(ux_sock);
+
       cli_exit();
       free_polls();
}
@@ -162,7 +176,7 @@  void * uxsock_listen(uxsock_trigger_fn uxsock_trigger, void * trigger_data)
                return NULL;
       }

-        pthread_cleanup_push(uxsock_cleanup, NULL);
+       pthread_cleanup_push(uxsock_cleanup, (void*)ux_sock);

        condlog(3, "uxsock: startup listener");
       polls = (struct pollfd *)MALLOC((MIN_POLLS + 1) * sizeof(struct pollfd));
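One detail of the hunk above worth noting: `(void*)ux_sock` / `(int)arg` casts an int to and from a pointer directly, which typically draws compiler warnings on 64-bit targets where sizeof(void *) != sizeof(int). Going through `intptr_t` round-trips the value cleanly; a minimal sketch (the helper names are mine, not from the patch):

```c
#include <stdint.h>

/* The patch passes ux_sock through pthread_cleanup_push()'s void *
 * argument. Casting via intptr_t avoids int-to-pointer-cast warnings
 * on targets where pointer and int widths differ. */
static void *pack_fd(int fd)
{
	return (void *)(intptr_t)fd;
}

static int unpack_fd(void *arg)
{
	return (int)(intptr_t)arg;
}
```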