From patchwork Tue Nov 23 01:29:35 2021
X-Patchwork-Submitter: NeilBrown
X-Patchwork-Id: 12633301
Subject: [PATCH 01/19] SUNRPC/NFSD: clean up get/put functions.
From: NeilBrown
To: "J. Bruce Fields" , Chuck Lever
Cc: linux-nfs@vger.kernel.org
Date: Tue, 23 Nov 2021 12:29:35 +1100
Message-ID: <163763097540.7284.6343105121634986097.stgit@noble.brown>
In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown>
References: <163763078330.7284.10141477742275086758.stgit@noble.brown>

svc_destroy() is poorly named - it doesn't necessarily destroy the svc, it might just reduce the ref count. nfsd_destroy() is poorly named for the same reason.

This patch:
- removes the refcount functionality from svc_destroy(), moving it to a new svc_put(). Almost all previous callers of svc_destroy() now call svc_put().
- renames nfsd_destroy() to nfsd_put() and improves the code, using the new svc_destroy() rather than svc_put()
- also changes svc_get() to return the serv, which simplifies some code a little.

The only non-trivial part of this is that svc_destroy() would call svc_sock_update_bufs() on a non-final decrement. It can no longer do that, and svc_put() isn't really a good place for it. This call is now made from svc_exit_thread(), which seems like a good place. This makes the call *before* sv_nrthreads is decremented rather than after. This is not particularly important, as the call just sets a flag which causes sv_nrthreads to be checked later. A subsequent patch will improve the ordering.

Signed-off-by: NeilBrown
---
 fs/lockd/svc.c | 12 +++--------- fs/nfs/callback.c | 20 ++++---------------- fs/nfsd/nfsctl.c | 4 ++-- fs/nfsd/nfsd.h | 2 +- fs/nfsd/nfssvc.c | 30 ++++++++++++++++-------------- include/linux/sunrpc/svc.h | 29 +++++++++++++++++++++++++---- net/sunrpc/svc.c | 19 +++++-------------- 7 files changed, 56 insertions(+), 60 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index b220e1b91726..135bd86ed3ad 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -430,14 +430,8 @@ static struct svc_serv *lockd_create_svc(void) /* * Check whether we're already up and running. */ - if (nlmsvc_rqst) { - /* - * Note: increase service usage, because later in case of error - * svc_destroy() will be called. - */ - svc_get(nlmsvc_rqst->rq_server); - return nlmsvc_rqst->rq_server; - } + if (nlmsvc_rqst) + return svc_get(nlmsvc_rqst->rq_server); /* * Sanity check: if there's no pid, @@ -497,7 +491,7 @@ int lockd_up(struct net *net, const struct cred *cred) * so we exit through here on both success and failure. */ err_put: - svc_destroy(serv); + svc_put(serv); err_create: mutex_unlock(&nlmsvc_mutex); return error; diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c index 86d856de1389..edbc7579b4aa 100644 --- a/fs/nfs/callback.c +++ b/fs/nfs/callback.c @@ -266,14 +266,8 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion) /* * Check whether we're already up and running. */ - if (cb_info->serv) { - /* - * Note: increase service usage, because later in case of error - * svc_destroy() will be called. - */ - svc_get(cb_info->serv); - return cb_info->serv; - } + if (cb_info->serv) + return svc_get(cb_info->serv); switch (minorversion) { case 0: @@ -335,16 +329,10 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt) goto err_start; cb_info->users++; - /* - * svc_create creates the svc_serv with sv_nrthreads == 1, and then - * svc_prepare_thread increments that. So we need to call svc_destroy - * on both success and failure so that the refcount is 1 when the - * thread exits.
- */ err_net: if (!cb_info->users) cb_info->serv = NULL; - svc_destroy(serv); + svc_put(serv); err_create: mutex_unlock(&nfs_callback_mutex); return ret; @@ -370,7 +358,7 @@ void nfs_callback_down(int minorversion, struct net *net) if (cb_info->users == 0) { svc_get(serv); serv->sv_ops->svo_setup(serv, NULL, 0); - svc_destroy(serv); + svc_put(serv); dprintk("nfs_callback_down: service destroyed\n"); cb_info->serv = NULL; } diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index af8531c3854a..5eb564e58a9b 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -743,7 +743,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred); if (err < 0) { - nfsd_destroy(net); + nfsd_put(net); return err; } @@ -796,7 +796,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr if (!list_empty(&nn->nfsd_serv->sv_permsocks)) nn->nfsd_serv->sv_nrthreads--; else - nfsd_destroy(net); + nfsd_put(net); return err; } diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h index 498e5a489826..3e5008b475ff 100644 --- a/fs/nfsd/nfsd.h +++ b/fs/nfsd/nfsd.h @@ -97,7 +97,7 @@ int nfsd_pool_stats_open(struct inode *, struct file *); int nfsd_pool_stats_release(struct inode *, struct file *); void nfsd_shutdown_threads(struct net *net); -void nfsd_destroy(struct net *net); +void nfsd_put(struct net *net); bool i_am_nfsd(void); diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 80431921e5d7..2ab0e650a0e2 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -623,7 +623,7 @@ void nfsd_shutdown_threads(struct net *net) svc_get(serv); /* Kill outstanding nfsd threads */ serv->sv_ops->svo_setup(serv, NULL, 0); - nfsd_destroy(net); + nfsd_put(net); mutex_unlock(&nfsd_mutex); /* Wait for shutdown of nfsd_serv to complete */ wait_for_completion(&nn->nfsd_shutdown_complete); @@ -656,7 +656,10 @@ int nfsd_create_serv(struct net *net) nn->nfsd_serv->sv_maxconn = nn->max_connections; error = svc_bind(nn->nfsd_serv, net); if (error < 0) { - svc_destroy(nn->nfsd_serv); + /* NOT nfsd_put() as notifiers (see below) haven't + * been set up yet. 
+ */ + svc_put(nn->nfsd_serv); nfsd_complete_shutdown(net); return error; } @@ -697,16 +700,16 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net) return 0; } -void nfsd_destroy(struct net *net) +void nfsd_put(struct net *net) { struct nfsd_net *nn = net_generic(net, nfsd_net_id); - int destroy = (nn->nfsd_serv->sv_nrthreads == 1); - if (destroy) + nn->nfsd_serv->sv_nrthreads --; + if (nn->nfsd_serv->sv_nrthreads == 0) { svc_shutdown_net(nn->nfsd_serv, net); - svc_destroy(nn->nfsd_serv); - if (destroy) + svc_destroy(nn->nfsd_serv); nfsd_complete_shutdown(net); + } } int nfsd_set_nrthreads(int n, int *nthreads, struct net *net) @@ -758,7 +761,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net) if (err) break; } - nfsd_destroy(net); + nfsd_put(net); return err; } @@ -795,7 +798,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred) error = nfsd_startup_net(net, cred); if (error) - goto out_destroy; + goto out_put; error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv, NULL, nrservs); if (error) @@ -808,8 +811,8 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred) out_shutdown: if (error < 0 && !nfsd_up_before) nfsd_shutdown_net(net); -out_destroy: - nfsd_destroy(net); /* Release server */ +out_put: + nfsd_put(net); out: mutex_unlock(&nfsd_mutex); return error; @@ -982,7 +985,7 @@ nfsd(void *vrqstp) /* Release the thread */ svc_exit_thread(rqstp); - nfsd_destroy(net); + nfsd_put(net); /* Release module */ mutex_unlock(&nfsd_mutex); @@ -1109,8 +1112,7 @@ int nfsd_pool_stats_release(struct inode *inode, struct file *file) struct net *net = inode->i_sb->s_fs_info; mutex_lock(&nfsd_mutex); - /* this function really, really should have been called svc_put() */ - nfsd_destroy(net); + nfsd_put(net); mutex_unlock(&nfsd_mutex); return ret; } diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 0ae28ae6caf2..d87c3392a1e9 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -114,15 +114,37 @@ struct svc_serv { #endif /* CONFIG_SUNRPC_BACKCHANNEL */ }; -/* - * We use sv_nrthreads as a reference count. svc_destroy() drops +/** + * svc_get() - increment reference count on a SUNRPC serv + * @serv: the svc_serv to have count incremented + * + * Returns: the svc_serv that was passed in. + * + * We use sv_nrthreads as a reference count. svc_put() drops * this refcount, so we need to bump it up around operations that * change the number of threads. Horrible, but there it is. * Should be called with the "service mutex" held. */ -static inline void svc_get(struct svc_serv *serv) +static inline struct svc_serv *svc_get(struct svc_serv *serv) { serv->sv_nrthreads++; + return serv; +} + +void svc_destroy(struct svc_serv *serv); + +/** + * svc_put - decrement reference count on a SUNRPC serv + * @serv: the svc_serv to have count decremented + * + * When the reference count reaches zero, svc_destroy() + * is called to clean up and free the serv. 
+ */ +static inline void svc_put(struct svc_serv *serv) +{ + serv->sv_nrthreads --; + if (serv->sv_nrthreads == 0) + svc_destroy(serv); } /* @@ -514,7 +536,6 @@ struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int, int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int); int svc_set_num_threads_sync(struct svc_serv *, struct svc_pool *, int); int svc_pool_stats_open(struct svc_serv *serv, struct file *file); -void svc_destroy(struct svc_serv *); void svc_shutdown_net(struct svc_serv *, struct net *); int svc_process(struct svc_rqst *); int bc_svc_process(struct svc_serv *, struct rpc_rqst *, diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 4292278a9552..55a1bf0d129f 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -528,17 +528,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net); void svc_destroy(struct svc_serv *serv) { - dprintk("svc: svc_destroy(%s, %d)\n", - serv->sv_program->pg_name, - serv->sv_nrthreads); - - if (serv->sv_nrthreads) { - if (--(serv->sv_nrthreads) != 0) { - svc_sock_update_bufs(serv); - return; - } - } else - printk("svc_destroy: no threads for serv=%p!\n", serv); + dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name); del_timer_sync(&serv->sv_temptimer); @@ -892,9 +882,10 @@ svc_exit_thread(struct svc_rqst *rqstp) svc_rqst_free(rqstp); - /* Release the server */ - if (serv) - svc_destroy(serv); + if (!serv) + return; + svc_sock_update_bufs(serv); + svc_destroy(serv); } EXPORT_SYMBOL_GPL(svc_exit_thread); From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633303 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 381F5C433F5 for ; Tue, 23 Nov 2021 01:31:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231236AbhKWBeN (ORCPT ); Mon, 22 Nov 2021 20:34:13 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37070 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBeM (ORCPT ); Mon, 22 Nov 2021 20:34:12 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id B439A1FD33; Tue, 23 Nov 2021 01:31:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631064; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=a0bVY7Da7DZBKUJ4QKP/kjBakF12BS7ZR80Gf4eVuL8=; b=2BqNsQ03qAER6VnwiW83yTocuZ/fqbHQUTfBcg281fkIqvGkyYd0NIFwONDjQ8I9iahkRP gU7gGxJxv2WBKxJfxdTvcj2O/JsqFSiDImGWAThPUn0hScbVXqRbNxHln5nxX3MOA33a3J +SSZCkfBD5/drHHCU7vl+Knt3aUbSh0= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631064; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=a0bVY7Da7DZBKUJ4QKP/kjBakF12BS7ZR80Gf4eVuL8=; 
b=G63oLxiIbbwygXdeUwnRIXU2vgwh8YwdqI0pCHXPWDxefs/vYkRSFfbKBNE6BWkpdECCO5 22ucAKtbxEo3emAg== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id A054213BD4; Tue, 23 Nov 2021 01:31:03 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id ai5pF1dEnGGwcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:03 +0000 Subject: [PATCH 02/19] NFSD: handle error better in write_ports_addfd() From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097543.7284.12067402792054742606.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org If write_ports_add() fails, we shouldn't destroy the serv, unless we had only just created it. So if there are any permanent sockets already attached, leave the serv in place. Signed-off-by: NeilBrown --- fs/nfsd/nfsctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index 5eb564e58a9b..93d417871302 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -742,7 +742,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred return err; err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred); - if (err < 0) { + if (err < 0 && list_empty(&nn->nfsd_serv->sv_permsocks)) { nfsd_put(net); return err; } From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633305 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E3E8C433F5 for ; Tue, 23 Nov 2021 01:31:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229982AbhKWBeS (ORCPT ); Mon, 22 Nov 2021 20:34:18 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57838 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBeS (ORCPT ); Mon, 22 Nov 2021 20:34:18 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 340D1218B2; Tue, 23 Nov 2021 01:31:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631070; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=IzHrYL3gcN0tH5YdAWN1ZcYCdP1RBiH0ObDWhJxlcbM=; b=nGD8RxmG2ZpFkPiwmK4cN7sxbWfz8YB3bus29c/CbGnsl6gqrzMIUeA1dvMDaIOpXBVK/d tZ3KqD9KK1BzG5BJ4UQ3nN19NE4/80iBAl3nfFUYFf8K0GMeNBlTwgHdVOgmPvc0HnS5pD grPwNboH9fqkrHeC1BacUP/fJVQxBjE= DKIM-Signature: v=1; 
Subject: [PATCH 03/19] SUNRPC: stop using ->sv_nrthreads as a refcount
From: NeilBrown
To: "J. Bruce Fields" , Chuck Lever
Cc: linux-nfs@vger.kernel.org
Date: Tue, 23 Nov 2021 12:29:35 +1100
Message-ID: <163763097543.7284.5781149029646373584.stgit@noble.brown>
In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown>
References: <163763078330.7284.10141477742275086758.stgit@noble.brown>

The use of sv_nrthreads as a general refcount results in clumsy code, as is seen by various comments needed to explain the situation.

This patch introduces a 'struct kref' and uses that for reference counting, leaving sv_nrthreads to be a pure count of threads. The kref is managed particularly in svc_get() and svc_put(), and also nfsd_put(); svc_destroy() now takes a pointer to the embedded kref, rather than to the serv.

nfsd allows the svc_serv to exist with ->sv_nrthreads being zero. This happens when a transport is created before the first thread is started. To support this, a 'keep_active' flag is introduced which holds a ref on the svc_serv. This is set when any listening socket is successfully added (unless there are running threads), and cleared when the number of threads is set. So when the last thread exits, the nfsd_serv will be destroyed.

The use of 'keep_active' replaces previous code which checked if there were any permanent sockets.

We no longer clear ->rq_server when nfsd() exits. This was done to prevent svc_exit_thread() from calling svc_destroy(). Instead we take an extra reference to the svc_serv to prevent svc_destroy() from being called.

Signed-off-by: NeilBrown
---
 fs/lockd/svc.c | 4 ---- fs/nfs/callback.c | 2 +- fs/nfsd/netns.h | 7 +++++++ fs/nfsd/nfsctl.c | 22 ++++++++++------------ fs/nfsd/nfssvc.c | 42 ++++++++++++++++++++++++++---------------- include/linux/sunrpc/svc.h | 14 ++++---------- net/sunrpc/svc.c | 22 +++++++++++----------- 7 files changed, 59 insertions(+), 54 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 135bd86ed3ad..a9669b106dbd 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -486,10 +486,6 @@ int lockd_up(struct net *net, const struct cred *cred) goto err_put; } nlmsvc_users++; - /* - * Note: svc_serv structures have an initial use count of 1, - * so we exit through here on both success and failure.
- */ err_put: svc_put(serv); err_create: diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c index edbc7579b4aa..d9d78ffd1d65 100644 --- a/fs/nfs/callback.c +++ b/fs/nfs/callback.c @@ -169,7 +169,7 @@ static int nfs_callback_start_svc(int minorversion, struct rpc_xprt *xprt, if (nrservs < NFS4_MIN_NR_CALLBACK_THREADS) nrservs = NFS4_MIN_NR_CALLBACK_THREADS; - if (serv->sv_nrthreads-1 == nrservs) + if (serv->sv_nrthreads == nrservs) return 0; ret = serv->sv_ops->svo_setup(serv, NULL, nrservs); diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h index 935c1028c217..08bcd8f23b01 100644 --- a/fs/nfsd/netns.h +++ b/fs/nfsd/netns.h @@ -123,6 +123,13 @@ struct nfsd_net { u32 clverifier_counter; struct svc_serv *nfsd_serv; + /* When a listening socket is added to nfsd, keep_active is set + * and this justifies a reference on nfsd_serv. This stops + * nfsd_serv from being freed. When the number of threads is + * set, keep_active is cleared and the reference is dropped. So + * when the last thread exits, the service will be destroyed. + */ + int keep_active; wait_queue_head_t ntf_wq; atomic_t ntf_refcnt; diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index 93d417871302..2bbc26fbdae8 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -742,13 +742,12 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred return err; err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred); - if (err < 0 && list_empty(&nn->nfsd_serv->sv_permsocks)) { - nfsd_put(net); - return err; - } - /* Decrease the count, but don't shut down the service */ - nn->nfsd_serv->sv_nrthreads--; + if (err >= 0 && + !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1)) + svc_get(nn->nfsd_serv); + + nfsd_put(net); return err; } @@ -783,8 +782,10 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr if (err < 0 && err != -EAFNOSUPPORT) goto out_close; - /* Decrease the count, but don't shut down the service */ - nn->nfsd_serv->sv_nrthreads--; + if (!nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1)) + svc_get(nn->nfsd_serv); + + nfsd_put(net); return 0; out_close: xprt = svc_find_xprt(nn->nfsd_serv, transport, net, PF_INET, port); @@ -793,10 +794,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr svc_xprt_put(xprt); } out_err: - if (!list_empty(&nn->nfsd_serv->sv_permsocks)) - nn->nfsd_serv->sv_nrthreads--; - else - nfsd_put(net); + nfsd_put(net); return err; } diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 2ab0e650a0e2..5f605e7e8091 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -60,13 +60,13 @@ static __be32 nfsd_init_request(struct svc_rqst *, * extent ->sv_temp_socks and ->sv_permsocks. It also protects nfsdstats.th_cnt * * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a - * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0. That number - * of nfsd threads must exist and each must listed in ->sp_all_threads in each - * entry of ->sv_pools[]. + * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless + * nn->keep_active is set). That number of nfsd threads must + * exist and each must be listed in ->sp_all_threads in some entry of + * ->sv_pools[]. * - * Transitions of the thread count between zero and non-zero are of particular - * interest since the svc_serv needs to be created and initialized at that - * point, or freed. 
+ * Each active thread holds a counted reference on nn->nfsd_serv, as does + * the nn->keep_active flag and various transient calls to svc_get(). * * Finally, the nfsd_mutex also protects some of the global variables that are * accessed when nfsd starts and that are settable via the write_* routines in @@ -700,14 +700,22 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net) return 0; } +/* This is the callback for kref_put() below. + * There is no code here as the first thing to be done is + * call svc_shutdown_net(), but we cannot get the 'net' from + * the kref. So do all the work when kref_put returns true. + */ +static void nfsd_noop(struct kref *ref) +{ +} + void nfsd_put(struct net *net) { struct nfsd_net *nn = net_generic(net, nfsd_net_id); - nn->nfsd_serv->sv_nrthreads --; - if (nn->nfsd_serv->sv_nrthreads == 0) { + if (kref_put(&nn->nfsd_serv->sv_refcnt, nfsd_noop)) { svc_shutdown_net(nn->nfsd_serv, net); - svc_destroy(nn->nfsd_serv); + svc_destroy(&nn->nfsd_serv->sv_refcnt); nfsd_complete_shutdown(net); } } @@ -803,15 +811,14 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred) NULL, nrservs); if (error) goto out_shutdown; - /* We are holding a reference to nn->nfsd_serv which - * we don't want to count in the return value, - * so subtract 1 - */ - error = nn->nfsd_serv->sv_nrthreads - 1; + error = nn->nfsd_serv->sv_nrthreads; out_shutdown: if (error < 0 && !nfsd_up_before) nfsd_shutdown_net(net); out_put: + /* Threads now hold service active */ + if (xchg(&nn->keep_active, 0)) + nfsd_put(net); nfsd_put(net); out: mutex_unlock(&nfsd_mutex); @@ -980,11 +987,15 @@ nfsd(void *vrqstp) nfsdstats.th_cnt --; out: - rqstp->rq_server = NULL; + /* Take an extra ref so that the svc_put in svc_exit_thread() + * doesn't call svc_destroy() + */ + svc_get(nn->nfsd_serv); /* Release the thread */ svc_exit_thread(rqstp); + /* Now if needed we call svc_destroy in appropriate context */ nfsd_put(net); /* Release module */ @@ -1099,7 +1110,6 @@ int nfsd_pool_stats_open(struct inode *inode, struct file *file) mutex_unlock(&nfsd_mutex); return -ENODEV; } - /* bump up the psudo refcount while traversing */ svc_get(nn->nfsd_serv); ret = svc_pool_stats_open(nn->nfsd_serv, file); mutex_unlock(&nfsd_mutex); diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index d87c3392a1e9..3903b4ae8ac5 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -85,6 +85,7 @@ struct svc_serv { struct svc_program * sv_program; /* RPC program */ struct svc_stat * sv_stats; /* RPC statistics */ spinlock_t sv_lock; + struct kref sv_refcnt; unsigned int sv_nrthreads; /* # of server threads */ unsigned int sv_maxconn; /* max connections allowed or * '0' causing max to be based @@ -119,19 +120,14 @@ struct svc_serv { * @serv: the svc_serv to have count incremented * * Returns: the svc_serv that was passed in. - * - * We use sv_nrthreads as a reference count. svc_put() drops - * this refcount, so we need to bump it up around operations that - * change the number of threads. Horrible, but there it is. - * Should be called with the "service mutex" held. 
*/ static inline struct svc_serv *svc_get(struct svc_serv *serv) { - serv->sv_nrthreads++; + kref_get(&serv->sv_refcnt); return serv; } -void svc_destroy(struct svc_serv *serv); +void svc_destroy(struct kref *); /** * svc_put - decrement reference count on a SUNRPC serv @@ -142,9 +138,7 @@ void svc_destroy(struct svc_serv *serv); */ static inline void svc_put(struct svc_serv *serv) { - serv->sv_nrthreads --; - if (serv->sv_nrthreads == 0) - svc_destroy(serv); + kref_put(&serv->sv_refcnt, svc_destroy); } /* diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 55a1bf0d129f..acddc6e12e9e 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -435,7 +435,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools, return NULL; serv->sv_name = prog->pg_name; serv->sv_program = prog; - serv->sv_nrthreads = 1; + kref_init(&serv->sv_refcnt); serv->sv_stats = prog->pg_stats; if (bufsize > RPCSVC_MAXPAYLOAD) bufsize = RPCSVC_MAXPAYLOAD; @@ -526,10 +526,11 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net); * protect the sv_nrthreads, sv_permsocks and sv_tempsocks. */ void -svc_destroy(struct svc_serv *serv) +svc_destroy(struct kref *ref) { - dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name); + struct svc_serv *serv = container_of(ref, struct svc_serv, sv_refcnt); + dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name); del_timer_sync(&serv->sv_temptimer); /* @@ -637,6 +638,7 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node) if (!rqstp) return ERR_PTR(-ENOMEM); + svc_get(serv); serv->sv_nrthreads++; spin_lock_bh(&pool->sp_lock); pool->sp_nrthreads++; @@ -776,8 +778,7 @@ int svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) { if (pool == NULL) { - /* The -1 assumes caller has done a svc_get() */ - nrservs -= (serv->sv_nrthreads-1); + nrservs -= serv->sv_nrthreads; } else { spin_lock_bh(&pool->sp_lock); nrservs -= pool->sp_nrthreads; @@ -814,8 +815,7 @@ int svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrservs) { if (pool == NULL) { - /* The -1 assumes caller has done a svc_get() */ - nrservs -= (serv->sv_nrthreads-1); + nrservs -= serv->sv_nrthreads; } else { spin_lock_bh(&pool->sp_lock); nrservs -= pool->sp_nrthreads; @@ -880,12 +880,12 @@ svc_exit_thread(struct svc_rqst *rqstp) list_del_rcu(&rqstp->rq_all); spin_unlock_bh(&pool->sp_lock); + serv->sv_nrthreads -= 1; + svc_sock_update_bufs(serv); + svc_rqst_free(rqstp); - if (!serv) - return; - svc_sock_update_bufs(serv); - svc_destroy(serv); + svc_put(serv); } EXPORT_SYMBOL_GPL(svc_exit_thread); From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633307 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B8F9C433F5 for ; Tue, 23 Nov 2021 01:31:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230214AbhKWBe0 (ORCPT ); Mon, 22 Nov 2021 20:34:26 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57862 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBe0 (ORCPT ); Mon, 22 Nov 2021 20:34:26 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange 
X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 392C4218B2; Tue, 23 Nov 2021 01:31:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631078; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=vySanz2YXVqdMC84ngImA7vDUMvb5gO7Kz71f8EfliM=; b=N2RXzx9tkbrMN5ewG5am9NKKryE4BZRDj46jU9zvpucDmC/wJEO7Ka9xf1lBbwsU8QfMFQ xr8x4lGcofUtLOza144Cr/CscZ04NEp3+Pvs84o63wkTX4VarcMdN3HMphjWdhDkSXE/1a W/YFZ0/GNEE4Udv6nwHVJ9rdmv+jZ5Q= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631078; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=vySanz2YXVqdMC84ngImA7vDUMvb5gO7Kz71f8EfliM=; b=72UO+SHH0lFUnGCCHc+m/nIc5FGmJQC9X1ZxcfR6iRRhHFOZaIo02Y4NVOrp5frorXlcAI uyXVLhCAEJpRoxBg== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 159DA13BD4; Tue, 23 Nov 2021 01:31:16 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id 8L8xMGREnGHIcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:16 +0000 Subject: [PATCH 04/19] nfsd: make nfsd_stats.th_cnt atomic_t From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097544.7284.7451205698169106567.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org This allows us to move the updates for th_cnt out of the mutex. This is a step towards reducing mutex coverage in nfsd(). Signed-off-by: NeilBrown --- fs/nfsd/nfssvc.c | 6 +++--- fs/nfsd/stats.c | 2 +- fs/nfsd/stats.h | 4 +--- 3 files changed, 5 insertions(+), 7 deletions(-) diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 5f605e7e8091..fc5899502a83 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -57,7 +57,7 @@ static __be32 nfsd_init_request(struct svc_rqst *, /* * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members * of the svc_serv struct. In particular, ->sv_nrthreads but also to some - * extent ->sv_temp_socks and ->sv_permsocks. It also protects nfsdstats.th_cnt + * extent ->sv_temp_socks and ->sv_permsocks. 
* * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless @@ -955,8 +955,8 @@ nfsd(void *vrqstp) allow_signal(SIGINT); allow_signal(SIGQUIT); - nfsdstats.th_cnt++; mutex_unlock(&nfsd_mutex); + atomic_inc(&nfsdstats.th_cnt); set_freezable(); @@ -983,8 +983,8 @@ nfsd(void *vrqstp) /* Clear signals before calling svc_exit_thread() */ flush_signals(current); + atomic_dec(&nfsdstats.th_cnt); mutex_lock(&nfsd_mutex); - nfsdstats.th_cnt --; out: /* Take an extra ref so that the svc_put in svc_exit_thread() diff --git a/fs/nfsd/stats.c b/fs/nfsd/stats.c index 1d3b881e7382..a8c5a02a84f0 100644 --- a/fs/nfsd/stats.c +++ b/fs/nfsd/stats.c @@ -45,7 +45,7 @@ static int nfsd_proc_show(struct seq_file *seq, void *v) percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_IO_WRITE])); /* thread usage: */ - seq_printf(seq, "th %u 0", nfsdstats.th_cnt); + seq_printf(seq, "th %u 0", atomic_read(&nfsdstats.th_cnt)); /* deprecated thread usage histogram stats */ for (i = 0; i < 10; i++) diff --git a/fs/nfsd/stats.h b/fs/nfsd/stats.h index 51ecda852e23..9b43dc3d9991 100644 --- a/fs/nfsd/stats.h +++ b/fs/nfsd/stats.h @@ -29,11 +29,9 @@ enum { struct nfsd_stats { struct percpu_counter counter[NFSD_STATS_COUNTERS_NUM]; - /* Protected by nfsd_mutex */ - unsigned int th_cnt; /* number of available threads */ + atomic_t th_cnt; /* number of available threads */ }; - extern struct nfsd_stats nfsdstats; extern struct svc_stat nfsd_svcstats; From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633309 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62448C433EF for ; Tue, 23 Nov 2021 01:31:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231451AbhKWBea (ORCPT ); Mon, 22 Nov 2021 20:34:30 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57876 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBea (ORCPT ); Mon, 22 Nov 2021 20:34:30 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 7B069218B0; Tue, 23 Nov 2021 01:31:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631082; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=pn9gDOrq59XoNH0HG8W4Dfcs8NF+/Kq6he69YY+ziro=; b=NovLVDQ6qHI1YuqsJ1lJtJe1fjleI2xkuQ1+BeUFq4c6k2kxgC3QiesUzioeBFIFN73AWk 4sPvWAEWu8JU+wdka2bm9jP3OkGJzI5YoJn6QoV1av3URI7jBXGKjsuH1A6n385tK9CKqz IsqCoWZ3S1ZVROasPTL/OzlumE/L6Dc= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631082; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=pn9gDOrq59XoNH0HG8W4Dfcs8NF+/Kq6he69YY+ziro=; b=+NBpqDZYYfONXuBtvSO/JBiMyDW6XUFJ87174N4ERZYLb49ldxblYZtchAADvbzQynW/C3 aa3nrxKiv0k21fCw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6CA6313BD4; Tue, 23 Nov 2021 01:31:21 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id yuTtCmlEnGHOcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:21 +0000 Subject: [PATCH 05/19] SUNRPC: use sv_lock to protect updates to sv_nrthreads. From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097544.7284.15751243743215653521.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Using sv_lock means we don't need to hold the service mutex over these updates. In particular, svc_exit_thread() no longer requires synchronisation, so threads can exit asynchronously. Signed-off-by: NeilBrown --- fs/nfsd/nfssvc.c | 5 ++--- net/sunrpc/svc.c | 9 +++++++-- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index fc5899502a83..e9c9fa820b17 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -55,9 +55,8 @@ static __be32 nfsd_init_request(struct svc_rqst *, struct svc_process_info *); /* - * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members - * of the svc_serv struct. In particular, ->sv_nrthreads but also to some - * extent ->sv_temp_socks and ->sv_permsocks. + * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members + * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks. * * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index acddc6e12e9e..2b2042234e4b 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -523,7 +523,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net); /* * Destroy an RPC service. Should be called with appropriate locking to - * protect the sv_nrthreads, sv_permsocks and sv_tempsocks. + * protect sv_permsocks and sv_tempsocks. 
*/ void svc_destroy(struct kref *ref) @@ -639,7 +639,10 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node) return ERR_PTR(-ENOMEM); svc_get(serv); - serv->sv_nrthreads++; + spin_lock_bh(&serv->sv_lock); + serv->sv_nrthreads += 1; + spin_unlock_bh(&serv->sv_lock); + spin_lock_bh(&pool->sp_lock); pool->sp_nrthreads++; list_add_rcu(&rqstp->rq_all, &pool->sp_all_threads); @@ -880,7 +883,9 @@ svc_exit_thread(struct svc_rqst *rqstp) list_del_rcu(&rqstp->rq_all); spin_unlock_bh(&pool->sp_lock); + spin_lock_bh(&serv->sv_lock); serv->sv_nrthreads -= 1; + spin_unlock_bh(&serv->sv_lock); svc_sock_update_bufs(serv); svc_rqst_free(rqstp); From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633311 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6BD79C433F5 for ; Tue, 23 Nov 2021 01:31:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231474AbhKWBei (ORCPT ); Mon, 22 Nov 2021 20:34:38 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37098 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBei (ORCPT ); Mon, 22 Nov 2021 20:34:38 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 92AA01FD39; Tue, 23 Nov 2021 01:31:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631090; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=55eU6IUmMGnMWDSrM7QS/etQjMpj5mfpzAQJ988Ay8s=; b=JTjGB7zhd3M+G9DwAjCsztOdwDXq4ZcE4I/NHPXNxCr9xGSAXhHDeNubDirnjkwTv5m/bg Kql8DvNOXlphPC0b3+ZNvEWFvkEB+EofPX4geoX4UXPMuIk1RHw7A9g7TjOCZ8RIVmFVUD hWiYO5ODEh/vHEg2/v0MBFMOBNXsV8g= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631090; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=55eU6IUmMGnMWDSrM7QS/etQjMpj5mfpzAQJ988Ay8s=; b=Bq/ncwcRYbn5ob3jbOHrcym3dbi0a5ObenVT39iSAQ11xO8fgEj1yey/a3X2f+/4YvYOEU yoj77NrEreYJzgCA== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7844213BD4; Tue, 23 Nov 2021 01:31:29 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id vXA7DXFEnGHZcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:29 +0000 Subject: [PATCH 06/19] NFSD: narrow nfsd_mutex protection in nfsd thread From: NeilBrown To: "J. 
Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097545.7284.10961768975710390541.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org There is nothing happening in the start of nfsd() that requires protection by the mutex, so don't take it until shutting down the thread - which does still require protection - but only for nfsd_put(). Signed-off-by: NeilBrown --- fs/nfsd/nfssvc.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index e9c9fa820b17..097abd8b059c 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -932,9 +932,6 @@ nfsd(void *vrqstp) struct nfsd_net *nn = net_generic(net, nfsd_net_id); int err; - /* Lock module and set up kernel thread */ - mutex_lock(&nfsd_mutex); - /* At this point, the thread shares current->fs * with the init process. We need to create files with the * umask as defined by the client instead of init's umask. */ @@ -954,7 +951,6 @@ nfsd(void *vrqstp) allow_signal(SIGINT); allow_signal(SIGQUIT); - mutex_unlock(&nfsd_mutex); atomic_inc(&nfsdstats.th_cnt); set_freezable(); @@ -983,7 +979,6 @@ nfsd(void *vrqstp) flush_signals(current); atomic_dec(&nfsdstats.th_cnt); - mutex_lock(&nfsd_mutex); out: /* Take an extra ref so that the svc_put in svc_exit_thread() @@ -995,10 +990,11 @@ nfsd(void *vrqstp) svc_exit_thread(rqstp); /* Now if needed we call svc_destroy in appropriate context */ + mutex_lock(&nfsd_mutex); nfsd_put(net); + mutex_unlock(&nfsd_mutex); /* Release module */ - mutex_unlock(&nfsd_mutex); module_put_and_exit(0); return 0; } From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633313 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51F5CC433F5 for ; Tue, 23 Nov 2021 01:31:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231502AbhKWBeo (ORCPT ); Mon, 22 Nov 2021 20:34:44 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37110 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBeo (ORCPT ); Mon, 22 Nov 2021 20:34:44 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 4D7B21FD39; Tue, 23 Nov 2021 01:31:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631096; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1mZ9QCJmmpOfHkIM+MgeCLE+cunQFH91V1AI+zif9pc=; b=eoKscLwhGwOTOQUAg0FHSdHEJxySJJzaXM+fMAeZauPCe2q09TBXWYjXFHfXK4e1b39Uza K0vgCilQUCA41PArWN/VA+Le0lk7YsLSqlGvVcHHEG+cMEy1FYemSItO0QMP0w2upuh7hI lIQne3YrjSTFyqQqpBStncQffSFURxo= DKIM-Signature: v=1; 
a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631096; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1mZ9QCJmmpOfHkIM+MgeCLE+cunQFH91V1AI+zif9pc=; b=xT8KbwWTachsmQqQDhhLvgxG5GHtXtvkXKF2/DJ27VH6VPfVl1zXtlHhBbSU9Jl0N1xXE+ M034ZqlnyY/M0wCg== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 289E013BD4; Tue, 23 Nov 2021 01:31:34 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id AXLSNHZEnGHdcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:34 +0000 Subject: [PATCH 07/19] NFSD: Make it possible to use svc_set_num_threads_sync From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097545.7284.3682897670212307971.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org nfsd cannot currently use svc_set_num_threads_sync. It instead uses svc_set_num_threads which does *not* wait for threads to all exit, and has a separate mechanism (nfsd_shutdown_complete) to wait for completion. The reason that nfsd is unlike other services is that nfsd threads can exit separately from svc_set_num_threads being called - they die on receipt of SIGKILL. Also, when the last thread exits, the service must be shut down (sockets closed). For this, the nfsd_mutex needs to be taken, and as that mutex needs to be held while svc_set_num_threads is called, the one cannot wait for the other. This patch changes the nfsd thread so that it can drop the ref on the service without blocking on nfsd_mutex, so that svc_set_num_threads_sync can be used: - if it can drop a non-last reference, it does that. This does not trigger shutdown and does not require a mutex. This will likely happen for all but the last thread signalled, and for all threads being shut down by nfsd_shutdown_threads() - if it can get the mutex without blocking (trylock), it does that and then drops the reference. This will likely happen for the last thread killed by SIGKILL - Otherwise there might be an unrelated task holding the mutex, possibly in another network namespace, or nfsd_shutdown_threads() might be just about to get a reference on the service, after which we can drop ours safely. We cannot conveniently get wakeup notifications on these events, and we are unlikely to need to, so we sleep briefly and check again. With this we can discard nfsd_shutdown_complete and nfsd_complete_shutdown(), and switch to svc_set_num_threads_sync. 
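
The drop-a-reference path above boils down to a short retry loop around refcount_dec_not_one(), which decrements a refcount only when the result cannot reach zero, so a non-final reference can be dropped without taking any lock. A minimal stand-alone sketch of that loop follows; 'struct foo', foo_release() and the mutex parameter are illustrative stand-ins rather than real nfsd names (the actual patch uses svc_put_not_last() and nfsd_put(), as the diff below shows):

	#include <linux/refcount.h>
	#include <linux/mutex.h>
	#include <linux/delay.h>

	struct foo {
		refcount_t ref;
	};

	/* Stand-in for the shutdown work that must run under the lock. */
	static void foo_release(struct foo *obj);

	static void foo_put_nonblocking(struct foo *obj, struct mutex *lock)
	{
		while (!refcount_dec_not_one(&obj->ref)) {
			/* This may be the last reference, so it must be
			 * dropped under the lock - but only take the lock
			 * if that cannot block, to avoid the deadlock
			 * described above.
			 */
			if (mutex_trylock(lock)) {
				if (refcount_dec_and_test(&obj->ref))
					foo_release(obj);
				mutex_unlock(lock);
				return;
			}
			/* Lock is busy: sleep briefly and try again. */
			msleep(20);
		}
	}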
Signed-off-by: NeilBrown --- fs/nfsd/netns.h | 3 --- fs/nfsd/nfssvc.c | 41 ++++++++++++++++++++--------------------- include/linux/sunrpc/svc.h | 13 +++++++++++++ 3 files changed, 33 insertions(+), 24 deletions(-) diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h index 08bcd8f23b01..1fd59eb0730b 100644 --- a/fs/nfsd/netns.h +++ b/fs/nfsd/netns.h @@ -134,9 +134,6 @@ struct nfsd_net { wait_queue_head_t ntf_wq; atomic_t ntf_refcnt; - /* Allow umount to wait for nfsd state cleanup */ - struct completion nfsd_shutdown_complete; - /* * clientid and stateid data for construction of net unique COPY * stateids. diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 097abd8b059c..d0d9107a1b93 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -593,20 +593,10 @@ static const struct svc_serv_ops nfsd_thread_sv_ops = { .svo_shutdown = nfsd_last_thread, .svo_function = nfsd, .svo_enqueue_xprt = svc_xprt_do_enqueue, - .svo_setup = svc_set_num_threads, + .svo_setup = svc_set_num_threads_sync, .svo_module = THIS_MODULE, }; -static void nfsd_complete_shutdown(struct net *net) -{ - struct nfsd_net *nn = net_generic(net, nfsd_net_id); - - WARN_ON(!mutex_is_locked(&nfsd_mutex)); - - nn->nfsd_serv = NULL; - complete(&nn->nfsd_shutdown_complete); -} - void nfsd_shutdown_threads(struct net *net) { struct nfsd_net *nn = net_generic(net, nfsd_net_id); @@ -624,8 +614,6 @@ void nfsd_shutdown_threads(struct net *net) serv->sv_ops->svo_setup(serv, NULL, 0); nfsd_put(net); mutex_unlock(&nfsd_mutex); - /* Wait for shutdown of nfsd_serv to complete */ - wait_for_completion(&nn->nfsd_shutdown_complete); } bool i_am_nfsd(void) @@ -650,7 +638,6 @@ int nfsd_create_serv(struct net *net) &nfsd_thread_sv_ops); if (nn->nfsd_serv == NULL) return -ENOMEM; - init_completion(&nn->nfsd_shutdown_complete); nn->nfsd_serv->sv_maxconn = nn->max_connections; error = svc_bind(nn->nfsd_serv, net); @@ -659,7 +646,7 @@ int nfsd_create_serv(struct net *net) * been set up yet. */ svc_put(nn->nfsd_serv); - nfsd_complete_shutdown(net); + nn->nfsd_serv = NULL; return error; } @@ -715,7 +702,7 @@ void nfsd_put(struct net *net) if (kref_put(&nn->nfsd_serv->sv_refcnt, nfsd_noop)) { svc_shutdown_net(nn->nfsd_serv, net); svc_destroy(&nn->nfsd_serv->sv_refcnt); - nfsd_complete_shutdown(net); + nn->nfsd_serv = NULL; } } @@ -743,7 +730,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net) if (tot > NFSD_MAXSERVS) { /* total too large: scale down requested numbers */ for (i = 0; i < n && tot > 0; i++) { - int new = nthreads[i] * NFSD_MAXSERVS / tot; + int new = nthreads[i] * NFSD_MAXSERVS / tot; tot -= (nthreads[i] - new); nthreads[i] = new; } @@ -989,10 +976,22 @@ nfsd(void *vrqstp) /* Release the thread */ svc_exit_thread(rqstp); - /* Now if needed we call svc_destroy in appropriate context */ - mutex_lock(&nfsd_mutex); - nfsd_put(net); - mutex_unlock(&nfsd_mutex); + /* We need to drop a ref, but may not drop the last reference + * without holding nfsd_mutex, and we cannot wait for nfsd_mutex as that + * could deadlock with nfsd_shutdown_threads() waiting for us. 
+ * So three options are: + * - drop a non-final reference, + * - get the mutex without waiting + * - sleep briefly andd try the above again + */ + while (!svc_put_not_last(nn->nfsd_serv)) { + if (mutex_trylock(&nfsd_mutex)) { + nfsd_put(net); + mutex_unlock(&nfsd_mutex); + break; + } + msleep(20); + } /* Release module */ module_put_and_exit(0); diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 3903b4ae8ac5..36bfc0281988 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -141,6 +141,19 @@ static inline void svc_put(struct svc_serv *serv) kref_put(&serv->sv_refcnt, svc_destroy); } +/** + * svc_put_not_last - decrement non-final reference count on SUNRPC serv + * @serv: the svc_serv to have count decremented + * + * Returns: %true is refcount was decremented. + * + * If the refcount is 1, it is not decremented and instead failure is reported. + */ +static inline bool svc_put_not_last(struct svc_serv *serv) +{ + return refcount_dec_not_one(&serv->sv_refcnt.refcount); +} + /* * Maximum payload size supported by a kernel RPC server. * This is use to determine the max number of pages nfsd is From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633315 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 895D2C433F5 for ; Tue, 23 Nov 2021 01:31:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229776AbhKWBev (ORCPT ); Mon, 22 Nov 2021 20:34:51 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37120 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBeu (ORCPT ); Mon, 22 Nov 2021 20:34:50 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 7E4A01FD39; Tue, 23 Nov 2021 01:31:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631102; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SDmTbPDwa5E0x4TGMnc+iOuJ54NJ5XIf5V3R6S6BAIs=; b=G44XSsFGT1aM7Ocn6mT8L7rn8AuoE48cxUmDb/OFw3z/QeLtRuBx1nTp5SnC2XBbMz6T8n o2UnCW9Hueuub4b+Q1qpj/Hz7I1m0jkoPcjRKxkTVtA/VEGuZBnstOwExn2yM+5ax4v7WF 3+0h5HrrTVgNYjyI4Y6RgADwQNSXxls= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631102; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SDmTbPDwa5E0x4TGMnc+iOuJ54NJ5XIf5V3R6S6BAIs=; b=MUscU5ma62S/qEOysHW+19TH2wVXLRplQn9I1Wnyc9Uh7CU3TSNuNEQzppZw7Q50+II0tR tMk9Vd3y+oSYr0Dw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate 
requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6410C13BD4; Tue, 23 Nov 2021 01:31:41 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id 6tEMCX1EnGHkcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:41 +0000 Subject: [PATCH 08/19] SUNRPC: discard svo_setup and rename svc_set_num_threads_sync() From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097546.7284.17574156274358076119.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The ->svo_setup callback serves no purpose. It is always called from within the same module that chooses which callback is needed. So discard it and call the relevant function directly. Now that svc_set_num_threads() is no longer used remove it and rename svc_set_num_threads_sync() to remove the "_sync" suffix. Signed-off-by: NeilBrown --- fs/nfs/callback.c | 8 +++---- fs/nfsd/nfssvc.c | 11 ++++------ include/linux/sunrpc/svc.h | 4 ---- net/sunrpc/svc.c | 49 ++------------------------------------------ 4 files changed, 10 insertions(+), 62 deletions(-) diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c index d9d78ffd1d65..6cdc9d18a7dd 100644 --- a/fs/nfs/callback.c +++ b/fs/nfs/callback.c @@ -172,9 +172,9 @@ static int nfs_callback_start_svc(int minorversion, struct rpc_xprt *xprt, if (serv->sv_nrthreads == nrservs) return 0; - ret = serv->sv_ops->svo_setup(serv, NULL, nrservs); + ret = svc_set_num_threads(serv, NULL, nrservs); if (ret) { - serv->sv_ops->svo_setup(serv, NULL, 0); + svc_set_num_threads(serv, NULL, 0); return ret; } dprintk("nfs_callback_up: service started\n"); @@ -235,14 +235,12 @@ static int nfs_callback_up_net(int minorversion, struct svc_serv *serv, static const struct svc_serv_ops nfs40_cb_sv_ops = { .svo_function = nfs4_callback_svc, .svo_enqueue_xprt = svc_xprt_do_enqueue, - .svo_setup = svc_set_num_threads_sync, .svo_module = THIS_MODULE, }; #if defined(CONFIG_NFS_V4_1) static const struct svc_serv_ops nfs41_cb_sv_ops = { .svo_function = nfs41_callback_svc, .svo_enqueue_xprt = svc_xprt_do_enqueue, - .svo_setup = svc_set_num_threads_sync, .svo_module = THIS_MODULE, }; @@ -357,7 +355,7 @@ void nfs_callback_down(int minorversion, struct net *net) cb_info->users--; if (cb_info->users == 0) { svc_get(serv); - serv->sv_ops->svo_setup(serv, NULL, 0); + svc_set_num_threads(serv, NULL, 0); svc_put(serv); dprintk("nfs_callback_down: service destroyed\n"); cb_info->serv = NULL; diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index d0d9107a1b93..020156e96bdb 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -593,7 +593,6 @@ static const struct svc_serv_ops nfsd_thread_sv_ops = { .svo_shutdown = nfsd_last_thread, .svo_function = nfsd, .svo_enqueue_xprt = svc_xprt_do_enqueue, - .svo_setup = svc_set_num_threads_sync, .svo_module = THIS_MODULE, }; @@ -611,7 +610,7 @@ void nfsd_shutdown_threads(struct net *net) svc_get(serv); /* Kill outstanding nfsd threads */ - serv->sv_ops->svo_setup(serv, NULL, 0); + svc_set_num_threads(serv, NULL, 0); nfsd_put(net); mutex_unlock(&nfsd_mutex); } @@ -750,8 +749,9 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net) /* apply the new numbers */ svc_get(nn->nfsd_serv); for (i = 0; i < n; i++) { - err = 
nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv, - &nn->nfsd_serv->sv_pools[i], nthreads[i]); + err = svc_set_num_threads(nn->nfsd_serv, + &nn->nfsd_serv->sv_pools[i], + nthreads[i]); if (err) break; } @@ -793,8 +793,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred) error = nfsd_startup_net(net, cred); if (error) goto out_put; - error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv, - NULL, nrservs); + error = svc_set_num_threads(nn->nfsd_serv, NULL, nrservs); if (error) goto out_shutdown; error = nn->nfsd_serv->sv_nrthreads; diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 36bfc0281988..0b38c6eaf985 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -64,9 +64,6 @@ struct svc_serv_ops { /* queue up a transport for servicing */ void (*svo_enqueue_xprt)(struct svc_xprt *); - /* set up thread (or whatever) execution context */ - int (*svo_setup)(struct svc_serv *, struct svc_pool *, int); - /* optional module to count when adding threads (pooled svcs only) */ struct module *svo_module; }; @@ -541,7 +538,6 @@ void svc_pool_map_put(void); struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int, const struct svc_serv_ops *); int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int); -int svc_set_num_threads_sync(struct svc_serv *, struct svc_pool *, int); int svc_pool_stats_open(struct svc_serv *serv, struct file *file); void svc_shutdown_net(struct svc_serv *, struct net *); int svc_process(struct svc_rqst *); diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 2b2042234e4b..5513f8c9a8d6 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -743,58 +743,13 @@ svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) return 0; } - -/* destroy old threads */ -static int -svc_signal_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) -{ - struct task_struct *task; - unsigned int state = serv->sv_nrthreads-1; - - /* destroy old threads */ - do { - task = choose_victim(serv, pool, &state); - if (task == NULL) - break; - send_sig(SIGINT, task, 1); - nrservs++; - } while (nrservs < 0); - - return 0; -} - /* * Create or destroy enough new threads to make the number * of threads the given number. If `pool' is non-NULL, applies * only to threads in that pool, otherwise round-robins between * all pools. Caller must ensure that mutual exclusion between this and * server startup or shutdown. - * - * Destroying threads relies on the service threads filling in - * rqstp->rq_task, which only the nfs ones do. Assumes the serv - * has been created using svc_create_pooled(). - * - * Based on code that used to be in nfsd_svc() but tweaked - * to be pool-aware. 
*/ -int -svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) -{ - if (pool == NULL) { - nrservs -= serv->sv_nrthreads; - } else { - spin_lock_bh(&pool->sp_lock); - nrservs -= pool->sp_nrthreads; - spin_unlock_bh(&pool->sp_lock); - } - - if (nrservs > 0) - return svc_start_kthreads(serv, pool, nrservs); - if (nrservs < 0) - return svc_signal_kthreads(serv, pool, nrservs); - return 0; -} -EXPORT_SYMBOL_GPL(svc_set_num_threads); /* destroy old threads */ static int @@ -815,7 +770,7 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) } int -svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrservs) +svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) { if (pool == NULL) { nrservs -= serv->sv_nrthreads; @@ -831,7 +786,7 @@ svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrser return svc_stop_kthreads(serv, pool, nrservs); return 0; } -EXPORT_SYMBOL_GPL(svc_set_num_threads_sync); +EXPORT_SYMBOL_GPL(svc_set_num_threads); /** * svc_rqst_replace_page - Replace one page in rq_pages[] From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633317 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B4B2C433F5 for ; Tue, 23 Nov 2021 01:31:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231601AbhKWBe5 (ORCPT ); Mon, 22 Nov 2021 20:34:57 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37138 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBe5 (ORCPT ); Mon, 22 Nov 2021 20:34:57 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 512541FD39; Tue, 23 Nov 2021 01:31:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631109; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=T2s6b5kduBrx/FLQYHde/UjGKIUafDNxbBloTAKLa70=; b=KSThibDRbq9H2VtEISKTdh0UELCRcouIv5h1AORYJiuQOaCcNRPFQABbli8yfqAbrijiN2 igh/zR1YeHLu6BgRmsrfYjV3t4zpJoZt3DT1EGpYJHHlsnV6tveq4d/zC6CaFyy3IxDh0b ejDk52CuAjG1AuB2XQCpyz94iQANYNs= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631109; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=T2s6b5kduBrx/FLQYHde/UjGKIUafDNxbBloTAKLa70=; b=GsqGv7vJyIjkXLoyy7fdR0P0cXRgQyduOJiRASO/DwDVszj9M4LI2ymAPaf+rPbkQAjFf1 fJFLJM9rneQ59bDQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by 
imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 438A113BD4; Tue, 23 Nov 2021 01:31:48 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id JYErAYREnGHqcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:48 +0000 Subject: [PATCH 09/19] NFSD: simplify locking for network notifier. From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097547.7284.1138288029197434533.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org nfsd currently maintains an open-coded read/write semaphore (refcount and wait queue) for each network namespace to ensure the nfs service isn't shut down while the notifier is running. This is excessive. As there is unlikely to be contention between notifiers and they run without sleeping, a single spinlock is sufficient to avoid problems. Signed-off-by: NeilBrown --- fs/nfsd/netns.h | 3 --- fs/nfsd/nfsctl.c | 2 -- fs/nfsd/nfssvc.c | 38 ++++++++++++++++++++------------------ 3 files changed, 20 insertions(+), 23 deletions(-) diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h index 1fd59eb0730b..021acdc0d03b 100644 --- a/fs/nfsd/netns.h +++ b/fs/nfsd/netns.h @@ -131,9 +131,6 @@ struct nfsd_net { */ int keep_active; - wait_queue_head_t ntf_wq; - atomic_t ntf_refcnt; - /* * clientid and stateid data for construction of net unique COPY * stateids. diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index 2bbc26fbdae8..376862cf2f14 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -1483,8 +1483,6 @@ static __net_init int nfsd_init_net(struct net *net) nn->clientid_counter = nn->clientid_base + 1; nn->s2s_cp_cl_id = nn->clientid_counter++; - atomic_set(&nn->ntf_refcnt, 0); - init_waitqueue_head(&nn->ntf_wq); seqlock_init(&nn->boot_lock); return 0; diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 020156e96bdb..070525fbc1ad 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -434,6 +434,7 @@ static void nfsd_shutdown_net(struct net *net) nfsd_shutdown_generic(); } +DEFINE_SPINLOCK(nfsd_notifier_lock); static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event, void *ptr) { @@ -443,18 +444,17 @@ static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event, struct nfsd_net *nn = net_generic(net, nfsd_net_id); struct sockaddr_in sin; - if ((event != NETDEV_DOWN) || - !atomic_inc_not_zero(&nn->ntf_refcnt)) + if (event != NETDEV_DOWN || !nn->nfsd_serv) goto out; + spin_lock(&nfsd_notifier_lock); if (nn->nfsd_serv) { dprintk("nfsd_inetaddr_event: removed %pI4\n", &ifa->ifa_local); sin.sin_family = AF_INET; sin.sin_addr.s_addr = ifa->ifa_local; svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin); } - atomic_dec(&nn->ntf_refcnt); - wake_up(&nn->ntf_wq); + spin_unlock(&nfsd_notifier_lock); out: return NOTIFY_DONE; @@ -474,10 +474,10 @@ static int nfsd_inet6addr_event(struct notifier_block *this, struct nfsd_net *nn = net_generic(net, nfsd_net_id); struct sockaddr_in6 sin6; - if ((event != NETDEV_DOWN) || - !atomic_inc_not_zero(&nn->ntf_refcnt)) + if (event != NETDEV_DOWN || !nn->nfsd_serv) goto out; + spin_lock(&nfsd_notifier_lock); if (nn->nfsd_serv) { dprintk("nfsd_inet6addr_event: removed %pI6\n", &ifa->addr); sin6.sin6_family = AF_INET6; @@ -486,8 +486,8 @@ static int 
nfsd_inet6addr_event(struct notifier_block *this, sin6.sin6_scope_id = ifa->idev->dev->ifindex; svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin6); } - atomic_dec(&nn->ntf_refcnt); - wake_up(&nn->ntf_wq); + spin_unlock(&nfsd_notifier_lock); + out: return NOTIFY_DONE; } @@ -504,7 +504,6 @@ static void nfsd_last_thread(struct svc_serv *serv, struct net *net) { struct nfsd_net *nn = net_generic(net, nfsd_net_id); - atomic_dec(&nn->ntf_refcnt); /* check if the notifier still has clients */ if (atomic_dec_return(&nfsd_notifier_refcount) == 0) { unregister_inetaddr_notifier(&nfsd_inetaddr_notifier); @@ -512,7 +511,6 @@ static void nfsd_last_thread(struct svc_serv *serv, struct net *net) unregister_inet6addr_notifier(&nfsd_inet6addr_notifier); #endif } - wait_event(nn->ntf_wq, atomic_read(&nn->ntf_refcnt) == 0); /* * write_ports can create the server without actually starting @@ -624,6 +622,7 @@ int nfsd_create_serv(struct net *net) { int error; struct nfsd_net *nn = net_generic(net, nfsd_net_id); + struct svc_serv *serv; WARN_ON(!mutex_is_locked(&nfsd_mutex)); if (nn->nfsd_serv) { @@ -633,21 +632,23 @@ int nfsd_create_serv(struct net *net) if (nfsd_max_blksize == 0) nfsd_max_blksize = nfsd_get_default_max_blksize(); nfsd_reset_versions(nn); - nn->nfsd_serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize, - &nfsd_thread_sv_ops); - if (nn->nfsd_serv == NULL) + serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize, + &nfsd_thread_sv_ops); + if (serv == NULL) return -ENOMEM; - nn->nfsd_serv->sv_maxconn = nn->max_connections; - error = svc_bind(nn->nfsd_serv, net); + serv->sv_maxconn = nn->max_connections; + error = svc_bind(serv, net); if (error < 0) { /* NOT nfsd_put() as notifiers (see below) haven't * been set up yet. */ - svc_put(nn->nfsd_serv); - nn->nfsd_serv = NULL; + svc_put(serv); return error; } + spin_lock(&nfsd_notifier_lock); + nn->nfsd_serv = serv; + spin_unlock(&nfsd_notifier_lock); set_max_drc(); /* check if the notifier is already set */ @@ -657,7 +658,6 @@ int nfsd_create_serv(struct net *net) register_inet6addr_notifier(&nfsd_inet6addr_notifier); #endif } - atomic_inc(&nn->ntf_refcnt); nfsd_reset_boot_verifier(nn); return 0; } @@ -701,7 +701,9 @@ void nfsd_put(struct net *net) if (kref_put(&nn->nfsd_serv->sv_refcnt, nfsd_noop)) { svc_shutdown_net(nn->nfsd_serv, net); svc_destroy(&nn->nfsd_serv->sv_refcnt); + spin_lock(&nfsd_notifier_lock); nn->nfsd_serv = NULL; + spin_unlock(&nfsd_notifier_lock); } } From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DE7E4C433EF for ; Tue, 23 Nov 2021 01:31:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231666AbhKWBfE (ORCPT ); Mon, 22 Nov 2021 20:35:04 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37146 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBfD (ORCPT ); Mon, 22 Nov 2021 20:35:03 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de 
(Postfix) with ESMTPS id C33151FD39; Tue, 23 Nov 2021 01:31:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631115; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LtR+h2/SHobdjw6xJn7imJnV1Ye8cBQSZWUn0dvaWP0=; b=1wE+Hz64io3doMFsE20ksU8FEnbMPYk81v6fgkhfhbL6XZJrcW3WKYoqaYSpNzHEXIk4El kGpgp01bf6q6AZLsiY5xGPM6lIBX6FAnsZirJPYOYrHyyAidy8mDcu1lu6x1xKpOc+nMvd iEZNBEu8GSDuoumE1r2TnQ6UWC9fKWM= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631115; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LtR+h2/SHobdjw6xJn7imJnV1Ye8cBQSZWUn0dvaWP0=; b=r2qHKE8SpiDY5C2MYeCXe+/v4tS2OjgpCTU9gYpDmJATWGAGVXwh9M9ITDkrG71wq99QHQ zb2ke9Rc6QclAoBQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9C85513BD4; Tue, 23 Nov 2021 01:31:54 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id QrBzFYpEnGHwcwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:54 +0000 Subject: [PATCH 10/19] lockd: introduce nlmsvc_serv From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097547.7284.4982400127908906020.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org lockd has two globals - nlmsvc_task and nlmsvc_rqst - but mostly it wants the 'struct svc_serv', and when it needs something else it can get to it from the serv. This patch is a first step towards removing nlmsvc_task and nlmsvc_rqst. It introduces nlmsvc_serv to store the 'struct svc_serv*'. This is set as soon as the serv is created, and cleared only when it is destroyed.
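In outline (a sketch for illustration only, not part of the diff below; all names are the ones used in this patch), the lifetime of the new global is:

	serv = svc_create(&nlmsvc_program, LOCKD_BUFSIZE, &lockd_sv_ops);
	if (!serv)
		return -ENOMEM;
	nlmsvc_serv = serv;	/* set as soon as the serv exists */
	...
	nlmsvc_serv = NULL;	/* cleared only when the serv is torn down */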
Signed-off-by: NeilBrown --- fs/lockd/svc.c | 36 ++++++++++++++++++++---------------- 1 file changed, 20 insertions(+), 16 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index a9669b106dbd..83874878f41d 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -54,6 +54,7 @@ EXPORT_SYMBOL_GPL(nlmsvc_ops); static DEFINE_MUTEX(nlmsvc_mutex); static unsigned int nlmsvc_users; +static struct svc_serv *nlmsvc_serv; static struct task_struct *nlmsvc_task; static struct svc_rqst *nlmsvc_rqst; unsigned long nlmsvc_timeout; @@ -306,13 +307,12 @@ static int lockd_inetaddr_event(struct notifier_block *this, !atomic_inc_not_zero(&nlm_ntf_refcnt)) goto out; - if (nlmsvc_rqst) { + if (nlmsvc_serv) { dprintk("lockd_inetaddr_event: removed %pI4\n", &ifa->ifa_local); sin.sin_family = AF_INET; sin.sin_addr.s_addr = ifa->ifa_local; - svc_age_temp_xprts_now(nlmsvc_rqst->rq_server, - (struct sockaddr *)&sin); + svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin); } atomic_dec(&nlm_ntf_refcnt); wake_up(&nlm_ntf_wq); @@ -336,14 +336,13 @@ static int lockd_inet6addr_event(struct notifier_block *this, !atomic_inc_not_zero(&nlm_ntf_refcnt)) goto out; - if (nlmsvc_rqst) { + if (nlmsvc_serv) { dprintk("lockd_inet6addr_event: removed %pI6\n", &ifa->addr); sin6.sin6_family = AF_INET6; sin6.sin6_addr = ifa->addr; if (ipv6_addr_type(&sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL) sin6.sin6_scope_id = ifa->idev->dev->ifindex; - svc_age_temp_xprts_now(nlmsvc_rqst->rq_server, - (struct sockaddr *)&sin6); + svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin6); } atomic_dec(&nlm_ntf_refcnt); wake_up(&nlm_ntf_wq); @@ -423,15 +422,17 @@ static const struct svc_serv_ops lockd_sv_ops = { .svo_enqueue_xprt = svc_xprt_do_enqueue, }; -static struct svc_serv *lockd_create_svc(void) +static int lockd_create_svc(void) { struct svc_serv *serv; /* * Check whether we're already up and running. 
*/ - if (nlmsvc_rqst) - return svc_get(nlmsvc_rqst->rq_server); + if (nlmsvc_serv) { + svc_get(nlmsvc_serv); + return 0; + } /* * Sanity check: if there's no pid, @@ -448,14 +449,15 @@ static struct svc_serv *lockd_create_svc(void) serv = svc_create(&nlmsvc_program, LOCKD_BUFSIZE, &lockd_sv_ops); if (!serv) { printk(KERN_WARNING "lockd_up: create service failed\n"); - return ERR_PTR(-ENOMEM); + return -ENOMEM; } + nlmsvc_serv = serv; register_inetaddr_notifier(&lockd_inetaddr_notifier); #if IS_ENABLED(CONFIG_IPV6) register_inet6addr_notifier(&lockd_inet6addr_notifier); #endif dprintk("lockd_up: service created\n"); - return serv; + return 0; } /* @@ -468,11 +470,10 @@ int lockd_up(struct net *net, const struct cred *cred) mutex_lock(&nlmsvc_mutex); - serv = lockd_create_svc(); - if (IS_ERR(serv)) { - error = PTR_ERR(serv); + error = lockd_create_svc(); + if (error) goto err_create; - } + serv = nlmsvc_serv; error = lockd_up_net(serv, net, cred); if (error < 0) { @@ -487,6 +488,8 @@ int lockd_up(struct net *net, const struct cred *cred) } nlmsvc_users++; err_put: + if (nlmsvc_users == 0) + nlmsvc_serv = NULL; svc_put(serv); err_create: mutex_unlock(&nlmsvc_mutex); @@ -501,7 +504,7 @@ void lockd_down(struct net *net) { mutex_lock(&nlmsvc_mutex); - lockd_down_net(nlmsvc_rqst->rq_server, net); + lockd_down_net(nlmsvc_serv, net); if (nlmsvc_users) { if (--nlmsvc_users) goto out; @@ -519,6 +522,7 @@ lockd_down(struct net *net) dprintk("lockd_down: service stopped\n"); lockd_svc_exit_thread(); dprintk("lockd_down: service destroyed\n"); + nlmsvc_serv = NULL; nlmsvc_task = NULL; nlmsvc_rqst = NULL; out: From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633321 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A54AEC433EF for ; Tue, 23 Nov 2021 01:32:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230214AbhKWBfJ (ORCPT ); Mon, 22 Nov 2021 20:35:09 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57886 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBfI (ORCPT ); Mon, 22 Nov 2021 20:35:08 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id D129E218B0; Tue, 23 Nov 2021 01:32:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631120; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SRODYG02CzjFzQL4XDmO8VeXkSyZ3ESGsTuHgv14Ty8=; b=i/5vy5vdBfdIAEAV5ZEnWKizE7428B2VtVKrE187BIye1bhMtiMkPAhjDKpvlxLXTY59m9 65YGzXe9Xv0iG7gDjUKG4VQ3y2Dd5qXDVn4HcDtYq6qxoWeA4C/P6DsmLWCA2fmbeRn9a9 8tsh3R1msGzWc+jOSk5dbGHRyWkCuhY= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631120; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SRODYG02CzjFzQL4XDmO8VeXkSyZ3ESGsTuHgv14Ty8=; b=CALl8v4lZ1qqH6pKd/xYECwcDT/Nqss6yHNc4gZUf9jhnhN9xbK+70pV0E5O542E6FT5un rnspWnCjrq0HgtDw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BACB513BD4; Tue, 23 Nov 2021 01:31:59 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id WMvRHY9EnGH+cwAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:31:59 +0000 Subject: [PATCH 11/19] lockd: simplify management of network status notifiers From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097547.7284.16391884357277786583.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Now that the network status notifiers use nlmsvc_serv rather than nlmsvc_rqst the management can be simplified. Notifier unregistration synchronises with any pending notifications, so provided we unregister before nlmsvc_serv is freed, no further interlock is required. So we move the unregister call to just before the thread is killed (which destroys the service) and just before the service is destroyed in the failure-path of lockd_up(). Then nlm_ntf_refcnt and nlm_ntf_wq can be removed.
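In outline (a sketch of the ordering being relied upon, not the full change; the symbols are those used by lockd), the shutdown path for the last user becomes:

	unregister_inetaddr_notifier(&lockd_inetaddr_notifier);
#if IS_ENABLED(CONFIG_IPV6)
	unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
#endif
	/* unregistration has synchronised with any running notifier,
	 * so nlmsvc_serv can no longer be seen by one after this point */
	kthread_stop(nlmsvc_task);	/* thread exit tears down the serv */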
Signed-off-by: NeilBrown --- fs/lockd/svc.c | 35 +++++++++-------------------------- 1 file changed, 9 insertions(+), 26 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 83874878f41d..20cebb191350 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -59,9 +59,6 @@ static struct task_struct *nlmsvc_task; static struct svc_rqst *nlmsvc_rqst; unsigned long nlmsvc_timeout; -static atomic_t nlm_ntf_refcnt = ATOMIC_INIT(0); -static DECLARE_WAIT_QUEUE_HEAD(nlm_ntf_wq); - unsigned int lockd_net_id; /* @@ -303,8 +300,7 @@ static int lockd_inetaddr_event(struct notifier_block *this, struct in_ifaddr *ifa = (struct in_ifaddr *)ptr; struct sockaddr_in sin; - if ((event != NETDEV_DOWN) || - !atomic_inc_not_zero(&nlm_ntf_refcnt)) + if (event != NETDEV_DOWN) goto out; if (nlmsvc_serv) { @@ -314,8 +310,6 @@ static int lockd_inetaddr_event(struct notifier_block *this, sin.sin_addr.s_addr = ifa->ifa_local; svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin); } - atomic_dec(&nlm_ntf_refcnt); - wake_up(&nlm_ntf_wq); out: return NOTIFY_DONE; @@ -332,8 +326,7 @@ static int lockd_inet6addr_event(struct notifier_block *this, struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr; struct sockaddr_in6 sin6; - if ((event != NETDEV_DOWN) || - !atomic_inc_not_zero(&nlm_ntf_refcnt)) + if (event != NETDEV_DOWN) goto out; if (nlmsvc_serv) { @@ -344,8 +337,6 @@ static int lockd_inet6addr_event(struct notifier_block *this, sin6.sin6_scope_id = ifa->idev->dev->ifindex; svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin6); } - atomic_dec(&nlm_ntf_refcnt); - wake_up(&nlm_ntf_wq); out: return NOTIFY_DONE; @@ -362,14 +353,6 @@ static void lockd_unregister_notifiers(void) #if IS_ENABLED(CONFIG_IPV6) unregister_inet6addr_notifier(&lockd_inet6addr_notifier); #endif - wait_event(nlm_ntf_wq, atomic_read(&nlm_ntf_refcnt) == 0); -} - -static void lockd_svc_exit_thread(void) -{ - atomic_dec(&nlm_ntf_refcnt); - lockd_unregister_notifiers(); - svc_exit_thread(nlmsvc_rqst); } static int lockd_start_svc(struct svc_serv *serv) @@ -388,11 +371,9 @@ static int lockd_start_svc(struct svc_serv *serv) printk(KERN_WARNING "lockd_up: svc_rqst allocation failed, error=%d\n", error); - lockd_unregister_notifiers(); goto out_rqst; } - atomic_inc(&nlm_ntf_refcnt); svc_sock_update_bufs(serv); serv->sv_maxconn = nlm_max_connections; @@ -410,7 +391,7 @@ static int lockd_start_svc(struct svc_serv *serv) return 0; out_task: - lockd_svc_exit_thread(); + svc_exit_thread(nlmsvc_rqst); nlmsvc_task = NULL; out_rqst: nlmsvc_rqst = NULL; @@ -477,7 +458,6 @@ int lockd_up(struct net *net, const struct cred *cred) error = lockd_up_net(serv, net, cred); if (error < 0) { - lockd_unregister_notifiers(); goto err_put; } @@ -488,8 +468,10 @@ int lockd_up(struct net *net, const struct cred *cred) } nlmsvc_users++; err_put: - if (nlmsvc_users == 0) + if (nlmsvc_users == 0) { + lockd_unregister_notifiers(); nlmsvc_serv = NULL; + } svc_put(serv); err_create: mutex_unlock(&nlmsvc_mutex); @@ -518,13 +500,14 @@ lockd_down(struct net *net) printk(KERN_ERR "lockd_down: no lockd running.\n"); BUG(); } + lockd_unregister_notifiers(); kthread_stop(nlmsvc_task); dprintk("lockd_down: service stopped\n"); - lockd_svc_exit_thread(); + svc_exit_thread(nlmsvc_rqst); + nlmsvc_rqst = NULL; dprintk("lockd_down: service destroyed\n"); nlmsvc_serv = NULL; nlmsvc_task = NULL; - nlmsvc_rqst = NULL; out: mutex_unlock(&nlmsvc_mutex); } From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633323 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A814C433EF for ; Tue, 23 Nov 2021 01:32:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231719AbhKWBfO (ORCPT ); Mon, 22 Nov 2021 20:35:14 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37156 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBfN (ORCPT ); Mon, 22 Nov 2021 20:35:13 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 54AE71FD39; Tue, 23 Nov 2021 01:32:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631125; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ksD387RvbGGdsJFIycbu4bFM8cYLiuD6K1tfwfwYy6I=; b=bv/gm5u5JXxB77g1NpnjfEWiPFJSRc0lZPhhSaXw7prVctCqoEKIRx4Nwdm+zQPGw40lYp fMaPnx18uPp3aYTM0fXBsOOSOtS6eHwye+nUiyP9QVzQMjP90rhIoQ/lm0uFsmlhJ0Nbbp 5fi1mlT7/jWVRZhlvSq2ALQX8XYU+hQ= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631125; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ksD387RvbGGdsJFIycbu4bFM8cYLiuD6K1tfwfwYy6I=; b=26yYGND/GKnHEEdidBfkpYwd84n8KWsWAX8j4qnXVjvbcb7mlV/qGyyT++YjqvyOo0shMg G/fujogjcbOnZCBQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 387CD13BD4; Tue, 23 Nov 2021 01:32:03 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id 5i8uOZNEnGEEdAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:03 +0000 Subject: [PATCH 12/19] lockd: move lockd_start_svc() call into lockd_create_svc() From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097548.7284.17750816308590904344.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org lockd_start_svc() only needs to be called once, just after the svc is created. If the start fails, the svc is discarded too. It thus makes sense to call lockd_start_svc() from lockd_create_svc(). This allows us to remove the test against nlmsvc_rqst at the start of lockd_start_svc() - it must always be NULL. lockd_up() only held an extra reference on the svc until a thread was created - then it dropped it. 
The thread - and thus the extra reference - will remain until kthread_stop() is called. Now that the thread is created in lockd_create_svc(), the extra reference can be dropped there. So the 'serv' variable is no longer needed in lockd_up(). Signed-off-by: NeilBrown --- fs/lockd/svc.c | 22 ++++++++++------------ 1 file changed, 10 insertions(+), 12 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 20cebb191350..91e7c839841e 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -359,9 +359,6 @@ static int lockd_start_svc(struct svc_serv *serv) { int error; - if (nlmsvc_rqst) - return 0; - /* * Create the kernel thread and wait for it to start. */ @@ -406,6 +403,7 @@ static const struct svc_serv_ops lockd_sv_ops = { static int lockd_create_svc(void) { struct svc_serv *serv; + int error; /* * Check whether we're already up and running. @@ -432,6 +430,13 @@ static int lockd_create_svc(void) printk(KERN_WARNING "lockd_up: create service failed\n"); return -ENOMEM; } + + error = lockd_start_svc(serv); + /* The thread now holds the only reference */ + svc_put(serv); + if (error < 0) + return error; + nlmsvc_serv = serv; register_inetaddr_notifier(&lockd_inetaddr_notifier); #if IS_ENABLED(CONFIG_IPV6) @@ -446,7 +451,6 @@ static int lockd_create_svc(void) */ int lockd_up(struct net *net, const struct cred *cred) { - struct svc_serv *serv; int error; mutex_lock(&nlmsvc_mutex); @@ -454,25 +458,19 @@ int lockd_up(struct net *net, const struct cred *cred) error = lockd_create_svc(); if (error) goto err_create; - serv = nlmsvc_serv; - error = lockd_up_net(serv, net, cred); + error = lockd_up_net(nlmsvc_serv, net, cred); if (error < 0) { goto err_put; } - error = lockd_start_svc(serv); - if (error < 0) { - lockd_down_net(serv, net); - goto err_put; - } nlmsvc_users++; err_put: if (nlmsvc_users == 0) { lockd_unregister_notifiers(); + kthread_stop(nlmsvc_task); nlmsvc_serv = NULL; } - svc_put(serv); err_create: mutex_unlock(&nlmsvc_mutex); return error; From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633325 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98DEFC433EF for ; Tue, 23 Nov 2021 01:32:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230306AbhKWBfV (ORCPT ); Mon, 22 Nov 2021 20:35:21 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57898 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230040AbhKWBfU (ORCPT ); Mon, 22 Nov 2021 20:35:20 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id B626E218B0; Tue, 23 Nov 2021 01:32:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631132; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/DdyxV03d1CRB6Sivyn6NSZWUBIVAtjE4QQl7RMvgGI=; 
b=LF0HjKdFUydGYzl0Vd/9baoj6chN+9r0jnJGsJz1pg890iohyyCPL4+9qpW8ITi5bLA3ZW k4tuZqR636osy9n2JUdlJzBI/+nNej4RmtH8gydE3z4oFX8VC9/q0NDUjmko6/V3W4ziwI FXsDje8sDb/Tuq3AgBfb41xVHTzwKz4= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631132; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/DdyxV03d1CRB6Sivyn6NSZWUBIVAtjE4QQl7RMvgGI=; b=TE4dvbKByJcnsML0t9GF7gAxBcOaUcJwoaP19pxe4Q3DhKfXkKqajSOPY0wuVJi7mGNwYD U9Rdh8yhhEVWl6CA== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9094013BD4; Tue, 23 Nov 2021 01:32:11 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id IyTkE5tEnGEGdAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:11 +0000 Subject: [PATCH 13/19] lockd: move svc_exit_thread() into the thread From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097548.7284.9342112716385146027.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org The normal place to call svc_exit_thread() is from the thread itself just before it exits. Do this for lockd. This means that nlmsvc_rqst is not used outside of lockd_start_svc(), so it can be made local to that function, and renamed to 'rqst'. Signed-off-by: NeilBrown --- fs/lockd/svc.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 91e7c839841e..9aa499a76159 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -56,7 +56,6 @@ static DEFINE_MUTEX(nlmsvc_mutex); static unsigned int nlmsvc_users; static struct svc_serv *nlmsvc_serv; static struct task_struct *nlmsvc_task; -static struct svc_rqst *nlmsvc_rqst; unsigned long nlmsvc_timeout; unsigned int lockd_net_id; @@ -182,6 +181,11 @@ lockd(void *vrqstp) nlm_shutdown_hosts(); cancel_delayed_work_sync(&ln->grace_period_end); locks_end_grace(&ln->lockd_manager); + + dprintk("lockd_down: service stopped\n"); + + svc_exit_thread(rqstp); + return 0; } @@ -358,13 +362,14 @@ static void lockd_unregister_notifiers(void) static int lockd_start_svc(struct svc_serv *serv) { int error; + struct svc_rqst *rqst; /* * Create the kernel thread and wait for it to start.
*/ - nlmsvc_rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE); - if (IS_ERR(nlmsvc_rqst)) { - error = PTR_ERR(nlmsvc_rqst); + rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE); + if (IS_ERR(rqst)) { + error = PTR_ERR(rqst); printk(KERN_WARNING "lockd_up: svc_rqst allocation failed, error=%d\n", error); @@ -374,24 +379,23 @@ static int lockd_start_svc(struct svc_serv *serv) svc_sock_update_bufs(serv); serv->sv_maxconn = nlm_max_connections; - nlmsvc_task = kthread_create(lockd, nlmsvc_rqst, "%s", serv->sv_name); + nlmsvc_task = kthread_create(lockd, rqst, "%s", serv->sv_name); if (IS_ERR(nlmsvc_task)) { error = PTR_ERR(nlmsvc_task); printk(KERN_WARNING "lockd_up: kthread_run failed, error=%d\n", error); goto out_task; } - nlmsvc_rqst->rq_task = nlmsvc_task; + rqst->rq_task = nlmsvc_task; wake_up_process(nlmsvc_task); dprintk("lockd_up: service started\n"); return 0; out_task: - svc_exit_thread(nlmsvc_rqst); + svc_exit_thread(rqst); nlmsvc_task = NULL; out_rqst: - nlmsvc_rqst = NULL; return error; } @@ -500,9 +504,6 @@ lockd_down(struct net *net) } lockd_unregister_notifiers(); kthread_stop(nlmsvc_task); - dprintk("lockd_down: service stopped\n"); - svc_exit_thread(nlmsvc_rqst); - nlmsvc_rqst = NULL; dprintk("lockd_down: service destroyed\n"); nlmsvc_serv = NULL; nlmsvc_task = NULL; From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633327 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6792DC433F5 for ; Tue, 23 Nov 2021 01:32:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230040AbhKWBf1 (ORCPT ); Mon, 22 Nov 2021 20:35:27 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57932 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229874AbhKWBf1 (ORCPT ); Mon, 22 Nov 2021 20:35:27 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 857A4218B0; Tue, 23 Nov 2021 01:32:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631139; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=g/3zS1w9XPAB0sKrywf5beu0ueeIr7R6pDKbEKsXTPc=; b=beNwV92Z7vpxYnolbpkMahjFsLukUlosgj+lCH/qLdZvuFJaajGtGZPjjBXM7G88z62xb5 o7j/D1WvDlz7CZ8UBAmCLdPz41TURaQmATY6WoKsNfr/ePX+2F3SPmeT0WA1ONtuJqJbd2 yFleOze1ueCFp3ylrfikMbAS1BGv6sE= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631139; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=g/3zS1w9XPAB0sKrywf5beu0ueeIr7R6pDKbEKsXTPc=; b=KEOU1eO74MOkTelgioGA/zG1cfvgvc6blYXyOxdUw3JiGcGUYtvqYl0dRIhPHXEmnjDDI1 6AQqxTwCVgOPbQBw== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de 
[192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7193E13BD4; Tue, 23 Nov 2021 01:32:18 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id Iuj3C6JEnGETdAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:18 +0000 Subject: [PATCH 14/19] lockd: introduce lockd_put() From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097549.7284.13864576118395419236.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org There is some cleanup that is duplicated in lockd_down() and the failure path of lockd_up(). Factor these out into a new lockd_put() and call it from both places. lockd_put() does *not* take the mutex - that must be held by the caller. It decrements nlmsvc_users and if that reaches zero, it cleans up. Signed-off-by: NeilBrown --- fs/lockd/svc.c | 64 ++++++++++++++++++++++++-------------------------------- 1 file changed, 27 insertions(+), 37 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 9aa499a76159..7f12c280fd30 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -351,14 +351,6 @@ static struct notifier_block lockd_inet6addr_notifier = { }; #endif -static void lockd_unregister_notifiers(void) -{ - unregister_inetaddr_notifier(&lockd_inetaddr_notifier); -#if IS_ENABLED(CONFIG_IPV6) - unregister_inet6addr_notifier(&lockd_inet6addr_notifier); -#endif -} - static int lockd_start_svc(struct svc_serv *serv) { int error; @@ -450,6 +442,27 @@ static int lockd_create_svc(void) return 0; } +static void lockd_put(void) +{ + if (WARN(nlmsvc_users <= 0, "lockd_down: no users!\n")) + return; + if (--nlmsvc_users) + return; + + unregister_inetaddr_notifier(&lockd_inetaddr_notifier); +#if IS_ENABLED(CONFIG_IPV6) + unregister_inet6addr_notifier(&lockd_inet6addr_notifier); +#endif + + if (nlmsvc_task) { + kthread_stop(nlmsvc_task); + dprintk("lockd_down: service stopped\n"); + nlmsvc_task = NULL; + } + nlmsvc_serv = NULL; + dprintk("lockd_down: service destroyed\n"); +} + /* * Bring up the lockd process if it's not already up. */ @@ -461,21 +474,16 @@ int lockd_up(struct net *net, const struct cred *cred) error = lockd_create_svc(); if (error) - goto err_create; + goto err; + nlmsvc_users++; error = lockd_up_net(nlmsvc_serv, net, cred); if (error < 0) { - goto err_put; + lockd_put(); + goto err; } - nlmsvc_users++; -err_put: - if (nlmsvc_users == 0) { - lockd_unregister_notifiers(); - kthread_stop(nlmsvc_task); - nlmsvc_serv = NULL; - } -err_create: +err: mutex_unlock(&nlmsvc_mutex); return error; } @@ -489,25 +497,7 @@ lockd_down(struct net *net) { mutex_lock(&nlmsvc_mutex); lockd_down_net(nlmsvc_serv, net); - if (nlmsvc_users) { - if (--nlmsvc_users) - goto out; - } else { - printk(KERN_ERR "lockd_down: no users! 
task=%p\n", - nlmsvc_task); - BUG(); - } - - if (!nlmsvc_task) { - printk(KERN_ERR "lockd_down: no lockd running.\n"); - BUG(); - } - lockd_unregister_notifiers(); - kthread_stop(nlmsvc_task); - dprintk("lockd_down: service destroyed\n"); - nlmsvc_serv = NULL; - nlmsvc_task = NULL; -out: + lockd_put(); mutex_unlock(&nlmsvc_mutex); } EXPORT_SYMBOL_GPL(lockd_down); From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633329 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0C5DC433EF for ; Tue, 23 Nov 2021 01:32:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229874AbhKWBfc (ORCPT ); Mon, 22 Nov 2021 20:35:32 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57942 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229776AbhKWBfb (ORCPT ); Mon, 22 Nov 2021 20:35:31 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id E4971212B6; Tue, 23 Nov 2021 01:32:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631143; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=R6Vy7e2dSa1wDkgESXVV84+y2o/2VJ0RFOhjGXSZJTo=; b=oeUr5bokoARNOpXoZcFULXYHAn9wLnmhASIKHKOj7hjEIZWjtbbYeSN6/55Am8fUpMNS32 XDVRC264f+nbxYvNX7UMjJmFf1ZsS2cpNfytyy+EO8bjRKK/CMJ/csJW8fr5qWKu6nvin8 AsjrWjgq6SEiVQbf0cbpqRgc4iMpaZ0= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631143; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=R6Vy7e2dSa1wDkgESXVV84+y2o/2VJ0RFOhjGXSZJTo=; b=XiRtcolZMbFZ7Ja7AOGRLZtOzm0De8oQEZaoTj33ceMyn+L6LPVhwJMue/IxmgNJ0RJuiA dCDweD63CMLkRoCQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id CA36413BD4; Tue, 23 Nov 2021 01:32:22 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id b9o0IaZEnGEddAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:22 +0000 Subject: [PATCH 15/19] lockd: rename lockd_create_svc() to lockd_get() From: NeilBrown To: "J. 
Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097549.7284.1248173019694047257.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org lockd_create_svc() already does an svc_get() if the service already exists, so it is more like a "get" than a "create". So: - Move the increment of nlmsvc_users into the function as well - rename to lockd_get(). It is now the inverse of lockd_put(). Signed-off-by: NeilBrown --- fs/lockd/svc.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 7f12c280fd30..1a7c11118b32 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -396,16 +396,14 @@ static const struct svc_serv_ops lockd_sv_ops = { .svo_enqueue_xprt = svc_xprt_do_enqueue, }; -static int lockd_create_svc(void) +static int lockd_get(void) { struct svc_serv *serv; int error; - /* - * Check whether we're already up and running. - */ if (nlmsvc_serv) { svc_get(nlmsvc_serv); + nlmsvc_users++; return 0; } @@ -439,6 +437,7 @@ static int lockd_create_svc(void) register_inet6addr_notifier(&lockd_inet6addr_notifier); #endif dprintk("lockd_up: service created\n"); + nlmsvc_users++; return 0; } @@ -472,10 +471,9 @@ int lockd_up(struct net *net, const struct cred *cred) mutex_lock(&nlmsvc_mutex); - error = lockd_create_svc(); + error = lockd_get(); if (error) goto err; - nlmsvc_users++; error = lockd_up_net(nlmsvc_serv, net, cred); if (error < 0) { From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 101C4C433EF for ; Tue, 23 Nov 2021 01:32:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229776AbhKWBfi (ORCPT ); Mon, 22 Nov 2021 20:35:38 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57954 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBfi (ORCPT ); Mon, 22 Nov 2021 20:35:38 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 3A030212B6; Tue, 23 Nov 2021 01:32:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631150; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=+0Xy/1b4cpwQuEFLudNsCPNFcEJ/FPxUX2iQ0iVXkQo=; b=X/H1HHeBBvqsvLP9mGqgzCKbFmseLIuV0dDYAxKTRFym77c6JkHIe28A0E84+LAur5Uri+ MAdgDC5Ni/XmmfVZp6PRyqb/M4ffaBX0y3rwbXhEUuDctEkAXknix3AnKxQasAYaS5zzif xq/jFqAVifnjzWrIEE22CmdbfGqp5ig= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631150; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: 
mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=+0Xy/1b4cpwQuEFLudNsCPNFcEJ/FPxUX2iQ0iVXkQo=; b=Ij7d/g9RuNW61U9CbX738O2bE8cp9n6YCKWnhpeDvn8Y9E7/aXIl3K8Qdgwn3SkB75BKtI 6EgJbOu+bTHx+WDg== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 20BFC13BD4; Tue, 23 Nov 2021 01:32:28 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id NHl5M6xEnGEqdAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:28 +0000 Subject: [PATCH 16/19] SUNRPC: move the pool_map definitions (back) into svc.c From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097550.7284.16154772439269087144.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org These definitions are not used outside of svc.c, and there is no evidence that they ever have been. So move them into svc.c and make the declarations 'static'. Signed-off-by: NeilBrown --- include/linux/sunrpc/svc.h | 25 ------------------------- net/sunrpc/svc.c | 31 +++++++++++++++++++++++++------ 2 files changed, 25 insertions(+), 31 deletions(-) diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 0b38c6eaf985..d69e6108cb83 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -494,29 +494,6 @@ struct svc_procedure { const char * pc_name; /* for display */ }; -/* - * Mode for mapping cpus to pools. - */ -enum { - SVC_POOL_AUTO = -1, /* choose one of the others */ - SVC_POOL_GLOBAL, /* no mapping, just a single global pool - * (legacy & UP mode) */ - SVC_POOL_PERCPU, /* one pool per cpu */ - SVC_POOL_PERNODE /* one pool per numa node */ -}; - -struct svc_pool_map { - int count; /* How many svc_servs use us */ - int mode; /* Note: int not enum to avoid - * warnings about "enumeration value - * not handled in switch" */ - unsigned int npools; - unsigned int *pool_to; /* maps pool id to cpu or node */ - unsigned int *to_pool; /* maps cpu or node to pool id */ -}; - -extern struct svc_pool_map svc_pool_map; - /* * Function prototypes. */ @@ -533,8 +510,6 @@ void svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page); void svc_rqst_free(struct svc_rqst *); void svc_exit_thread(struct svc_rqst *); -unsigned int svc_pool_map_get(void); -void svc_pool_map_put(void); struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int, const struct svc_serv_ops *); int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int); diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 5513f8c9a8d6..f0dd9ef7e0cd 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -41,14 +41,35 @@ static void svc_unregister(const struct svc_serv *serv, struct net *net); #define SVC_POOL_DEFAULT SVC_POOL_GLOBAL +/* + * Mode for mapping cpus to pools. 
+ */ +enum { + SVC_POOL_AUTO = -1, /* choose one of the others */ + SVC_POOL_GLOBAL, /* no mapping, just a single global pool + * (legacy & UP mode) */ + SVC_POOL_PERCPU, /* one pool per cpu */ + SVC_POOL_PERNODE /* one pool per numa node */ +}; + /* * Structure for mapping cpus to pools and vice versa. * Setup once during sunrpc initialisation. */ -struct svc_pool_map svc_pool_map = { + +struct svc_pool_map { + int count; /* How many svc_servs use us */ + int mode; /* Note: int not enum to avoid + * warnings about "enumeration value + * not handled in switch" */ + unsigned int npools; + unsigned int *pool_to; /* maps pool id to cpu or node */ + unsigned int *to_pool; /* maps cpu or node to pool id */ +}; + +static struct svc_pool_map svc_pool_map = { .mode = SVC_POOL_DEFAULT }; -EXPORT_SYMBOL_GPL(svc_pool_map); static DEFINE_MUTEX(svc_pool_map_mutex);/* protects svc_pool_map.count only */ @@ -222,7 +243,7 @@ svc_pool_map_init_pernode(struct svc_pool_map *m) * vice versa). Initialise the map if we're the first user. * Returns the number of pools. */ -unsigned int +static unsigned int svc_pool_map_get(void) { struct svc_pool_map *m = &svc_pool_map; @@ -257,7 +278,6 @@ svc_pool_map_get(void) mutex_unlock(&svc_pool_map_mutex); return m->npools; } -EXPORT_SYMBOL_GPL(svc_pool_map_get); /* * Drop a reference to the global map of cpus to pools. @@ -266,7 +286,7 @@ EXPORT_SYMBOL_GPL(svc_pool_map_get); * mode using the pool_mode module option without * rebooting or re-loading sunrpc.ko. */ -void +static void svc_pool_map_put(void) { struct svc_pool_map *m = &svc_pool_map; @@ -283,7 +303,6 @@ svc_pool_map_put(void) mutex_unlock(&svc_pool_map_mutex); } -EXPORT_SYMBOL_GPL(svc_pool_map_put); static int svc_pool_map_get_node(unsigned int pidx) { From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60B6AC433F5 for ; Tue, 23 Nov 2021 01:32:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230314AbhKWBfp (ORCPT ); Mon, 22 Nov 2021 20:35:45 -0500 Received: from smtp-out1.suse.de ([195.135.220.28]:57964 "EHLO smtp-out1.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBfp (ORCPT ); Mon, 22 Nov 2021 20:35:45 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out1.suse.de (Postfix) with ESMTPS id 80BAB212B6; Tue, 23 Nov 2021 01:32:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631157; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EKChihadRa55rawMFndXyV+xBWffxIzAzR7tfUS6g9M=; b=WH64G2E96S+lQ2N7KuqS5OZQKYoufL052sbN/HcaH+k58y17mMqSMRAM5H+ZkkpgrctBiA Gs1xy5PstjvMf44Lxm5WNzY808H5Qs6R/n/gZxXBWM+hRltUmKs3ifT4JbyF81c1B13J5a jyn9qgVx7qL/OZqT9vY9WomvPI2/6Yo= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; 
t=1637631157; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EKChihadRa55rawMFndXyV+xBWffxIzAzR7tfUS6g9M=; b=hfl/piZB7/CNsgv+jHKLxRmsWg8vBrb3ZXQl8NQjiPGRv4K5VMsVatnLzDKyvhjCKbr1Gk zSWygr9rwukEHNBQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5D0F513BD4; Tue, 23 Nov 2021 01:32:36 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id hxQSBrREnGEudAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:36 +0000 Subject: [PATCH 17/19] SUNRPC: always treat sv_nrpools==1 as "not pooled" From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097550.7284.8119486795641356296.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Currently 'pooled' services hold a reference on the pool_map, and 'unpooled' services do not. svc_destroy() uses the presence of ->svo_function (via svc_serv_is_pooled()) to determine if the reference should be dropped. There is no direct correlation between being pooled and the use of svo_function, though in practice, lockd is the only non-pooled service, and the only one not to use svc_function. This is untidy and would cause problems if we changed lockd to use svc_set_num_threads(), which requires the use of ->svo_function. So change the test for "is the service pooled" to "is sv_nrpools > 1". This means that when svc_pool_map_get() returns 1, it must NOT take a reference to the pool. We discard svc_serv_is_pooled(), and test sv_nrpools directly. Signed-off-by: NeilBrown --- net/sunrpc/svc.c | 54 +++++++++++++++++++++++++++++------------------------- 1 file changed, 29 insertions(+), 25 deletions(-) diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index f0dd9ef7e0cd..5fbe7f55289e 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -37,8 +37,6 @@ static void svc_unregister(const struct svc_serv *serv, struct net *net); -#define svc_serv_is_pooled(serv) ((serv)->sv_ops->svo_function) - #define SVC_POOL_DEFAULT SVC_POOL_GLOBAL /* @@ -240,8 +238,10 @@ svc_pool_map_init_pernode(struct svc_pool_map *m) /* * Add a reference to the global map of cpus to pools (and - * vice versa). Initialise the map if we're the first user. - * Returns the number of pools. + * vice versa) if pools are in use. + * Initialise the map if we're the first user. + * Returns the number of pools. If this is '1', no reference + * was taken. 
*/ static unsigned int svc_pool_map_get(void) @@ -253,6 +253,7 @@ svc_pool_map_get(void) if (m->count++) { mutex_unlock(&svc_pool_map_mutex); + WARN_ON_ONCE(m->npools <= 1); return m->npools; } @@ -268,29 +269,36 @@ svc_pool_map_get(void) break; } - if (npools < 0) { + if (npools <= 0) { /* default, or memory allocation failure */ npools = 1; m->mode = SVC_POOL_GLOBAL; } m->npools = npools; + if (npools == 1) + /* service is unpooled, so doesn't hold a reference */ + m->count--; + mutex_unlock(&svc_pool_map_mutex); - return m->npools; + return npools; } /* - * Drop a reference to the global map of cpus to pools. + * Drop a reference to the global map of cpus to pools, if + * pools were in use, i.e. if npools > 1. * When the last reference is dropped, the map data is * freed; this allows the sysadmin to change the pool * mode using the pool_mode module option without * rebooting or re-loading sunrpc.ko. */ static void -svc_pool_map_put(void) +svc_pool_map_put(int npools) { struct svc_pool_map *m = &svc_pool_map; + if (npools <= 1) + return; mutex_lock(&svc_pool_map_mutex); if (!--m->count) { @@ -359,21 +367,18 @@ svc_pool_for_cpu(struct svc_serv *serv, int cpu) struct svc_pool_map *m = &svc_pool_map; unsigned int pidx = 0; - /* - * An uninitialised map happens in a pure client when - * lockd is brought up, so silently treat it the - * same as SVC_POOL_GLOBAL. - */ - if (svc_serv_is_pooled(serv)) { - switch (m->mode) { - case SVC_POOL_PERCPU: - pidx = m->to_pool[cpu]; - break; - case SVC_POOL_PERNODE: - pidx = m->to_pool[cpu_to_node(cpu)]; - break; - } + if (serv->sv_nrpools <= 1) + return serv->sv_pools; + + switch (m->mode) { + case SVC_POOL_PERCPU: + pidx = m->to_pool[cpu]; + break; + case SVC_POOL_PERNODE: + pidx = m->to_pool[cpu_to_node(cpu)]; + break; } + return &serv->sv_pools[pidx % serv->sv_nrpools]; } @@ -526,7 +531,7 @@ svc_create_pooled(struct svc_program *prog, unsigned int bufsize, goto out_err; return serv; out_err: - svc_pool_map_put(); + svc_pool_map_put(npools); return NULL; } EXPORT_SYMBOL_GPL(svc_create_pooled); @@ -561,8 +566,7 @@ svc_destroy(struct kref *ref) cache_clean_deferred(serv); - if (svc_serv_is_pooled(serv)) - svc_pool_map_put(); + svc_pool_map_put(serv->sv_nrpools); kfree(serv->sv_pools); kfree(serv); From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633335 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59065C433F5 for ; Tue, 23 Nov 2021 01:32:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231222AbhKWBfv (ORCPT ); Mon, 22 Nov 2021 20:35:51 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37192 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229484AbhKWBfv (ORCPT ); Mon, 22 Nov 2021 20:35:51 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 624331FD33; Tue, 23 Nov 2021 01:32:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631163; 
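The reference convention this patch introduces is subtle, so here is a small stand-alone userspace model of it (illustrative only — the struct and function names are invented for the example and this is not sunrpc code): the get side reports the number of pools but keeps a reference only when that number is greater than one, and the put side is handed the same number back and drops a reference only in the pooled case.

#include <assert.h>
#include <stdio.h>

/* Userspace model of the map: a user count and the number of pools in use. */
struct pool_map {
	int count;
	int npools;
};

static struct pool_map map;

/*
 * Model of svc_pool_map_get() after this patch: report the pool count,
 * but hold a reference only when that count is greater than one.
 */
static int model_map_get(int configured_pools)
{
	if (map.count++)
		return map.npools;		/* existing user: map already initialised */

	map.npools = configured_pools > 0 ? configured_pools : 1;
	if (map.npools == 1)
		map.count--;			/* unpooled: drop the reference we just took */
	return map.npools;
}

/*
 * Model of svc_pool_map_put(npools): the caller hands back the count it was
 * given, and only a pooled caller (npools > 1) drops a reference.
 */
static void model_map_put(int npools)
{
	if (npools <= 1)
		return;
	if (--map.count == 0)
		map.npools = 0;			/* last user: the real code frees the arrays here */
}

int main(void)
{
	/* Unpooled service (the lockd case): no reference is ever held. */
	int n = model_map_get(1);
	model_map_put(n);
	assert(n == 1 && map.count == 0);

	/* Pooled service: the reference lives until the matching put. */
	n = model_map_get(4);
	assert(n == 4 && map.count == 1);
	model_map_put(n);
	assert(map.count == 0);

	printf("get/put stay balanced whether or not pools are in use\n");
	return 0;
}

Whatever value svc_pool_map_get() returns is what must be passed to svc_pool_map_put(), which is why svc_destroy() can simply pass serv->sv_nrpools.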
h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jrvwhZvvBfRFfFycU8y9ucEnUoyec2Q+jA8SwAPxI2w=; b=eee6SChB3hCmX9vfkYV+snZnqgIPqy5sII+q/IlJ6Gvc9Sffw7eSo+hli8FD7CXVMBuSjl e3Uu4VGPE0uS6LV05pEeVfESLaWnvhFf4P/an6SWBm/fU9j1NTkcUKDxrvnDeMVT0oMFz0 rJ9u6uBCky/zUblh/6PqgcP3+3d4SGM= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631163; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jrvwhZvvBfRFfFycU8y9ucEnUoyec2Q+jA8SwAPxI2w=; b=gijZEjS5CEvsYE5T4o2p+FfojrV6lrP0nAPgiC86egwJBX5P4KlTEUCoIQbBMUNQAmNYte +x5sfRQThQ+96UDQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4D8F813BD4; Tue, 23 Nov 2021 01:32:42 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id 2y0wA7pEnGEzdAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:42 +0000 Subject: [PATCH 18/19] lockd: use svc_set_num_threads() for thread start and stop From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097551.7284.17962677949356020078.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org svc_set_num_threads() does everything that lockd_start_svc() does, except set sv_maxconn. It also (when passed 0) finds the threads and stops them with kthread_stop(). So move the setting for sv_maxconn, and use svc_set_num_thread() We now don't need nlmsvc_task. Also set svc_module - just for consistency. svc_prepare_thread is now only used where it is defined, so it can be made static. Signed-off-by: NeilBrown --- fs/lockd/svc.c | 56 ++++++-------------------------------------- include/linux/sunrpc/svc.h | 2 -- net/sunrpc/svc.c | 3 +- 3 files changed, 8 insertions(+), 53 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index 1a7c11118b32..93f5a4f262f9 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -55,7 +55,6 @@ EXPORT_SYMBOL_GPL(nlmsvc_ops); static DEFINE_MUTEX(nlmsvc_mutex); static unsigned int nlmsvc_users; static struct svc_serv *nlmsvc_serv; -static struct task_struct *nlmsvc_task; unsigned long nlmsvc_timeout; unsigned int lockd_net_id; @@ -292,8 +291,8 @@ static void lockd_down_net(struct svc_serv *serv, struct net *net) __func__, net->ns.inum); } } else { - pr_err("%s: no users! task=%p, net=%x\n", - __func__, nlmsvc_task, net->ns.inum); + pr_err("%s: no users! net=%x\n", + __func__, net->ns.inum); BUG(); } } @@ -351,49 +350,11 @@ static struct notifier_block lockd_inet6addr_notifier = { }; #endif -static int lockd_start_svc(struct svc_serv *serv) -{ - int error; - struct svc_rqst *rqst; - - /* - * Create the kernel thread and wait for it to start. 
- */ - rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE); - if (IS_ERR(rqst)) { - error = PTR_ERR(rqst); - printk(KERN_WARNING - "lockd_up: svc_rqst allocation failed, error=%d\n", - error); - goto out_rqst; - } - - svc_sock_update_bufs(serv); - serv->sv_maxconn = nlm_max_connections; - - nlmsvc_task = kthread_create(lockd, rqst, "%s", serv->sv_name); - if (IS_ERR(nlmsvc_task)) { - error = PTR_ERR(nlmsvc_task); - printk(KERN_WARNING - "lockd_up: kthread_run failed, error=%d\n", error); - goto out_task; - } - rqst->rq_task = nlmsvc_task; - wake_up_process(nlmsvc_task); - - dprintk("lockd_up: service started\n"); - return 0; - -out_task: - svc_exit_thread(rqst); - nlmsvc_task = NULL; -out_rqst: - return error; -} - static const struct svc_serv_ops lockd_sv_ops = { .svo_shutdown = svc_rpcb_cleanup, + .svo_function = lockd, .svo_enqueue_xprt = svc_xprt_do_enqueue, + .svo_module = THIS_MODULE, }; static int lockd_get(void) @@ -425,7 +386,8 @@ static int lockd_get(void) return -ENOMEM; } - error = lockd_start_svc(serv); + serv->sv_maxconn = nlm_max_connections; + error = svc_set_num_threads(serv, NULL, 1); /* The thread now holds the only reference */ svc_put(serv); if (error < 0) @@ -453,11 +415,7 @@ static void lockd_put(void) unregister_inet6addr_notifier(&lockd_inet6addr_notifier); #endif - if (nlmsvc_task) { - kthread_stop(nlmsvc_task); - dprintk("lockd_down: service stopped\n"); - nlmsvc_task = NULL; - } + svc_set_num_threads(nlmsvc_serv, NULL, 0); nlmsvc_serv = NULL; dprintk("lockd_down: service destroyed\n"); } diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index d69e6108cb83..313dd3d1ef16 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -504,8 +504,6 @@ struct svc_serv *svc_create(struct svc_program *, unsigned int, const struct svc_serv_ops *); struct svc_rqst *svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node); -struct svc_rqst *svc_prepare_thread(struct svc_serv *serv, - struct svc_pool *pool, int node); void svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page); void svc_rqst_free(struct svc_rqst *); diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 5fbe7f55289e..2aabec2b4bec 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -652,7 +652,7 @@ svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node) } EXPORT_SYMBOL_GPL(svc_rqst_alloc); -struct svc_rqst * +static struct svc_rqst * svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node) { struct svc_rqst *rqstp; @@ -672,7 +672,6 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node) spin_unlock_bh(&pool->sp_lock); return rqstp; } -EXPORT_SYMBOL_GPL(svc_prepare_thread); /* * Choose a pool in which to create a new thread, for svc_set_num_threads From patchwork Tue Nov 23 01:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: NeilBrown X-Patchwork-Id: 12633337 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04596C433F5 for ; Tue, 23 Nov 2021 01:32:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230103AbhKWBf4 (ORCPT ); Mon, 22 Nov 2021 20:35:56 -0500 Received: from smtp-out2.suse.de ([195.135.220.29]:37202 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
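For orientation, a condensed sketch of the start/stop sequence lockd relies on after this change (kernel context assumed, not compilable on its own, error handling trimmed; the *_sketch names are placeholders). It only restates the lockd_get()/lockd_put() hunks above: svc_set_num_threads() both creates the thread and, when asked for zero threads, stops it.

/*
 * Sketch of the pattern only; assumes serv was created with a
 * svc_serv_ops that sets .svo_function (lockd) and .svo_module,
 * as in the hunk above.
 */
static int lockd_start_sketch(struct svc_serv *serv)
{
	int error;

	serv->sv_maxconn = nlm_max_connections;		/* formerly set in lockd_start_svc() */
	error = svc_set_num_threads(serv, NULL, 1);	/* create and start the single thread */
	svc_put(serv);					/* the thread now holds the only reference */
	return error;
}

static void lockd_stop_sketch(struct svc_serv *serv)
{
	/* Passing 0 finds the lockd thread and stops it with kthread_stop(). */
	svc_set_num_threads(serv, NULL, 0);
}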
S229484AbhKWBf4 (ORCPT ); Mon, 22 Nov 2021 20:35:56 -0500 Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by smtp-out2.suse.de (Postfix) with ESMTPS id 243E21FD33; Tue, 23 Nov 2021 01:32:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1637631168; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=M42bhQIasE2MvpVoJ/Qwc7TcaBZXRZCimM8iH/q8+qU=; b=p/AMNhp+QVBesP2HRDAEwY2GkJY3dHcyhRzRAyabR9Lt93dk2N+XhEkTLYMEuxy1trvEUe dW2gIvNaCx9+7jvCeLs35GMi+yI/6vVGx9oc5FrKKrz4BIwh+Rtwe4QMudDwENxvRgmWBS P151EO1YhXkzBTZnWW5elQcRqr3Xq9A= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1637631168; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=M42bhQIasE2MvpVoJ/Qwc7TcaBZXRZCimM8iH/q8+qU=; b=2DCHlNiAVRo2eF1UQaQdCOLebLVjP4U7e7NnG2qoexzT0ImPOvebTBMsOGUFOxuPqPZIy1 TkZJEFK2HMqFhOCQ== Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512) (No client certificate requested) by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id F1A4C13BD4; Tue, 23 Nov 2021 01:32:46 +0000 (UTC) Received: from dovecot-director2.suse.de ([192.168.254.65]) by imap2.suse-dmz.suse.de with ESMTPSA id pF7eKr5EnGE5dAAAMHmgww (envelope-from ); Tue, 23 Nov 2021 01:32:46 +0000 Subject: [PATCH 19/19] NFS: switch the callback service back to non-pooled. From: NeilBrown To: "J. Bruce Fields" , Chuck Lever Cc: linux-nfs@vger.kernel.org Date: Tue, 23 Nov 2021 12:29:35 +1100 Message-ID: <163763097551.7284.13645796472410259327.stgit@noble.brown> In-Reply-To: <163763078330.7284.10141477742275086758.stgit@noble.brown> References: <163763078330.7284.10141477742275086758.stgit@noble.brown> User-Agent: StGit/0.23 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Now that thread management is consistent there is no need for nfs-callback to use svc_create_pooled() as introduced in Commit df807fffaabd ("NFSv4.x/callback: Create the callback service through svc_create_pooled"). So switch back to svc_create(). If service pools were configured, but the number of threads were left at '1', nfs callback may not work reliably when svc_create_pooled() is used. Signed-off-by: NeilBrown --- fs/nfs/callback.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c index 6cdc9d18a7dd..c4994c1d4e36 100644 --- a/fs/nfs/callback.c +++ b/fs/nfs/callback.c @@ -286,7 +286,7 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion) printk(KERN_WARNING "nfs_callback_create_svc: no kthread, %d users??\n", cb_info->users); - serv = svc_create_pooled(&nfs4_callback_program, NFS4_CALLBACK_BUFSIZE, sv_ops); + serv = svc_create(&nfs4_callback_program, NFS4_CALLBACK_BUFSIZE, sv_ops); if (!serv) { printk(KERN_ERR "nfs_callback_create_svc: create service failed\n"); return ERR_PTR(-ENOMEM);
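One way to picture the failure mode this avoids, as a toy model rather than sunrpc code: with a pooled service, incoming work is queued to the pool of the CPU it arrives on, but a thread only serves the pool it belongs to, so a single thread in a multi-pool service can leave callbacks stranded. With svc_create() there is only one pool, so the single thread sees everything.

#include <stdio.h>

#define NPOOLS 4

/* Toy model: one pending-work counter per pool. */
static int pending[NPOOLS];

/* Work is queued to the pool of the CPU it arrives on (cpu -> pool). */
static void enqueue(int cpu)
{
	pending[cpu % NPOOLS]++;
}

/* A thread only drains the pool it was created in. */
static void thread_runs(int my_pool)
{
	pending[my_pool] = 0;
}

int main(void)
{
	int cpu, stranded = 0;

	for (cpu = 0; cpu < 8; cpu++)	/* callbacks arrive on various CPUs */
		enqueue(cpu);

	thread_runs(0);			/* the single callback thread lives in pool 0 */

	for (cpu = 1; cpu < NPOOLS; cpu++)
		stranded += pending[cpu];

	/* With a single pool, stranded would be 0; with NPOOLS > 1 it is not. */
	printf("work left unserved in other pools: %d\n", stranded);
	return 0;
}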