From patchwork Tue Sep 4 21:05:32 2018
X-Patchwork-Submitter: Trond Myklebust
X-Patchwork-Id: 10587923
From: Trond Myklebust
To: linux-nfs@vger.kernel.org
Subject: [PATCH v2 17/34] SUNRPC: Distinguish between the slot allocation list and receive queue
Date: Tue, 4 Sep 2018 17:05:32 -0400
Message-Id: <20180904210549.81673-18-trond.myklebust@hammerspace.com>
In-Reply-To: <20180904210549.81673-17-trond.myklebust@hammerspace.com>
X-Mailer: git-send-email 2.17.1

When storing a struct rpc_rqst on the slot allocation list, we currently
use the same field 'rq_list' as we use to store the request on the
receive queue. Since the structure is never on both lists at the same
time, this is OK.
However, for clarity, let's make that a union with different names for
the different lists so that we can more easily distinguish between the
two states.

Signed-off-by: Trond Myklebust
---
 include/linux/sunrpc/xprt.h       |  9 +++++++--
 net/sunrpc/backchannel_rqst.c     |  2 +-
 net/sunrpc/xprt.c                 | 16 ++++++++--------
 net/sunrpc/xprtrdma/backchannel.c |  2 +-
 4 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 4fa2af087cff..9cec2d0811f2 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -82,7 +82,11 @@ struct rpc_rqst {
 	struct page	**rq_enc_pages;	/* scratch pages for use by
					   gss privacy code */
 	void (*rq_release_snd_buf)(struct rpc_rqst *); /* release rq_enc_pages */
-	struct list_head	rq_list;
+
+	union {
+		struct list_head	rq_list;	/* Slot allocation list */
+		struct list_head	rq_recv;	/* Receive queue */
+	};
 
 	void			*rq_buffer;	/* Call XDR encode buffer */
 	size_t			rq_callsize;
@@ -249,7 +253,8 @@ struct rpc_xprt {
 	struct list_head	bc_pa_list;	/* List of preallocated
						 * backchannel rpc_rqst's */
 #endif /* CONFIG_SUNRPC_BACKCHANNEL */
-	struct list_head	recv;
+
+	struct list_head	recv_queue;	/* Receive queue */
 
 	struct {
 		unsigned long		bind_count,	/* total number of binds */
diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
index 3c15a99b9700..92e9ad30ec2f 100644
--- a/net/sunrpc/backchannel_rqst.c
+++ b/net/sunrpc/backchannel_rqst.c
@@ -91,7 +91,7 @@ struct rpc_rqst *xprt_alloc_bc_req(struct rpc_xprt *xprt, gfp_t gfp_flags)
 		return NULL;
 
 	req->rq_xprt = xprt;
-	INIT_LIST_HEAD(&req->rq_list);
+	INIT_LIST_HEAD(&req->rq_recv);
 	INIT_LIST_HEAD(&req->rq_bc_list);
 
 	/* Preallocate one XDR receive buffer */
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 1fba837e5390..7f53e97a624f 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -708,7 +708,7 @@ static void
 xprt_schedule_autodisconnect(struct rpc_xprt *xprt)
 	__must_hold(&xprt->transport_lock)
 {
-	if (list_empty(&xprt->recv) && xprt_has_timer(xprt))
+	if (list_empty(&xprt->recv_queue) && xprt_has_timer(xprt))
 		mod_timer(&xprt->timer,
 			xprt->last_used + xprt->idle_timeout);
 }
@@ -718,7 +718,7 @@ xprt_init_autodisconnect(struct timer_list *t)
 	struct rpc_xprt *xprt = from_timer(xprt, t, timer);
 
 	spin_lock(&xprt->transport_lock);
-	if (!list_empty(&xprt->recv))
+	if (!list_empty(&xprt->recv_queue))
 		goto out_abort;
 	/* Reset xprt->last_used to avoid connect/autodisconnect cycling */
 	xprt->last_used = jiffies;
@@ -848,7 +848,7 @@ struct rpc_rqst *xprt_lookup_rqst(struct rpc_xprt *xprt, __be32 xid)
 {
 	struct rpc_rqst *entry;
 
-	list_for_each_entry(entry, &xprt->recv, rq_list)
+	list_for_each_entry(entry, &xprt->recv_queue, rq_recv)
 		if (entry->rq_xid == xid) {
 			trace_xprt_lookup_rqst(xprt, xid, 0);
 			entry->rq_rtt = ktime_sub(ktime_get(), entry->rq_xtime);
@@ -919,7 +919,7 @@ xprt_request_enqueue_receive(struct rpc_task *task)
 	struct rpc_xprt *xprt = req->rq_xprt;
 
 	spin_lock(&xprt->queue_lock);
-	if (xprt_request_data_received(task) || !list_empty(&req->rq_list)) {
+	if (xprt_request_data_received(task) || !list_empty(&req->rq_recv)) {
 		spin_unlock(&xprt->queue_lock);
 		return;
 	}
@@ -929,7 +929,7 @@ xprt_request_enqueue_receive(struct rpc_task *task)
 			sizeof(req->rq_private_buf));
 
 	/* Add request to the receive list */
-	list_add_tail(&req->rq_list, &xprt->recv);
+	list_add_tail(&req->rq_recv, &xprt->recv_queue);
 	set_bit(RPC_TASK_NEED_RECV, &task->tk_runstate);
 	spin_unlock(&xprt->queue_lock);
 
@@ -948,7 +948,7 @@ static void
 xprt_request_dequeue_receive_locked(struct rpc_task *task)
 {
 	clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate);
-	list_del_init(&task->tk_rqstp->rq_list);
+	list_del_init(&task->tk_rqstp->rq_recv);
 }
 
 /**
@@ -1337,7 +1337,7 @@ xprt_request_init(struct rpc_task *task)
 	struct rpc_xprt *xprt = task->tk_xprt;
 	struct rpc_rqst	*req = task->tk_rqstp;
 
-	INIT_LIST_HEAD(&req->rq_list);
+	INIT_LIST_HEAD(&req->rq_recv);
 	req->rq_timeout = task->tk_client->cl_timeout->to_initval;
 	req->rq_task	= task;
 	req->rq_xprt	= xprt;
@@ -1471,7 +1471,7 @@ static void xprt_init(struct rpc_xprt *xprt, struct net *net)
 	spin_lock_init(&xprt->queue_lock);
 
 	INIT_LIST_HEAD(&xprt->free);
-	INIT_LIST_HEAD(&xprt->recv);
+	INIT_LIST_HEAD(&xprt->recv_queue);
 #if defined(CONFIG_SUNRPC_BACKCHANNEL)
 	spin_lock_init(&xprt->bc_pa_lock);
 	INIT_LIST_HEAD(&xprt->bc_pa_list);
diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index 90adeff4c06b..40c7c7306a99 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -51,7 +51,7 @@ static int rpcrdma_bc_setup_reqs(struct rpcrdma_xprt *r_xprt,
 		rqst = &req->rl_slot;
 
 		rqst->rq_xprt = xprt;
-		INIT_LIST_HEAD(&rqst->rq_list);
+		INIT_LIST_HEAD(&rqst->rq_recv);
 		INIT_LIST_HEAD(&rqst->rq_bc_list);
 		__set_bit(RPC_BC_PA_IN_USE, &rqst->rq_bc_pa_state);
 		spin_lock_bh(&xprt->bc_pa_lock);