From patchwork Mon Sep  3 15:29:23 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Trond Myklebust
X-Patchwork-Id: 10586079
From: Trond Myklebust
X-Google-Original-From: Trond Myklebust
To: linux-nfs@vger.kernel.org
Subject: [PATCH 14/27] SUNRPC: Refactor xprt_transmit() to remove the reply queue code
Date: Mon,  3 Sep 2018 11:29:23 -0400
Message-Id: <20180903152936.24325-15-trond.myklebust@hammerspace.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180903152936.24325-14-trond.myklebust@hammerspace.com>
References: <20180903152936.24325-1-trond.myklebust@hammerspace.com>
 <20180903152936.24325-2-trond.myklebust@hammerspace.com>
 <20180903152936.24325-3-trond.myklebust@hammerspace.com>
 <20180903152936.24325-4-trond.myklebust@hammerspace.com>
 <20180903152936.24325-5-trond.myklebust@hammerspace.com>
 <20180903152936.24325-6-trond.myklebust@hammerspace.com>
 <20180903152936.24325-7-trond.myklebust@hammerspace.com>
 <20180903152936.24325-8-trond.myklebust@hammerspace.com>
 <20180903152936.24325-9-trond.myklebust@hammerspace.com>
 <20180903152936.24325-10-trond.myklebust@hammerspace.com>
 <20180903152936.24325-11-trond.myklebust@hammerspace.com>
 <20180903152936.24325-12-trond.myklebust@hammerspace.com>
 <20180903152936.24325-13-trond.myklebust@hammerspace.com>
 <20180903152936.24325-14-trond.myklebust@hammerspace.com>
MIME-Version: 1.0
Sender: linux-nfs-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-nfs@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

Separate out the action of adding a request to the reply queue so that the
backchannel code can simply skip calling it altogether.

Signed-off-by: Trond Myklebust
---
 include/linux/sunrpc/xprt.h |   1 +
 net/sunrpc/clnt.c           |   5 ++
 net/sunrpc/xprt.c           | 100 ++++++++++++++++++++++--------------
 3 files changed, 68 insertions(+), 38 deletions(-)

diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index c25d0a5fda69..0250294c904a 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -334,6 +334,7 @@ void xprt_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *req);
 void xprt_lock_and_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task);
 bool xprt_prepare_transmit(struct rpc_task *task);
+void xprt_request_enqueue_receive(struct rpc_task *task);
 void xprt_transmit(struct rpc_task *task);
 void xprt_end_transmit(struct rpc_task *task);
 int xprt_adjust_timeout(struct rpc_rqst *req);
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 66ec61347716..3d6d1b5f9e81 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -1962,6 +1962,11 @@ call_transmit(struct rpc_task *task)
 			return;
 		}
 	}
+
+	/* Add task to reply queue before transmission to avoid races */
+	if (rpc_reply_expected(task))
+		xprt_request_enqueue_receive(task);
+
 	if (!xprt_prepare_transmit(task))
 		return;
 	task->tk_action = call_transmit_status;
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index eda305de9f77..cb3c0f7d5b3d 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -884,6 +884,57 @@ static void xprt_wait_on_pinned_rqst(struct rpc_rqst *req)
 	wait_var_event(&req->rq_pin, !xprt_is_pinned_rqst(req));
 }
 
+static bool
+xprt_request_data_received(struct rpc_task *task)
+{
+	return !test_bit(RPC_TASK_NEED_RECV, &task->tk_runstate) &&
+		task->tk_rqstp->rq_reply_bytes_recvd != 0;
+}
+
+/**
+ * xprt_request_enqueue_receive - Add a request to the receive queue
+ * @task: RPC task
+ *
+ */
+void
+xprt_request_enqueue_receive(struct rpc_task *task)
+{
+	struct rpc_rqst *req = task->tk_rqstp;
+	struct rpc_xprt *xprt = req->rq_xprt;
+
+	spin_lock(&xprt->queue_lock);
+	if (xprt_request_data_received(task) || !list_empty(&req->rq_list)) {
+		spin_unlock(&xprt->queue_lock);
+		return;
+	}
+
+	/* Update the softirq receive buffer */
+	memcpy(&req->rq_private_buf, &req->rq_rcv_buf,
+			sizeof(req->rq_private_buf));
+
+	/* Add request to the receive list */
+	list_add_tail(&req->rq_list, &xprt->recv);
+	set_bit(RPC_TASK_NEED_RECV, &task->tk_runstate);
+	spin_unlock(&xprt->queue_lock);
+
+	xprt_reset_majortimeo(req);
+	/* Turn off autodisconnect */
+	del_singleshot_timer_sync(&xprt->timer);
+}
+
+/**
+ * xprt_request_dequeue_receive_locked - Remove a request from the receive queue
+ * @task: RPC task
+ *
+ * Caller must hold xprt->queue_lock.
+ */
+static void
+xprt_request_dequeue_receive_locked(struct rpc_task *task)
+{
+	clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate);
+	list_del_init(&task->tk_rqstp->rq_list);
+}
+
 /**
  * xprt_update_rtt - Update RPC RTT statistics
  * @task: RPC request that recently completed
@@ -923,24 +974,16 @@ void xprt_complete_rqst(struct rpc_task *task, int copied)
 
 	xprt->stat.recvs++;
 
-	list_del_init(&req->rq_list);
 	req->rq_private_buf.len = copied;
 	/* Ensure all writes are done before we update */
 	/* req->rq_reply_bytes_recvd */
 	smp_wmb();
 	req->rq_reply_bytes_recvd = copied;
-	clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate);
+	xprt_request_dequeue_receive_locked(task);
 	rpc_wake_up_queued_task(&xprt->pending, task);
 }
 EXPORT_SYMBOL_GPL(xprt_complete_rqst);
 
-static bool
-xprt_request_data_received(struct rpc_task *task)
-{
-	return !test_bit(RPC_TASK_NEED_RECV, &task->tk_runstate) &&
-		task->tk_rqstp->rq_reply_bytes_recvd != 0;
-}
-
 static void xprt_timer(struct rpc_task *task)
 {
 	struct rpc_rqst *req = task->tk_rqstp;
@@ -1014,32 +1057,15 @@ void xprt_transmit(struct rpc_task *task)
 
 	dprintk("RPC: %5u xprt_transmit(%u)\n", task->tk_pid, req->rq_slen);
 
-	if (!req->rq_reply_bytes_recvd) {
-
+	if (!req->rq_bytes_sent) {
+		if (xprt_request_data_received(task))
+			return;
 		/* Verify that our message lies in the RPCSEC_GSS window */
-		if (!req->rq_bytes_sent && rpcauth_xmit_need_reencode(task)) {
+		if (rpcauth_xmit_need_reencode(task)) {
 			task->tk_status = -EBADMSG;
 			return;
 		}
-
-		if (list_empty(&req->rq_list) && rpc_reply_expected(task)) {
-			/*
-			 * Add to the list only if we're expecting a reply
-			 */
-			/* Update the softirq receive buffer */
-			memcpy(&req->rq_private_buf, &req->rq_rcv_buf,
-					sizeof(req->rq_private_buf));
-			/* Add request to the receive list */
-			spin_lock(&xprt->queue_lock);
-			list_add_tail(&req->rq_list, &xprt->recv);
-			set_bit(RPC_TASK_NEED_RECV, &task->tk_runstate);
-			spin_unlock(&xprt->queue_lock);
-			xprt_reset_majortimeo(req);
-			/* Turn off autodisconnect */
-			del_singleshot_timer_sync(&xprt->timer);
-		}
-	} else if (xprt_request_data_received(task) && !req->rq_bytes_sent)
-		return;
+	}
 
 	connect_cookie = xprt->connect_cookie;
 	status = xprt->ops->send_request(task);
@@ -1376,13 +1402,11 @@ void xprt_release(struct rpc_task *task)
 	else if (task->tk_client)
 		rpc_count_iostats(task, task->tk_client->cl_metrics);
 	spin_lock(&xprt->queue_lock);
-	if (!list_empty(&req->rq_list)) {
-		list_del_init(&req->rq_list);
-		if (atomic_read(&req->rq_pin)) {
-			spin_unlock(&xprt->queue_lock);
-			xprt_wait_on_pinned_rqst(req);
-			spin_lock(&xprt->queue_lock);
-		}
+	xprt_request_dequeue_receive_locked(task);
+	while (xprt_is_pinned_rqst(req)) {
+		spin_unlock(&xprt->queue_lock);
+		xprt_wait_on_pinned_rqst(req);
+		spin_lock(&xprt->queue_lock);
 	}
 	spin_unlock(&xprt->queue_lock);
 	spin_lock_bh(&xprt->transport_lock);
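
For readers following the series, the sketch below illustrates the call-flow split this
patch enables; it is not part of the patch. The forward channel (call_transmit() in
net/sunrpc/clnt.c) now enqueues the request on the receive queue before handing it to
xprt_transmit(), while a caller that never expects a reply, such as the backchannel,
simply omits the enqueue. The two wrapper functions and their simplified bodies are
hypothetical illustrations; only rpc_reply_expected(), xprt_request_enqueue_receive(),
xprt_prepare_transmit() and xprt_transmit() come from the patch and the surrounding
SUNRPC code.

/*
 * Sketch only -- simplified, hypothetical callers built around the helpers
 * introduced or touched by this patch.
 */

/* Forward channel: a reply is expected, so queue for receive first. */
static void example_forward_transmit(struct rpc_task *task)
{
	/* Enqueue before the data hits the wire so a fast reply cannot
	 * race with the tail of the transmit path. */
	if (rpc_reply_expected(task))
		xprt_request_enqueue_receive(task);

	if (!xprt_prepare_transmit(task))
		return;
	xprt_transmit(task);
}

/* Backchannel-style caller: no reply is expected, so the receive-queue
 * bookkeeping is skipped entirely -- the point of this refactoring. */
static void example_backchannel_transmit(struct rpc_task *task)
{
	if (!xprt_prepare_transmit(task))
		return;
	xprt_transmit(task);
}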