From patchwork Tue Dec 15 23:44:01 2015
X-Patchwork-Submitter: NeilBrown
X-Patchwork-Id: 7858181
From: NeilBrown
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Wed, 16 Dec 2015 10:44:01 +1100
Subject: [PATCH] SUNRPC: restore fair scheduling to priority queues.
Message-ID: <87twnjb7lq.fsf@notabene.neil.brown.name>

Commit c05eecf63610 ("SUNRPC: Don't allow low priority tasks to
pre-empt higher priority ones") removed the 'fair scheduling' feature
from SUNRPC priority queues.

This feature caused problems for some queues (the send queue and the
session slot queue) but is still needed for others, particularly the
tcp slot queue.  Without fairness, reads (priority 1) can starve
background writes (priority 0), so a streaming read can cause
writeback to block indefinitely.

This is not easy to measure with default settings, as the current
slot table size is much larger than the read-ahead size.  However if
the slot-table size is reduced (as is seen when backporting to older
kernels with a limited size) the problem is easily demonstrated.

This patch conditionally restores fair scheduling.  It is now the
default unless rpc_sleep_on_priority() is called directly, in which
case the queue switches to strict priority observance.  As that
function is called for both the send queue and the session slot queue
and not for any others, this has exactly the desired effect.

The "count" field that was removed by the previous patch is restored.
A value of 255 means "strict priority queuing, no fair queuing".  Any
other value is a count of owners to be processed before switching to
a different priority level, just as before.
Signed-off-by: NeilBrown
---
It is quite possible that you won't like the overloading of
rpc_sleep_on_priority() to disable fair-scheduling and would prefer
an extra arg to rpc_init_priority_wait_queue().  I can do it that way
if you like.

NeilBrown

 include/linux/sunrpc/sched.h |  1 +
 net/sunrpc/sched.c           | 12 +++++++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
index d703f0ef37d8..985efe8d7e26 100644
--- a/include/linux/sunrpc/sched.h
+++ b/include/linux/sunrpc/sched.h
@@ -184,6 +184,7 @@ struct rpc_wait_queue {
 	pid_t			owner;		/* process id of last task serviced */
 	unsigned char		maxpriority;	/* maximum priority (0 if queue is not a priority queue) */
 	unsigned char		priority;	/* current priority */
+	unsigned char		count;		/* # task groups remaining to be serviced */
 	unsigned char		nr;		/* # tasks remaining for cookie */
 	unsigned short		qlen;		/* total # tasks waiting in queue */
 	struct rpc_timer	timer_list;
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 73ad57a59989..e8fcd4f098bb 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -117,6 +117,8 @@ static void rpc_set_waitqueue_priority(struct rpc_wait_queue *queue, int priority)
 		rpc_rotate_queue_owner(queue);
 		queue->priority = priority;
 	}
+	if (queue->count != 255)
+		queue->count = 1 << (priority * 2);
 }
 
 static void rpc_set_waitqueue_owner(struct rpc_wait_queue *queue, pid_t pid)
@@ -144,8 +146,10 @@ static void __rpc_add_wait_queue_priority(struct rpc_wait_queue *queue,
 	INIT_LIST_HEAD(&task->u.tk_wait.links);
 	if (unlikely(queue_priority > queue->maxpriority))
 		queue_priority = queue->maxpriority;
-	if (queue_priority > queue->priority)
-		rpc_set_waitqueue_priority(queue, queue_priority);
+	if (queue->count == 255) {
+		if (queue_priority > queue->priority)
+			rpc_set_waitqueue_priority(queue, queue_priority);
+	}
 	q = &queue->tasks[queue_priority];
 	list_for_each_entry(t, q, u.tk_wait.list) {
 		if (t->tk_owner == task->tk_owner) {
@@ -401,6 +405,7 @@ void rpc_sleep_on_priority(struct rpc_wait_queue *q, struct rpc_task *task,
 	 * Protect the queue operations.
 	 */
 	spin_lock_bh(&q->lock);
+	q->count = 255;
 	__rpc_sleep_on_priority(q, task, action, priority - RPC_PRIORITY_LOW);
 	spin_unlock_bh(&q->lock);
 }
@@ -478,7 +483,8 @@ static struct rpc_task *__rpc_find_next_queued_priority(struct rpc_wait_queue *queue)
 		/*
 		 * Check if we need to switch queues.
 		 */
-		goto new_owner;
+		if (queue->count == 255 || --queue->count)
+			goto new_owner;
 	}
 
 	/*