From patchwork Fri Aug 8 15:00:57 2014
X-Patchwork-Submitter: Weston Andros Adamson
X-Patchwork-Id: 4696691
From: Weston Andros Adamson
To: trond.myklebust@primarydata.com
Cc: linux-nfs@vger.kernel.org, Weston Andros Adamson
Subject: [PATCH 5/5] nfs: don't sleep with inode lock in lock_and_join_requests
Date: Fri, 8 Aug 2014 11:00:57 -0400
Message-Id: <1407510057-6881-6-git-send-email-dros@primarydata.com>
X-Mailer: git-send-email 1.8.5.2 (Apple Git-48)
In-Reply-To: <1407510057-6881-1-git-send-email-dros@primarydata.com>
References: <1407510057-6881-1-git-send-email-dros@primarydata.com>
X-Mailing-List: linux-nfs@vger.kernel.org

This handles the 'nonblock=false' case in nfs_lock_and_join_requests.
If the group is already locked and blocking is allowed, drop the inode
lock and wait for the group lock to be cleared before trying it all
again.

This should fix warnings found in peterz's tree (sched/wait branch),
where might_sleep() checks are added to wait.[ch].
Reported-by: Fengguang Wu
Signed-off-by: Weston Andros Adamson
Reviewed-by: Peng Tao
---
 fs/nfs/pagelist.c        | 18 ++++++++++++++++++
 fs/nfs/write.c           | 12 +++++++++++-
 include/linux/nfs_page.h |  1 +
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index af707e0..e0c2e72 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -175,6 +175,24 @@ nfs_page_group_lock(struct nfs_page *req, bool nonblock)
 }
 
 /*
+ * nfs_page_group_lock_wait - wait for the lock to clear, but don't grab it
+ * @req - a request in the group
+ *
+ * This is a blocking call to wait for the group lock to be cleared.
+ */
+void
+nfs_page_group_lock_wait(struct nfs_page *req)
+{
+	struct nfs_page *head = req->wb_head;
+
+	WARN_ON_ONCE(head != head->wb_head);
+
+	wait_on_bit(&head->wb_flags, PG_HEADLOCK,
+		nfs_wait_bit_uninterruptible,
+		TASK_UNINTERRUPTIBLE);
+}
+
+/*
  * nfs_page_group_unlock - unlock the head of the page group
  * @req - request in group that is to be unlocked
  */
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 0e3186b..f744f02 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -478,13 +478,23 @@ try_again:
 		return NULL;
 	}
 
-	/* lock each request in the page group */
+	/* holding inode lock, so always make a non-blocking call to try the
+	 * page group lock */
 	ret = nfs_page_group_lock(head, true);
 	if (ret < 0) {
 		spin_unlock(&inode->i_lock);
+
+		if (!nonblock && ret == -EAGAIN) {
+			nfs_page_group_lock_wait(head);
+			nfs_release_request(head);
+			goto try_again;
+		}
+
 		nfs_release_request(head);
 		return ERR_PTR(ret);
 	}
+
+	/* lock each request in the page group */
 	subreq = head;
 	do {
 		/*
diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h
index 6ad2bbc..6c3e06e 100644
--- a/include/linux/nfs_page.h
+++ b/include/linux/nfs_page.h
@@ -123,6 +123,7 @@ extern int nfs_wait_on_request(struct nfs_page *);
 extern void nfs_unlock_request(struct nfs_page *req);
 extern void nfs_unlock_and_release_request(struct nfs_page *);
 extern int nfs_page_group_lock(struct nfs_page *, bool);
+extern void nfs_page_group_lock_wait(struct nfs_page *);
 extern void nfs_page_group_unlock(struct nfs_page *);
 extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int);
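For readers following along, the locking pattern the patch implements can be sketched outside the kernel. The code below is a minimal userspace analogy using POSIX threads, not kernel code: outer_lock stands in for inode->i_lock, group_locked for the PG_HEADLOCK bit, and every name in it (group_trylock, group_lock_wait, group_unlock, lock_and_join) is invented purely for illustration.

/*
 * Userspace sketch of the pattern in this patch: try the inner "group"
 * lock without blocking while the outer lock is held; on failure, drop
 * the outer lock, sleep until the inner lock clears, and retry from the
 * top.  All names here are hypothetical, not NFS code.
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t outer_lock = PTHREAD_MUTEX_INITIALIZER; /* plays the role of inode->i_lock */
static pthread_mutex_t group_mtx  = PTHREAD_MUTEX_INITIALIZER; /* protects group_locked */
static pthread_cond_t  group_cv   = PTHREAD_COND_INITIALIZER;
static bool group_locked;                                      /* plays the role of PG_HEADLOCK */

/* Non-blocking attempt, analogous to nfs_page_group_lock(head, true). */
static bool group_trylock(void)
{
	pthread_mutex_lock(&group_mtx);
	bool got = !group_locked;
	if (got)
		group_locked = true;
	pthread_mutex_unlock(&group_mtx);
	return got;
}

/* Sleep until the group lock clears without taking it, analogous to nfs_page_group_lock_wait(). */
static void group_lock_wait(void)
{
	pthread_mutex_lock(&group_mtx);
	while (group_locked)
		pthread_cond_wait(&group_cv, &group_mtx);
	pthread_mutex_unlock(&group_mtx);
}

/* Release the group lock and wake waiters, analogous to nfs_page_group_unlock(). */
static void group_unlock(void)
{
	pthread_mutex_lock(&group_mtx);
	group_locked = false;
	pthread_cond_broadcast(&group_cv);
	pthread_mutex_unlock(&group_mtx);
}

static int lock_and_join(bool nonblock)
{
try_again:
	pthread_mutex_lock(&outer_lock);

	/* Holding the outer lock, so only a non-blocking attempt is safe. */
	if (!group_trylock()) {
		pthread_mutex_unlock(&outer_lock);

		if (!nonblock) {
			/* Outer lock is dropped, so sleeping is now allowed. */
			group_lock_wait();
			goto try_again;
		}
		return -1; /* the real code returns ERR_PTR(-EAGAIN) */
	}

	/* ... join requests under both locks here ... */

	group_unlock();
	pthread_mutex_unlock(&outer_lock);
	return 0;
}

int main(void)
{
	return lock_and_join(false);
}

The ordering is the point of the patch: the blocking wait happens only after the outer (spin)lock has been dropped, which is what silences the might_sleep() warnings, and the cost is that the caller must go back to try_again and redo the lookup, since the state may have changed while it slept.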