From patchwork Sun Apr 10 07:29:44 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pavel Shilovsky
X-Patchwork-Id: 696401
From: Pavel Shilovsky
To: linux-cifs@vger.kernel.org
Subject: [PATCH 1/2] CIFS: Add launder_page operation (try #2)
Date: Sun, 10 Apr 2011 11:29:44 +0400
Message-Id: <1302420584-4307-1-git-send-email-piastry@etersoft.ru>
X-Mailer: git-send-email 1.7.1
Sender: linux-cifs-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-cifs@vger.kernel.org

Adding this lets us drop filemap_write_and_wait from cifs_invalidate_mapping
and simplifies the code that handles the invalidate logic.

Signed-off-by: Pavel Shilovsky
Reviewed-by: Jeff Layton
---
 fs/cifs/file.c |   46 ++++++++++++++++++++++++++++++++++++++++++++++----
 1 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 9c7f83f..613f965 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1331,9 +1331,10 @@ retry_write:
 	return rc;
 }
 
-static int cifs_writepage(struct page *page, struct writeback_control *wbc)
+static int
+cifs_writepage_locked(struct page *page, struct writeback_control *wbc)
 {
-	int rc = -EFAULT;
+	int rc;
 	int xid;
 
 	xid = GetXid();
@@ -1353,15 +1354,29 @@ static int cifs_writepage(struct page *page, struct writeback_control *wbc)
 	 * to fail to update with the state of the page correctly.
 	 */
 	set_page_writeback(page);
+retry_write:
 	rc = cifs_partialpagewrite(page, 0, PAGE_CACHE_SIZE);
-	SetPageUptodate(page); /* BB add check for error and Clearuptodate? */
-	unlock_page(page);
+	if (rc == -EAGAIN && wbc->sync_mode == WB_SYNC_ALL)
+		goto retry_write;
+	else if (rc == -EAGAIN)
+		redirty_page_for_writepage(wbc, page);
+	else if (rc != 0)
+		SetPageError(page);
+	else
+		SetPageUptodate(page);
 	end_page_writeback(page);
 	page_cache_release(page);
 	FreeXid(xid);
 	return rc;
 }
 
+static int cifs_writepage(struct page *page, struct writeback_control *wbc)
+{
+	int rc = cifs_writepage_locked(page, wbc);
+	unlock_page(page);
+	return rc;
+}
+
 static int cifs_write_end(struct file *file, struct address_space *mapping,
 			loff_t pos, unsigned len, unsigned copied,
 			struct page *page, void *fsdata)
@@ -2344,6 +2359,27 @@ static void cifs_invalidate_page(struct page *page, unsigned long offset)
 	cifs_fscache_invalidate_page(page, &cifsi->vfs_inode);
 }
 
+static int cifs_launder_page(struct page *page)
+{
+	int rc = 0;
+	loff_t range_start = page_offset(page);
+	loff_t range_end = range_start + (loff_t)(PAGE_CACHE_SIZE - 1);
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_ALL,
+		.nr_to_write = 0,
+		.range_start = range_start,
+		.range_end = range_end,
+	};
+
+	cFYI(1, "Launder page: %p", page);
+
+	if (clear_page_dirty_for_io(page))
+		rc = cifs_writepage_locked(page, &wbc);
+
+	cifs_fscache_invalidate_page(page, page->mapping->host);
+	return rc;
+}
+
 void cifs_oplock_break(struct work_struct *work)
 {
 	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
@@ -2415,6 +2451,7 @@ const struct address_space_operations cifs_addr_ops = {
 	.set_page_dirty = __set_page_dirty_nobuffers,
 	.releasepage = cifs_release_page,
 	.invalidatepage = cifs_invalidate_page,
+	.launder_page = cifs_launder_page,
 	/* .sync_page = cifs_sync_page, */
 	/* .direct_IO = */
 };
@@ -2433,6 +2470,7 @@ const struct address_space_operations cifs_addr_ops_smallbuf = {
 	.set_page_dirty = __set_page_dirty_nobuffers,
 	.releasepage = cifs_release_page,
 	.invalidatepage = cifs_invalidate_page,
+	.launder_page = cifs_launder_page,
 	/* .sync_page = cifs_sync_page, */
 	/* .direct_IO = */
 };
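
For context on how this hook is used: ->launder_page is invoked by the generic
page-cache invalidation path with the page already locked, and the callback must
write the page back without unlocking it; this is why cifs_writepage_locked() is
split out of cifs_writepage() above. Below is a minimal sketch of the caller
side, paraphrased from the mm/truncate.c of that era; it is not part of this
patch and the exact names and checks may differ slightly in any given tree.

/*
 * Sketch of the generic caller (paraphrased, not part of this patch):
 * invalidate_inode_pages2_range() walks the mapping and, for each page it
 * wants to drop, calls a helper along these lines.  The page is locked by
 * the caller, so ->launder_page must leave it locked when it returns.
 */
static int do_launder_page(struct address_space *mapping, struct page *page)
{
	if (!PageDirty(page))
		return 0;	/* clean page, nothing to write back */
	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
		return 0;	/* page was truncated, or no hook registered */
	return mapping->a_ops->launder_page(page);
}

With the hook wired up, invalidate_inode_pages2() can flush any still-dirty
pages on its own, which is what allows the follow-up patch to remove the
explicit filemap_write_and_wait() call from cifs_invalidate_mapping().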