From patchwork Thu Jan 21 17:46:02 2016
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 8083851
From: Ross Zwisler
To: linux-kernel@vger.kernel.org
Cc: Ross Zwisler, Alexander Viro, Andrew Morton, Dan Williams,
 Dave Chinner, Jan Kara, Matthew Wilcox, linux-fsdevel@vger.kernel.org,
 linux-nvdimm@lists.01.org
Subject: [PATCH v2 3/5] dax: improve documentation for fsync/msync
Date: Thu, 21 Jan 2016 10:46:02 -0700
Message-Id: <1453398364-22537-4-git-send-email-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1453398364-22537-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1453398364-22537-1-git-send-email-ross.zwisler@linux.intel.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Several of the subtleties and assumptions of the DAX fsync/msync
implementation are not immediately obvious, so document them with
comments.

Signed-off-by: Ross Zwisler
Reported-by: Jan Kara
Reviewed-by: Jan Kara
---
 fs/dax.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
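
As an aside for reviewers (not part of the patch): the dax_radix_entry()
rule documented in the first hunk below can be modeled in a few lines of
userspace C. All names in this sketch are invented; the fixed array stands
in for the real radix tree, and the "dirty" flag stands in for the
PAGECACHE_TAG_DIRTY tag.

/* Toy userspace model of the dax_radix_entry() upgrade rule. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

enum toy_type { TOY_EMPTY, TOY_PTE, TOY_PMD };

struct toy_entry {
	enum toy_type type;
	bool dirty;			/* models PAGECACHE_TAG_DIRTY */
};

#define NR 16
static struct toy_entry tree[NR];

/* Models dax_radix_entry(..., pmd_entry, dirty) for one index. */
static void toy_radix_entry(unsigned long index, bool pmd_entry, bool dirty)
{
	struct toy_entry *e = &tree[index];

	if (e->type != TOY_EMPTY) {
		/* Keep the existing entry unless upgrading PTE -> PMD. */
		if (!pmd_entry || e->type == TOY_PMD)
			goto out_dirty;
		/*
		 * PMD entries are only ever inserted dirty, so this
		 * replacement can never turn a dirty PTE entry into a
		 * clean PMD entry and shrink a later flush range.
		 */
		assert(dirty);
	}
	e->type = pmd_entry ? TOY_PMD : TOY_PTE;
out_dirty:
	if (dirty)
		e->dirty = true;
}

/* The fsync/msync side: flush the range covered by each dirty entry. */
static void toy_flush(void)
{
	unsigned long i;

	for (i = 0; i < NR; i++)
		if (tree[i].dirty) {
			printf("flush %s-sized range at index %lu\n",
			       tree[i].type == TOY_PMD ? "PMD" : "PTE", i);
			tree[i].dirty = false;
		}
}

int main(void)
{
	toy_radix_entry(3, false, false);	/* read fault: clean PTE */
	toy_radix_entry(3, false, true);	/* pfn_mkwrite: dirty it */
	toy_radix_entry(3, true, true);		/* PMD write: upgrade */
	toy_flush();				/* one PMD-sized flush */
	return 0;
}

Running this flushes a single PMD-sized range for index 3, which is the
invariant the comment protects: the PTE -> PMD upgrade can never downgrade
a dirty range to clean.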
diff --git a/fs/dax.c b/fs/dax.c
index d589113..55ae394 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -350,6 +350,13 @@ static int dax_radix_entry(struct address_space *mapping, pgoff_t index,
 
 		if (!pmd_entry || type == RADIX_DAX_PMD)
 			goto dirty;
+
+		/*
+		 * We only insert dirty PMD entries into the radix tree. This
+		 * means we don't need to worry about removing a dirty PTE
+		 * entry and inserting a clean PMD entry, thus reducing the
+		 * range we would flush with a follow-up fsync/msync call.
+		 */
 		radix_tree_delete(&mapping->page_tree, index);
 		mapping->nrexceptional--;
 	}
@@ -912,6 +919,21 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		}
 		dax_unmap_atomic(bdev, &dax);
 
+		/*
+		 * For PTE faults we insert a radix tree entry for reads, and
+		 * leave it clean. Then on the first write we dirty the radix
+		 * tree entry via the dax_pfn_mkwrite() path. This sequence
+		 * allows the dax_pfn_mkwrite() call to be simpler and avoid a
+		 * call into get_block() to translate the pgoff to a sector in
+		 * order to be able to create a new radix tree entry.
+		 *
+		 * The PMD path doesn't have an equivalent to
+		 * dax_pfn_mkwrite(), though, so for a read followed by a
+		 * write we traverse all the way through __dax_pmd_fault()
+		 * twice. This means we can just skip inserting a radix tree
+		 * entry completely on the initial read and just wait until
+		 * the write to insert a dirty entry.
+		 */
 		if (write) {
 			error = dax_radix_entry(mapping, pgoff, dax.sector,
 					true, true);
@@ -985,6 +1007,14 @@ int dax_pfn_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	struct file *file = vma->vm_file;
 
+	/*
+	 * We pass NO_SECTOR to dax_radix_entry() because we expect that a
+	 * RADIX_DAX_PTE entry already exists in the radix tree from a
+	 * previous call to __dax_fault(). We just want to look up that PTE
+	 * entry using vmf->pgoff and make sure the dirty tag is set. This
+	 * saves us from having to make a call to get_block() here to look
+	 * up the sector.
+	 */
 	dax_radix_entry(file->f_mapping, vmf->pgoff, NO_SECTOR, false, true);
 	return VM_FAULT_NOPAGE;
 }
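
A similar toy model (again, invented names, not kernel code) of the
fault-path division of labor described in the last two hunks: PTE faults
always insert an entry, dax_pfn_mkwrite() only needs vmf->pgoff to dirty
it (hence NO_SECTOR), and the PMD path inserts nothing until a write.
toy_get_block() is a stub standing in for the filesystem block-mapping
call that dax_pfn_mkwrite() gets to skip.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_entry {
	bool present;
	bool dirty;
	long sector;
};

#define NR 16
static struct toy_entry tree[NR];

/* Stub for the pgoff -> sector translation we want to avoid. */
static long toy_get_block(unsigned long pgoff)
{
	printf("get_block(%lu) called\n", pgoff);
	return (long)pgoff * 8;
}

/* __dax_fault(): PTE faults always insert; dirty only for writes. */
static void toy_pte_fault(unsigned long pgoff, bool write)
{
	tree[pgoff] = (struct toy_entry){
		.present = true,
		.dirty = write,
		.sector = toy_get_block(pgoff),
	};
}

/*
 * dax_pfn_mkwrite(): the read fault already inserted the entry, so a
 * lookup by pgoff alone suffices and no get_block() call is needed.
 */
static void toy_pfn_mkwrite(unsigned long pgoff)
{
	assert(tree[pgoff].present);
	tree[pgoff].dirty = true;
}

/*
 * __dax_pmd_fault(): there is no PMD mkwrite equivalent, so reads
 * insert nothing and only a write inserts the (dirty) entry.
 */
static void toy_pmd_fault(unsigned long pgoff, bool write)
{
	if (!write)
		return;
	tree[pgoff] = (struct toy_entry){
		.present = true,
		.dirty = true,
		.sector = toy_get_block(pgoff),
	};
}

int main(void)
{
	toy_pte_fault(2, false);	/* read: clean entry, one get_block() */
	toy_pfn_mkwrite(2);		/* first write: dirtied, no get_block() */

	toy_pmd_fault(4, false);	/* read: nothing inserted */
	toy_pmd_fault(4, true);		/* write: dirty entry inserted */

	printf("entry 2 dirty=%d, entry 4 dirty=%d\n",
	       (int)tree[2].dirty, (int)tree[4].dirty);
	return 0;
}

The assert captures why NO_SECTOR works: by the time pfn_mkwrite() runs,
the entry from the earlier read fault must already be present, so no
pgoff-to-sector translation is needed.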