From patchwork Thu Jul 27 13:12:44 2017
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 9866893
From: Jan Kara
To:
Cc: Ross Zwisler, Dan Williams, Andy Lutomirski, linux-nvdimm@lists.01.org,
 Christoph Hellwig, Dave Chinner, Jan Kara
Subject: [PATCH 6/7] dax: Implement dax_pfn_mkwrite()
Date: Thu, 27 Jul 2017 15:12:44 +0200
Message-Id:
<20170727131245.28279-7-jack@suse.cz>
X-Mailer: git-send-email 2.12.3
In-Reply-To: <20170727131245.28279-1-jack@suse.cz>
References: <20170727131245.28279-1-jack@suse.cz>
Sender: linux-xfs-owner@vger.kernel.org
X-Mailing-List: linux-xfs@vger.kernel.org

Implement a function that marks an existing page table entry (PTE or PMD)
as writeable and takes care of marking it dirty in the radix tree. This
function will be used to finish a synchronous page fault, where the page
table entry is first inserted as read-only and then needs to be marked as
read-write.

Signed-off-by: Jan Kara
---
 fs/dax.c            | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dax.h |  1 +
 2 files changed, 49 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index 8a6cf158c691..90b763c86dc2 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1485,3 +1485,51 @@ int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
 	}
 }
 EXPORT_SYMBOL_GPL(dax_iomap_fault);
+
+/**
+ * dax_pfn_mkwrite - make page table entry writeable on a DAX file
+ * @vmf: The description of the fault
+ * @pe_size: size of entry to be marked writeable
+ *
+ * This function marks a PTE or PMD entry as writeable in the page tables
+ * for a mmaped DAX file. It takes care of marking the corresponding radix
+ * tree entry as dirty as well.
+ */
+int dax_pfn_mkwrite(struct vm_fault *vmf, enum page_entry_size pe_size)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	void *entry, **slot;
+	pgoff_t index = vmf->pgoff;
+	pfn_t pfn = pfn_to_pfn_t(pte_pfn(vmf->orig_pte));
+	int vmf_ret, error;
+
+	spin_lock_irq(&mapping->tree_lock);
+	entry = get_unlocked_mapping_entry(mapping, index, &slot);
+	/* Did we race with someone splitting the entry? */
+	if (!entry ||
+	    (pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
+	    (pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
+		put_unlocked_mapping_entry(mapping, index, entry);
+		spin_unlock_irq(&mapping->tree_lock);
+		return VM_FAULT_NOPAGE;
+	}
+	radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY);
+	entry = lock_slot(mapping, slot);
+	spin_unlock_irq(&mapping->tree_lock);
+	switch (pe_size) {
+	case PE_SIZE_PTE:
+		error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
+		vmf_ret = dax_fault_return(error);
+		break;
+#ifdef CONFIG_FS_DAX_PMD
+	case PE_SIZE_PMD:
+		vmf_ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
+				pfn, true);
+		break;
+#endif
+	default:
+		vmf_ret = VM_FAULT_FALLBACK;
+	}
+	put_locked_mapping_entry(mapping, index);
+	return vmf_ret;
+}
+EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 98950f4d127e..6ce5912e4516 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -92,6 +92,7 @@ ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
 		const struct iomap_ops *ops);
 int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
 		bool sync, const struct iomap_ops *ops);
+int dax_pfn_mkwrite(struct vm_fault *vmf, enum page_entry_size pe_size);
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
 int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 		pgoff_t index);
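[Editor's note] To illustrate where the new helper fits, below is a rough,
hypothetical sketch of a filesystem caller, not part of this patch. The
function name fs_dax_fault, the iomap ops name fs_iomap_ops, the use of
IS_SYNC() to decide on synchronous faults, and the VM_FAULT_NEEDDSYNC
signal are all assumptions made for illustration: the idea is that
dax_iomap_fault() first installs the entry read-only, the filesystem
commits metadata for the allocated blocks, and dax_pfn_mkwrite() then
upgrades the already-inserted entry to writeable.

	/*
	 * Hypothetical caller sketch (illustrative only): finish a
	 * synchronous page fault by upgrading the read-only entry.
	 */
	static int fs_dax_fault(struct vm_fault *vmf, enum page_entry_size pe_size)
	{
		/* Assumed policy: inode marked for synchronous I/O. */
		bool sync = IS_SYNC(file_inode(vmf->vma->vm_file));
		int result;

		/* Installs the PTE/PMD read-only when sync is requested. */
		result = dax_iomap_fault(vmf, pe_size, sync, &fs_iomap_ops);
		if (result & VM_FAULT_NEEDDSYNC) {
			/* Filesystem commits metadata for the blocks here... */
			/* ...then marks the existing entry writeable and dirty. */
			result = dax_pfn_mkwrite(vmf, pe_size);
		}
		return result;
	}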