From patchwork Fri Jul 22 12:19:39 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jan Kara <jack@suse.cz>
X-Patchwork-Id: 9243473
From: Jan Kara <jack@suse.cz>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org,
	Dan Williams, Ross Zwisler, Jan Kara
Subject: [PATCH 13/15] mm: Provide helper for finishing mkwrite faults
Date: Fri, 22 Jul 2016 14:19:39 +0200
Message-Id: <1469189981-19000-14-git-send-email-jack@suse.cz>
X-Mailer: git-send-email 2.6.6
In-Reply-To: <1469189981-19000-1-git-send-email-jack@suse.cz>
References: <1469189981-19000-1-git-send-email-jack@suse.cz>
List-ID: linux-fsdevel@vger.kernel.org

Provide a helper function for finishing write faults due to the PTE being
read-only. The helper will be used by DAX to avoid complicating generic MM
code with DAX locking specifics.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 62 +++++++++++++++++++++++++++++++++++-------------------
 2 files changed, 41 insertions(+), 22 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index daf690fccc0c..32ff572a6e6c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -601,6 +601,7 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 void do_set_pte(struct vm_area_struct *vma, unsigned long address,
 		struct page *page, pte_t *pte, bool write, bool anon);
 int finish_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
+int finish_mkwrite_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 
 #endif
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 1d2916c53d43..30cf7b36df48 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2262,6 +2262,41 @@ oom:
 	return VM_FAULT_OOM;
 }
 
+/**
+ * finish_mkwrite_fault - finish page fault making PTE writeable once the
+ * page is prepared
+ *
+ * @vma: virtual memory area
+ * @vmf: structure describing the fault
+ *
+ * This function handles all that is needed to finish a write page fault due
+ * to the PTE being read-only once the mapped page is prepared. It handles
+ * locking of the PTE and modifying it. The function returns 0 on success,
+ * or an error if the PTE changed before we acquired the PTE lock.
+ *
+ * The function expects the page to be locked or some other protection held
+ * against concurrent faults / writeback (such as DAX radix tree locks).
+ */
+int finish_mkwrite_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	unsigned long address = (unsigned long)vmf->virtual_address;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, address, &ptl);
+	/*
+	 * We might have raced with another page fault while we
+	 * released the pte_offset_map_lock.
+	 */
+	if (!pte_same(*pte, vmf->orig_pte)) {
+		pte_unmap_unlock(pte, ptl);
+		return -EBUSY;
+	}
+	wp_page_reuse(vma->vm_mm, vma, address, pte, ptl, vmf->orig_pte,
+		      vmf->page);
+	return 0;
+}
+
 /*
  * Handle write page faults for VM_MIXEDMAP or VM_PFNMAP for a VM_SHARED
  * mapping
@@ -2282,17 +2317,12 @@ static int wp_pfn_shared(struct mm_struct *mm,
 		ret = vma->vm_ops->pfn_mkwrite(vma, &vmf);
 		if (ret & VM_FAULT_ERROR)
 			return ret;
-		page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
-		/*
-		 * We might have raced with another page fault while we
-		 * released the pte_offset_map_lock.
-		 */
-		if (!pte_same(*page_table, orig_pte)) {
-			pte_unmap_unlock(page_table, ptl);
+		if (finish_mkwrite_fault(vma, &vmf) < 0)
 			return 0;
-		}
+	} else {
+		wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte,
+			      NULL);
 	}
-	wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte, NULL);
 	return VM_FAULT_WRITE;
 }
 
@@ -2319,28 +2349,16 @@ static int wp_page_shared(struct mm_struct *mm, struct vm_area_struct *vma,
 			put_page(old_page);
 			return tmp;
 		}
-		/*
-		 * Since we dropped the lock we need to revalidate
-		 * the PTE as someone else may have changed it.  If
-		 * they did, we just return, as we can count on the
-		 * MMU to tell us if they didn't also make it writable.
-		 */
-		page_table = pte_offset_map_lock(mm, pmd, address,
-						 &ptl);
-		if (!pte_same(*page_table, orig_pte)) {
+		if (finish_mkwrite_fault(vma, &vmf) < 0) {
 			unlock_page(old_page);
-			pte_unmap_unlock(page_table, ptl);
 			put_page(old_page);
 			return 0;
 		}
-		wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte,
-			      old_page);
 	} else {
 		wp_page_reuse(mm, vma, address, page_table, ptl, orig_pte,
 			      old_page);
 		lock_page(old_page);
 	}
-
 	fault_dirty_shared_page(vma, old_page);
 	put_page(old_page);
 	return VM_FAULT_WRITE;