From patchwork Tue Apr 7 08:40:06 2015
X-Patchwork-Submitter: Boaz Harrosh
X-Patchwork-Id: 6167761
Message-ID: <552397E6.5030506@plexistor.com>
Date: Tue, 07 Apr 2015 11:40:06 +0300
From: Boaz Harrosh
To: Dave Chinner, Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov",
 Jan Kara, Hugh Dickins, Mel Gorman, linux-mm@kvack.org, linux-nvdimm,
 linux-fsdevel, Eryu Guan, Christoph Hellwig
Cc: Stable Tree
In-Reply-To: <55239645.9000507@plexistor.com>
References: <55239645.9000507@plexistor.com>
Subject: [Linux-nvdimm] [PATCH 1/3] mm(v4.1): New pfn_mkwrite same as
 page_mkwrite for VM_PFNMAP
[v2] Based on linux-next/akpm [3dc4623]. For the v4.1 merge window.
     Incorporated comments from Andrew and Kirill.

[v1] This will allow a FS that uses VM_PFNMAP | VM_MIXEDMAP (no page
structs) to get notified when an access is a write to a read-only PFN.

This can happen if we mmap() a file, then first mmap-read from it to
page-in a read-only PFN, and then mmap-write to the same page. We need
this functionality to fix a DAX bug where, in the scenario above, we
fail to set ctime/mtime even though we modified the file. An xfstest
that shows the failure and the fix is attached to this patchset. (A DAX
patch will follow.)

This functionality is extra important for us because, upon dirtying of
a pmem page, we also want to RDMA the page to a remote cluster node.

We define a new pfn_mkwrite and do not reuse page_mkwrite because:

1 - The name ;-)
2 - But mainly because it would take a very long and tedious audit of
    all page_mkwrite functions of VM_MIXEDMAP/VM_PFNMAP users to make
    sure they do not now crash; for example, the current DAX code
    (which this is for) would crash. If we wanted to reuse
    page_mkwrite, we would need to first patch all users so they do
    not crash on a NULL page, and only then enable this patch. But
    even if I did that, I would not sleep so well at night. Adding a
    new vector is the safest thing to do, and it is not that
    expensive: an extra pointer in a static function vector per
    driver. The new vector is also better for performance, because
    otherwise we would call every current kernel vector just so it can
    see there is no page, do nothing, and return.

There is no need to call it from do_shared_fault because do_wp_page is
called to change the pte permissions anyway.
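To make the failing scenario concrete, here is a minimal user-space
sketch of it. This is only an illustration: /mnt/pmem/testfile is an
assumed path (any mmap()ed DAX file of at least one page will do), and
the authoritative reproducer remains the attached xfstest.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat before, after;
	char *p;
	int fd = open("/mnt/pmem/testfile", O_RDWR);	/* assumed path */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	(void)*(volatile char *)p;	/* mmap-read: pages-in a read-only PFN */
	fstat(fd, &before);
	sleep(1);			/* let the timestamp tick */
	*p = 'x';			/* mmap-write: only flips pte permissions */

	fstat(fd, &after);
	/* Without ->pfn_mkwrite the FS never hears about the write */
	printf("mtime %s\n", after.st_mtime == before.st_mtime ?
	       "NOT updated (the bug)" : "updated (fixed)");
	munmap(p, 4096);
	close(fd);
	return 0;
}

The final store goes straight through the already-present pte, so
without the new vector the FS is never called and the timestamps stay
stale.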
CC: Matthew Wilcox
CC: Kirill A. Shutemov
CC: Jan Kara
CC: Andrew Morton
CC: Hugh Dickins
CC: Mel Gorman
CC: linux-mm@kvack.org
Signed-off-by: Yigal Korman
Signed-off-by: Boaz Harrosh
---
 include/linux/mm.h |  3 +++
 mm/memory.c        | 35 +++++++++++++++++++++++++++++++----
 2 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d584b95..70c47f2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -251,6 +251,9 @@ struct vm_operations_struct {
 	 * writable, if an error is returned it will cause a SIGBUS */
 	int (*page_mkwrite)(struct vm_area_struct *vma, struct vm_fault *vmf);
 
+	/* same as page_mkwrite when using VM_PFNMAP|VM_MIXEDMAP */
+	int (*pfn_mkwrite)(struct vm_area_struct *vma, struct vm_fault *vmf);
+
 	/* called by access_process_vm when get_user_pages() fails, typically
 	 * for use by special VMAs that can switch between memory and hardware
 	 */
diff --git a/mm/memory.c b/mm/memory.c
index 59f6268..6e8f3f6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1982,6 +1982,19 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
 	return ret;
 }
 
+static int do_pfn_mkwrite(struct vm_area_struct *vma, unsigned long address)
+{
+	struct vm_fault vmf = {
+		.page = 0,
+		.pgoff = (((address & PAGE_MASK) - vma->vm_start)
+					>> PAGE_SHIFT) + vma->vm_pgoff,
+		.virtual_address = (void __user *)(address & PAGE_MASK),
+		.flags = FAULT_FLAG_WRITE | FAULT_FLAG_MKWRITE,
+	};
+
+	return vma->vm_ops->pfn_mkwrite(vma, &vmf);
+}
+
 /*
  * Handle write page faults for pages that can be reused in the current vma
  *
@@ -2259,14 +2272,28 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * VM_PFNMAP VMA.
 		 *
 		 * We should not cow pages in a shared writeable mapping.
-		 * Just mark the pages writable as we can't do any dirty
-		 * accounting on raw pfn maps.
+		 * Just mark the pages writable and/or call ops->pfn_mkwrite.
 		 */
 		if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
-				     (VM_WRITE|VM_SHARED))
+				     (VM_WRITE|VM_SHARED)) {
+			if (vma->vm_ops && vma->vm_ops->pfn_mkwrite) {
+				int ret;
+
+				pte_unmap_unlock(page_table, ptl);
+				ret = do_pfn_mkwrite(vma, address);
+				if (ret & VM_FAULT_ERROR)
+					return ret;
+				page_table = pte_offset_map_lock(mm, pmd,
+								 address, &ptl);
+				/* Did pfn_mkwrite already fix up the pte? */
+				if (!pte_same(*page_table, orig_pte)) {
+					pte_unmap_unlock(page_table, ptl);
+					return ret;
+				}
+			}
 			return wp_page_reuse(mm, vma, address, page_table, ptl,
 					     orig_pte, old_page, 0, 0);
-
+		}
 		pte_unmap_unlock(page_table, ptl);
 		return wp_page_copy(mm, vma, address, page_table, pmd,
 				    orig_pte, old_page);
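For context, a minimal sketch of what a ->pfn_mkwrite implementation
could look like for the DAX case this series targets. This is an
illustration under stated assumptions, not the follow-up DAX patch
itself: dax_pfn_mkwrite, dax_fault and dax_mkwrite are hypothetical
names here.

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Illustrative sketch only. The hook just does the bookkeeping that
 * flipping the pte writable cannot do by itself.
 */
static int dax_pfn_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct super_block *sb = file_inode(vma->vm_file)->i_sb;

	sb_start_pagefault(sb);
	file_update_time(vma->vm_file);	/* the missing ctime/mtime update */
	sb_end_pagefault(sb);

	/* do_wp_page()/wp_page_reuse() makes the pte itself writable */
	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct dax_vm_ops = {
	.fault		= dax_fault,		/* assumed existing handlers */
	.page_mkwrite	= dax_mkwrite,
	.pfn_mkwrite	= dax_pfn_mkwrite,
};

Returning VM_FAULT_NOPAGE (no error bits set) lets the do_wp_page()
path above re-take the pte lock and, if the handler left the pte
untouched, proceed to wp_page_reuse(), which performs the actual
write-enable.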