From patchwork Tue Jun 23 11:44:53 2015
Subject: Re: [PATCH] mm: Fix MAP_POPULATE and mlock() for DAX
From: "Kirill A. Shutemov"
To: Toshi Kani
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    akpm@linux-foundation.org, kirill.shutemov@linux.intel.com
Date: Tue, 23 Jun 2015 14:44:53 +0300
Message-ID: <20150623114453.GA8603@node.dhcp.inet.fi>
In-Reply-To: <1435006555.11808.210.camel@misato.fc.hp.com>
References: <1434493710-11138-1-git-send-email-toshi.kani@hp.com>
    <20150620194612.GA5268@node.dhcp.inet.fi>
    <1435006555.11808.210.camel@misato.fc.hp.com>
X-Patchwork-Id: 6659751
On Mon, Jun 22, 2015 at 02:55:55PM -0600, Toshi Kani wrote:
> On Sat, 2015-06-20 at 22:46 +0300, Kirill A. Shutemov wrote:
> > On Tue, Jun 16, 2015 at 04:28:30PM -0600, Toshi Kani wrote:
> > > DAX has the following issues in a shared or read-only private
> > > mmap'd file:
> > >  - mmap(MAP_POPULATE) does not pre-fault
> > >  - mlock() fails with -ENOMEM
> > >
> > > DAX uses VM_MIXEDMAP for mmap'd files, which do not have a struct
> > > page associated with their ranges. Both MAP_POPULATE and mlock()
> > > call __mm_populate(), which in turn calls __get_user_pages().
> > > Because __get_user_pages() requires a valid page returned from
> > > follow_page_mask(), MAP_POPULATE and mlock(), i.e. FOLL_POPULATE,
> > > fail at the first page.
> > >
> > > Change __get_user_pages() to proceed with FOLL_POPULATE when the
> > > translation is set but its page does not exist (-EFAULT) and
> > > @pages is not requested. With that, MAP_POPULATE and mlock()
> > > set translations for the requested range and complete successfully.
> > >
> > > MAP_POPULATE still provides a major performance improvement for
> > > DAX, as it avoids page faults during initial access to the pages.
> > >
> > > mlock() continues to set VM_LOCKED on the vma and populate the
> > > range. Since there is no struct page, the range is pinned without
> > > marking pages mlocked.
> > >
> > > Note, MAP_POPULATE and mlock() already work for a writable
> > > private mmap'd file on DAX, since populate_vma_page_range() breaks
> > > COW, which allocates page cache pages.
> >
> > I don't think that's true in all cases.
> >
> > We would fail to break COW for mlock() if the mapping is populated
> > with read-only entries by the time mlock() is called. In this case
> > follow_page_mask() would fail with -EFAULT and faultin_page() would
> > never be executed.
>
> No, mlock() always breaks COW, as populate_vma_page_range() sets
> FOLL_WRITE in the case of a writable private mmap:
>
> 	/*
> 	 * We want to touch writable mappings with a write fault in order
> 	 * to break COW, except for shared mappings because these don't COW
> 	 * and we would not want to dirty them for nothing.
> 	 */
> 	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
> 		gup_flags |= FOLL_WRITE;

Okay, you're right, it should work.

What about doing this in a more generic way? The totally untested patch
below tries to make GUP work on DAX and other pfn maps when a struct
page is not required.

Any comments?
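For concreteness, here is a minimal userspace sketch of the failure mode
described above. It assumes /mnt/dax/data is a file on a DAX-mounted
filesystem; the path is hypothetical. On an unpatched kernel, the
MAP_POPULATE below silently fails to pre-fault the shared mapping and
mlock() fails with ENOMEM:

/*
 * Illustrative sketch (not part of the patch): reproducer for the
 * MAP_POPULATE/mlock() failure on a shared DAX mapping.  The path
 * /mnt/dax/data is hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	int fd = open("/mnt/dax/data", O_RDWR);

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* MAP_POPULATE asks the kernel to pre-fault the whole range */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_POPULATE, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Fails with ENOMEM on DAX before the fix */
	if (mlock(p, len) < 0)
		perror("mlock");

	munlock(p, len);
	munmap(p, len);
	close(fd);
	return EXIT_SUCCESS;
}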
Reviewed-by: Toshi Kani

diff --git a/mm/gup.c b/mm/gup.c
index 222d57e335f9..03645f400748 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -33,6 +33,30 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 	return NULL;
 }
 
+static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int flags)
+{
+	/* No page to get reference */
+	if (flags & FOLL_GET)
+		return -EFAULT;
+
+	if (flags & FOLL_TOUCH) {
+		pte_t entry = *pte;
+
+		if (flags & FOLL_WRITE)
+			entry = pte_mkdirty(entry);
+		entry = pte_mkyoung(entry);
+
+		if (!pte_same(*pte, entry)) {
+			set_pte_at(vma->vm_mm, address, pte, entry);
+			update_mmu_cache(vma, address, pte);
+		}
+	}
+
+	/* Proper page table entry exists, but no corresponding struct page */
+	return -EEXIST;
+}
+
 static struct page *follow_page_pte(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmd, unsigned int flags)
 {
@@ -74,10 +98,21 @@ retry:
 
 	page = vm_normal_page(vma, address, pte);
 	if (unlikely(!page)) {
-		if ((flags & FOLL_DUMP) ||
-		    !is_zero_pfn(pte_pfn(pte)))
-			goto bad_page;
-		page = pte_page(pte);
+		if (flags & FOLL_DUMP) {
+			/* Avoid special (like zero) pages in core dumps */
+			page = ERR_PTR(-EFAULT);
+			goto out;
+		}
+
+		if (is_zero_pfn(pte_pfn(pte))) {
+			page = pte_page(pte);
+		} else {
+			int ret;
+
+			ret = follow_pfn_pte(vma, address, ptep, flags);
+			page = ERR_PTR(ret);
+			goto out;
+		}
 	}
 
 	if (flags & FOLL_GET)
@@ -115,12 +150,9 @@ retry:
 			unlock_page(page);
 		}
 	}
+out:
 	pte_unmap_unlock(ptep, ptl);
 	return page;
-bad_page:
-	pte_unmap_unlock(ptep, ptl);
-	return ERR_PTR(-EFAULT);
-
 no_page:
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
@@ -490,9 +522,15 @@ retry:
 				goto next_page;
 			}
 			BUG();
-		}
-		if (IS_ERR(page))
+		} else if (PTR_ERR(page) == -EEXIST) {
+			/*
+			 * Proper page table entry exists, but no corresponding
+			 * struct page.
+			 */
+			goto next_page;
+		} else if (IS_ERR(page)) {
 			return i ? i : PTR_ERR(page);
+		}
 		if (pages) {
 			pages[i] = page;
 			flush_anon_page(vma, page, start);
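As a usage note, the COW distinction from the exchange above is
observable from userspace. The sketch below (same hypothetical
/mnt/dax/data path) mlocks the same file twice: through a writable
private mapping, where populate_vma_page_range() adds FOLL_WRITE and the
COW break allocates ordinary pages, so mlock() succeeds even without
this patch; and through a shared mapping, which stays pfn-only and
needs the pfn-map handling above:

/*
 * Illustrative sketch: writable MAP_PRIVATE vs MAP_SHARED mlock()
 * behaviour on a DAX file.  The path is hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void try_mlock(int fd, size_t len, int flags, const char *what)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, flags, fd, 0);

	if (p == MAP_FAILED) {
		perror(what);
		return;
	}
	if (mlock(p, len) < 0)
		fprintf(stderr, "%s: mlock: %m\n", what);
	else
		printf("%s: mlock ok\n", what);
	munlock(p, len);
	munmap(p, len);
}

int main(void)
{
	int fd = open("/mnt/dax/data", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* COW broken via FOLL_WRITE: works even before this patch */
	try_mlock(fd, 4096, MAP_PRIVATE, "private writable");
	/* No COW to break: needs the pfn-map handling above */
	try_mlock(fd, 4096, MAP_SHARED, "shared");
	close(fd);
	return 0;
}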