From patchwork Fri Mar 11 22:59:04 2016
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Hugh Dickins, Andrea Arcangeli, Andrew Morton
Cc: Dave Hansen, Vlastimil Babka, Christoph Lameter, Naoya Horiguchi,
    Jerome Marchand, Yang Shi, Sasha Levin, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    "Kirill A. Shutemov"
Subject: [PATCHv4 12/25] thp: skip file huge pmd on copy_huge_pmd()
Date: Sat, 12 Mar 2016 01:59:04 +0300
Message-Id: <1457737157-38573-13-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1457737157-38573-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1457737157-38573-1-git-send-email-kirill.shutemov@linux.intel.com>

copy_page_range() has a check for "Don't copy ptes where a page fault
will fill them correctly", but it works at the VMA level. We still copy
all page table entries from private file mappings, even when they map
page cache.

We can simplify copy_huge_pmd() a bit by skipping file PMDs entirely. We
never map private (COW) copies of file pages with PMDs, so a huge PMD in
a file-backed VMA can only map page cache. It is safe to skip such PMDs:
the entries can be re-faulted later.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/huge_memory.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)
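[ For reference, the VMA-level check in copy_page_range() that the commit
  message refers to looks roughly like this -- quoted from mm/memory.c
  around v4.5; the exact flag set varies between kernel versions:

	/*
	 * Don't copy ptes where a page fault will fill them correctly.
	 * Fork becomes much lighter when there are big shared or private
	 * readonly mappings. The tradeoff is that copy_page_range is more
	 * efficient than faulting.
	 */
	if (!(vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP)) &&
			!vma->anon_vma)
		return 0;

  A private file mapping that has been written to has an anon_vma, so it
  fails this check and still gets walked; the new PMD-level check below
  is finer-grained and skips page-cache PMDs even in that case. ]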
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 51c54d5a945b..0ef06d47bd24 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1082,14 +1082,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	struct page *src_page;
 	pmd_t pmd;
 	pgtable_t pgtable = NULL;
-	int ret;
+	int ret = -ENOMEM;
 
-	if (!vma_is_dax(vma)) {
-		ret = -ENOMEM;
-		pgtable = pte_alloc_one(dst_mm, addr);
-		if (unlikely(!pgtable))
-			goto out;
-	}
+	/* Skip if can be re-filled on fault */
+	if (!vma_is_anonymous(vma))
+		return 0;
+
+	pgtable = pte_alloc_one(dst_mm, addr);
+	if (unlikely(!pgtable))
+		goto out;
 
 	dst_ptl = pmd_lock(dst_mm, dst_pmd);
 	src_ptl = pmd_lockptr(src_mm, src_pmd);
@@ -1097,7 +1098,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 	ret = -EAGAIN;
 	pmd = *src_pmd;
-	if (unlikely(!pmd_trans_huge(pmd) && !pmd_devmap(pmd))) {
+	if (unlikely(!pmd_trans_huge(pmd))) {
 		pte_free(dst_mm, pgtable);
 		goto out_unlock;
 	}
@@ -1120,16 +1121,13 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		goto out_unlock;
 	}
 
-	if (!vma_is_dax(vma)) {
-		/* thp accounting separate from pmd_devmap accounting */
-		src_page = pmd_page(pmd);
-		VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
-		get_page(src_page);
-		page_dup_rmap(src_page, true);
-		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
-		atomic_long_inc(&dst_mm->nr_ptes);
-		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
-	}
+	src_page = pmd_page(pmd);
+	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
+	get_page(src_page);
+	page_dup_rmap(src_page, true);
+	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	atomic_long_inc(&dst_mm->nr_ptes);
+	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
 
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
 	pmd = pmd_mkold(pmd_wrprotect(pmd));
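[ The "can be re-faulted later" claim can be illustrated from userspace
  with a minimal sketch -- my illustration, not part of the patch; the
  temp file path is arbitrary. After fork() the child gets no copied
  entries for the file-backed range, yet its first access simply
  re-faults the data from the page cache:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/tmp/refault-demo", O_RDWR | O_CREAT, 0600);
		char *p;

		if (fd < 0 || ftruncate(fd, 4096))
			return 1;
		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		memcpy(p, "hello", 6);	/* parent faults the page in */
		if (fork() == 0) {
			/* child: this read faults and is filled from
			 * the page cache, not from copied entries */
			printf("child sees: %s\n", p);
			_exit(0);
		}
		wait(NULL);
		return 0;
	} ]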