From patchwork Wed Jun 15 20:06:06 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 9179417
From: "Kirill A. Shutemov"
To: Hugh Dickins, Andrea Arcangeli, Andrew Morton
Cc: Dave Hansen, Vlastimil Babka, Christoph Lameter, Naoya Horiguchi,
	Jerome Marchand, Yang Shi, Sasha Levin, Andres Lagar-Cavilla, Ning Qu,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Ebru Akagunduz, Rik van Riel,
	"Kirill A. Shutemov", Joonsoo Kim, Cyrill Gorcunov, Mel Gorman,
	David Rientjes, "Aneesh Kumar K.V", Johannes Weiner, Michal Hocko,
	Minchan Kim
Subject: [PATCHv9-rebased2 01/37] mm, thp: make swapin readahead under
	down_read of mmap_sem
Date: Wed, 15 Jun 2016 23:06:06 +0300
Message-Id: <1466021202-61880-2-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1466021202-61880-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1465222029-45942-1-git-send-email-kirill.shutemov@linux.intel.com>
	<1466021202-61880-1-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Ebru Akagunduz

Currently khugepaged performs swapin readahead with mmap_sem held for
write (down_write). This patch makes the swapin readahead run under
down_read instead.

The patch was tested with a test program that allocates 800MB of memory,
writes to it, and then sleeps; the system is forced to swap all of it
out. Afterwards, the test program touches the area again by writing,
skipping one page in every 20 pages of the area.
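For illustration, a minimal user-space sketch of such a test program
might look as follows. The mmap flags, sleep duration, and variable
names here are assumptions; the original test program is not included
in this patch.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define AREA_SIZE	(800UL << 20)	/* 800MB, as in the description */
#define STRIDE		20		/* skip one page in every 20 */

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	unsigned long i;
	char *area;

	area = mmap(NULL, AREA_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(area, 1, AREA_SIZE);	/* populate every page */
	sleep(600);			/* wait while the area is swapped out */

	/* Touch the area again by writing, skipping one page in every 20. */
	for (i = 0; i < AREA_SIZE / psize; i++)
		if (i % STRIDE)
			area[i * psize] = 2;

	pause();			/* keep the mapping alive */
	return 0;
}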
Link: http://lkml.kernel.org/r/1464335964-6510-4-git-send-email-ebru.akagunduz@gmail.com
Signed-off-by: Ebru Akagunduz
Cc: Hugh Dickins
Cc: Rik van Riel
Cc: "Kirill A. Shutemov"
Cc: Naoya Horiguchi
Cc: Andrea Arcangeli
Cc: Joonsoo Kim
Cc: Cyrill Gorcunov
Cc: Mel Gorman
Cc: David Rientjes
Cc: Vlastimil Babka
Cc: Aneesh Kumar K.V
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Minchan Kim
Signed-off-by: Andrew Morton
---
 mm/huge_memory.c | 92 ++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 63 insertions(+), 29 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f2bc57c45d2f..96dfe3f09bf6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2378,6 +2378,35 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
 }
 
 /*
+ * If mmap_sem was temporarily dropped, revalidate the vma
+ * before taking mmap_sem again.
+ * Return 0 if it succeeds, otherwise return a non-zero
+ * value (scan code).
+ */
+
+static int hugepage_vma_revalidate(struct mm_struct *mm,
+				   struct vm_area_struct *vma,
+				   unsigned long address)
+{
+	unsigned long hstart, hend;
+
+	if (unlikely(khugepaged_test_exit(mm)))
+		return SCAN_ANY_PROCESS;
+
+	vma = find_vma(mm, address);
+	if (!vma)
+		return SCAN_VMA_NULL;
+
+	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
+	hend = vma->vm_end & HPAGE_PMD_MASK;
+	if (address < hstart || address + HPAGE_PMD_SIZE > hend)
+		return SCAN_ADDRESS_RANGE;
+	if (!hugepage_vma_check(vma))
+		return SCAN_VMA_CHECK;
+	return 0;
+}
+
+/*
  * Bring missing pages in from swap, to complete THP collapse.
  * Only done if khugepaged_scan_pmd believes it is worthwhile.
  *
@@ -2385,7 +2414,7 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
  * but with mmap_sem held to protect against vma changes.
  */
 
-static void __collapse_huge_page_swapin(struct mm_struct *mm,
+static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 					struct vm_area_struct *vma,
 					unsigned long address, pmd_t *pmd)
 {
@@ -2401,11 +2430,18 @@ static void __collapse_huge_page_swapin(struct mm_struct *mm,
 			continue;
 		swapped_in++;
 		ret = do_swap_page(mm, vma, _address, pte, pmd,
-				   FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_RETRY_NOWAIT,
+				   FAULT_FLAG_ALLOW_RETRY,
 				   pteval);
+		/* do_swap_page returns VM_FAULT_RETRY with released mmap_sem */
+		if (ret & VM_FAULT_RETRY) {
+			down_read(&mm->mmap_sem);
+			/* vma is no longer available, don't continue to swapin */
+			if (hugepage_vma_revalidate(mm, vma, address))
+				return false;
+		}
 		if (ret & VM_FAULT_ERROR) {
 			trace_mm_collapse_huge_page_swapin(mm, swapped_in, 0);
-			return;
+			return false;
 		}
 		/* pte is unmapped now, we need to map it */
 		pte = pte_offset_map(pmd, _address);
@@ -2413,6 +2449,7 @@ static void __collapse_huge_page_swapin(struct mm_struct *mm,
 	pte--;
 	pte_unmap(pte);
 	trace_mm_collapse_huge_page_swapin(mm, swapped_in, 1);
+	return true;
 }
 
 static void collapse_huge_page(struct mm_struct *mm,
@@ -2427,7 +2464,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 	struct page *new_page;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int isolated = 0, result = 0;
-	unsigned long hstart, hend;
 	struct mem_cgroup *memcg;
 	unsigned long mmun_start;	/* For mmu_notifiers */
 	unsigned long mmun_end;	/* For mmu_notifiers */
@@ -2450,39 +2486,37 @@ static void collapse_huge_page(struct mm_struct *mm,
 		goto out_nolock;
 	}
 
-	/*
-	 * Prevent all access to pagetables with the exception of
-	 * gup_fast later hanlded by the ptep_clear_flush and the VM
-	 * handled by the anon_vma lock + PG_lock.
-	 */
-	down_write(&mm->mmap_sem);
-	if (unlikely(khugepaged_test_exit(mm))) {
-		result = SCAN_ANY_PROCESS;
+	down_read(&mm->mmap_sem);
+	result = hugepage_vma_revalidate(mm, vma, address);
+	if (result)
 		goto out;
-	}
-
-	vma = find_vma(mm, address);
-	if (!vma) {
-		result = SCAN_VMA_NULL;
-		goto out;
-	}
-	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
-	hend = vma->vm_end & HPAGE_PMD_MASK;
-	if (address < hstart || address + HPAGE_PMD_SIZE > hend) {
-		result = SCAN_ADDRESS_RANGE;
-		goto out;
-	}
-	if (!hugepage_vma_check(vma)) {
-		result = SCAN_VMA_CHECK;
-		goto out;
-	}
 
 	pmd = mm_find_pmd(mm, address);
 	if (!pmd) {
 		result = SCAN_PMD_NULL;
 		goto out;
 	}
 
-	__collapse_huge_page_swapin(mm, vma, address, pmd);
+	/*
+	 * __collapse_huge_page_swapin always returns with mmap_sem
+	 * locked. If it fails, release mmap_sem and jump directly to
+	 * label "out". Continuing to collapse causes inconsistency.
+	 */
+	if (!__collapse_huge_page_swapin(mm, vma, address, pmd)) {
+		up_read(&mm->mmap_sem);
+		goto out;
+	}
+
+	up_read(&mm->mmap_sem);
+	/*
+	 * Prevent all access to pagetables with the exception of
+	 * gup_fast later handled by the ptep_clear_flush and the VM
+	 * handled by the anon_vma lock + PG_lock.
+	 */
+	down_write(&mm->mmap_sem);
+	result = hugepage_vma_revalidate(mm, vma, address);
+	if (result)
+		goto out;
 
 	anon_vma_lock_write(vma->anon_vma);
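The pattern the hunks above implement, faulting pages in under the read
lock and then retaking mmap_sem for write, with the vma revalidated
after every point where the lock may have been dropped, can be sketched
in user space roughly as follows. This is an analogue using a pthread
rwlock; all types and helper names are invented for illustration and
are not kernel API.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for mm_struct/vma -- illustrative only. */
struct region {
	unsigned long start, end;
	bool usable;
};

static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;
static struct region the_region = { 0x100000, 0x200000, true };

/* Analogue of hugepage_vma_revalidate(): after the lock was dropped,
 * any earlier lookup may be stale, so look the region up and check it
 * again. */
static struct region *revalidate(unsigned long addr)
{
	struct region *r = &the_region;

	if (addr < r->start || addr >= r->end || !r->usable)
		return NULL;
	return r;
}

/* Analogue of do_swap_page() with FAULT_FLAG_ALLOW_RETRY: it may drop
 * and retake the lock while "waiting for I/O", reported via *dropped. */
static bool fault_in_page(unsigned long addr, bool *dropped)
{
	(void)addr;
	pthread_rwlock_unlock(&map_lock);	/* lock dropped across the wait */
	pthread_rwlock_rdlock(&map_lock);
	*dropped = true;
	return true;
}

static bool collapse(unsigned long addr)
{
	bool dropped = false;

	/* Phase 1: fault the pages in under the cheap read lock. */
	pthread_rwlock_rdlock(&map_lock);
	if (!revalidate(addr))
		goto fail;
	if (!fault_in_page(addr, &dropped))
		goto fail;
	if (dropped && !revalidate(addr))	/* lookup may be stale now */
		goto fail;
	pthread_rwlock_unlock(&map_lock);

	/* Phase 2: take the exclusive lock for the collapse itself. The
	 * map may have changed between unlock and wrlock: revalidate. */
	pthread_rwlock_wrlock(&map_lock);
	if (!revalidate(addr))
		goto fail;
	/* ... the actual collapse work would run here ... */
	pthread_rwlock_unlock(&map_lock);
	return true;

fail:
	pthread_rwlock_unlock(&map_lock);
	return false;
}

int main(void)
{
	printf("collapse: %s\n", collapse(0x150000) ? "ok" : "failed");
	return 0;
}

The point mirrored from the patch is that the result of a lookup such
as find_vma() is only trusted for as long as the lock has been held
continuously; every reacquisition forces a fresh revalidation.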