From patchwork Fri Feb 4 17:56:52 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12735410
Date: Fri, 04 Feb 2022 09:56:52 -0800
From: Andrew Morton
To: ziy@nvidia.com, will@kernel.org, weixugc@google.com, songmuchun@bytedance.com,
 rppt@kernel.org, rientjes@google.com, pjt@google.com, mingo@redhat.com,
 jirislaby@kernel.org, hughd@google.com, hpa@zytor.com, gthelen@google.com,
 dave.hansen@linux.intel.com, anshuman.khandual@arm.com,
 aneesh.kumar@linux.ibm.com, pasha.tatashin@soleen.com,
 akpm@linux-foundation.org, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org
In-Reply-To: <20220203204836.88dcebe504f440686cc63a60@linux-foundation.org>
Subject: [patch 04/10] mm/khugepaged: unify collapse pmd clear, flush and free
Message-Id: <20220204175653.62B9BC004E1@smtp.kernel.org>

From: Pasha Tatashin
Subject: mm/khugepaged: unify collapse pmd clear, flush and free

Unify the code that flushes and clears the pmd entry and frees the PTE table
level into a new function, collapse_and_free_pmd().  This cleanup is useful
because the next patch adds another call to this function, to iterate through
the PTEs prior to freeing the level for the page table check.
Link: https://lkml.kernel.org/r/20220131203249.2832273-4-pasha.tatashin@soleen.com
Signed-off-by: Pasha Tatashin
Acked-by: David Rientjes
Cc: Aneesh Kumar K.V
Cc: Anshuman Khandual
Cc: Dave Hansen
Cc: Greg Thelen
Cc: H. Peter Anvin
Cc: Hugh Dickins
Cc: Ingo Molnar
Cc: Jiri Slaby
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Paul Turner
Cc: Wei Xu
Cc: Will Deacon
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

 mm/khugepaged.c |   34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

--- a/mm/khugepaged.c~mm-khugepaged-unify-collapse-pmd-clear-flush-and-free
+++ a/mm/khugepaged.c
@@ -1416,6 +1416,19 @@ static int khugepaged_add_pte_mapped_thp
 	return 0;
 }
 
+static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
+				  unsigned long addr, pmd_t *pmdp)
+{
+	spinlock_t *ptl;
+	pmd_t pmd;
+
+	ptl = pmd_lock(vma->vm_mm, pmdp);
+	pmd = pmdp_collapse_flush(vma, addr, pmdp);
+	spin_unlock(ptl);
+	mm_dec_nr_ptes(mm);
+	pte_free(mm, pmd_pgtable(pmd));
+}
+
 /**
  * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at
  * address haddr.
@@ -1433,7 +1446,7 @@ void collapse_pte_mapped_thp(struct mm_s
 	struct vm_area_struct *vma = find_vma(mm, haddr);
 	struct page *hpage;
 	pte_t *start_pte, *pte;
-	pmd_t *pmd, _pmd;
+	pmd_t *pmd;
 	spinlock_t *ptl;
 	int count = 0;
 	int i;
@@ -1509,12 +1522,7 @@ void collapse_pte_mapped_thp(struct mm_s
 	}
 
 	/* step 4: collapse pmd */
-	ptl = pmd_lock(vma->vm_mm, pmd);
-	_pmd = pmdp_collapse_flush(vma, haddr, pmd);
-	spin_unlock(ptl);
-	mm_dec_nr_ptes(mm);
-	pte_free(mm, pmd_pgtable(_pmd));
-
+	collapse_and_free_pmd(mm, vma, haddr, pmd);
 drop_hpage:
 	unlock_page(hpage);
 	put_page(hpage);
@@ -1552,7 +1560,7 @@ static void retract_page_tables(struct a
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long addr;
-	pmd_t *pmd, _pmd;
+	pmd_t *pmd;
 
 	i_mmap_lock_write(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
@@ -1591,14 +1599,8 @@ static void retract_page_tables(struct a
 		 * reverse order.  Trylock is a way to avoid deadlock.
 		 */
		if (mmap_write_trylock(mm)) {
-			if (!khugepaged_test_exit(mm)) {
-				spinlock_t *ptl = pmd_lock(mm, pmd);
-				/* assume page table is clear */
-				_pmd = pmdp_collapse_flush(vma, addr, pmd);
-				spin_unlock(ptl);
-				mm_dec_nr_ptes(mm);
-				pte_free(mm, pmd_pgtable(_pmd));
-			}
+			if (!khugepaged_test_exit(mm))
+				collapse_and_free_pmd(mm, vma, addr, pmd);
 			mmap_write_unlock(mm);
 		} else {
 			/* Try again later */