From patchwork Wed Sep 30 22:21:20 2020
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 11810407
Date: Wed, 30 Sep 2020 22:21:20 +0000
In-Reply-To: <20200930222130.4175584-1-kaleshsingh@google.com>
Message-Id: <20200930222130.4175584-4-kaleshsingh@google.com>
References: <20200930222130.4175584-1-kaleshsingh@google.com>
Subject: [PATCH 3/5] mm: Speedup mremap on 1GB or larger regions
From: Kalesh Singh <kaleshsingh@google.com>
Cc: surenb@google.com, minchan@google.com, joelaf@google.com,
    lokeshgidra@google.com, kaleshsingh@google.com, kernel-team@android.com,
    Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Andrew Morton,
    Shuah Khan, Kees Cook, "Aneesh Kumar K.V", Peter Zijlstra,
    Arnd Bergmann, Sami Tolvanen, Masahiro Yamada, Frederic Weisbecker,
    Krzysztof Kozlowski, Hassan Naveed, Christian Brauner, Stephen Boyd,
    Mark Rutland, Mark Brown, Mike Rapoport, Gavin Shan,
    Chris von Recklinghausen, Jia He, Zhenyu Ye, John Hubbard,
    Sandipan Das, Dave Hansen, Ralph Campbell, Ram Pai,
    "Kirill A. Shutemov", William Kucharski, Brian Geffon, Mina Almasry,
    Masami Hiramatsu, SeongJae Park, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org

Android needs to move large memory regions for garbage collection.
Optimize mremap for >= 1GB-sized regions by moving at the PUD/PGD level
if the source and destination addresses are PUD-aligned. For
CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves PGD
entries, since the PUD entry is "folded back" onto the PGD entry.

Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
isn't supported/tested can turn this off by not selecting the config.
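As an aside (editor's sketch, not part of the patch): the userspace move this
targets looks roughly like the snippet below. Only the mmap()/mremap() flags
are standard Linux API; reserve_aligned_gb(), the 2x over-allocation trick,
and the concrete sizes are illustrative assumptions, and the "one PUD entry
instead of 512 PMD entries" figure assumes 4K pages.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define GB (1UL << 30)

/* Carve a 1GB-aligned window out of an oversized anonymous reservation. */
static char *reserve_aligned_gb(int prot)
{
	char *p = mmap(NULL, 2 * GB, prot,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	if (p == MAP_FAILED)
		return NULL;
	return (char *)(((unsigned long)p + GB - 1) & ~(GB - 1));
}

int main(void)
{
	char *src = reserve_aligned_gb(PROT_READ | PROT_WRITE);
	char *dst = reserve_aligned_gb(PROT_NONE);	/* placeholder target */

	if (!src || !dst)
		return 1;

	/* Touch a few pages so there are page tables to move. */
	for (unsigned long off = 0; off < GB; off += 2UL << 20)
		src[off] = 1;

	/*
	 * Source and destination are both PUD-aligned and the length is a
	 * multiple of PUD_SIZE, so a kernel with HAVE_MOVE_PUD can relink
	 * one PUD entry per 1GB instead of 512 PMD entries.
	 */
	void *moved = mremap(src, GB, GB, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	if (moved == MAP_FAILED) {
		perror("mremap");
		return 1;
	}
	printf("moved %p -> %p\n", (void *)src, moved);
	return 0;
}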
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/Kconfig                     |   7 +
 arch/arm64/include/asm/pgtable.h |   1 +
 mm/mremap.c                      | 211 ++++++++++++++++++++++++++-----
 3 files changed, 189 insertions(+), 30 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index af14a567b493..5eabaa00bf9b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -602,6 +602,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call
 	  enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PUD
+	bool
+	help
+	  Architectures that select this are able to move page tables at the
+	  PUD level. If there are only 3 page table levels, the move effectively
+	  happens at the PGD level.
+
 config HAVE_MOVE_PMD
 	bool
 	help
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d5d3fbe73953..8848125e3024 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -415,6 +415,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
+#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..a5a1440bd366 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -249,14 +249,167 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 
 	return true;
 }
+#else
+static inline bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
+{
+	return false;
+}
 #endif
 
+#ifdef CONFIG_HAVE_MOVE_PUD
+static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_none_or_clear_bad(pgd))
+		return NULL;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none_or_clear_bad(p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none_or_clear_bad(pud))
+		return NULL;
+
+	return pud;
+}
+
+static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
+			    unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	pgd = pgd_offset(mm, addr);
+	p4d = p4d_alloc(mm, pgd, addr);
+	if (!p4d)
+		return NULL;
+	pud = pud_alloc(mm, p4d, addr);
+	if (!pud)
+		return NULL;
+
+	return pud;
+}
+
+static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pud_t pud;
+
+	/*
+	 * The destination pud shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_lock prevents deadlock.
+	 */
+	old_ptl = pud_lock(vma->vm_mm, old_pud);
+	new_ptl = pud_lockptr(mm, new_pud);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pud */
+	pud = *old_pud;
+	pud_clear(old_pud);
+
+	VM_BUG_ON(!pud_none(*new_pud));
+
+	/* Set the new pud */
+	set_pud_at(mm, new_addr, new_pud, pud);
+	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+#else
+static inline bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	return false;
+}
+#endif
+
+enum pgt_entry {
+	NORMAL_PMD,
+	HPAGE_PMD,
+	NORMAL_PUD,
+};
+
+/*
+ * Returns an extent of the corresponding size for the pgt_entry specified if valid.
+ * Else returns a smaller extent bounded by the end of the source and destination
+ * pgt_entry. Returns 0 if an invalid pgt_entry is specified.
+ */
+static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
+			unsigned long old_end, unsigned long new_addr)
+{
+	unsigned long next, extent, mask, size;
+
+	if (entry == NORMAL_PMD || entry == HPAGE_PMD) {
+		mask = PMD_MASK;
+		size = PMD_SIZE;
+	} else if (entry == NORMAL_PUD) {
+		mask = PUD_MASK;
+		size = PUD_SIZE;
+	} else
+		return 0;
+
+	next = (old_addr + size) & mask;
+	/* even if next overflowed, extent below will be ok */
+	extent = (next > old_end) ? old_end - old_addr : next - old_addr;
+	next = (new_addr + size) & mask;
+	if (extent > next - new_addr)
+		extent = next - new_addr;
+	return extent;
+}
+
+/*
+ * Attempts to speedup the move by moving entry at the level corresponding to
+ * pgt_entry. Returns true if the move was successful, else false.
+ */
+static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
+			unsigned long old_addr, unsigned long new_addr, void *old_entry,
+			void *new_entry, bool need_rmap_locks)
+{
+	bool moved = false;
+
+	/* See comment in move_ptes() */
+	if (need_rmap_locks)
+		take_rmap_locks(vma);
+	if (entry == NORMAL_PMD)
+		moved = move_normal_pmd(vma, old_addr, new_addr, old_entry, new_entry);
+	else if (entry == NORMAL_PUD)
+		moved = move_normal_pud(vma, old_addr, new_addr, old_entry, new_entry);
+	else if (entry == HPAGE_PMD)
+		moved = move_huge_pmd(vma, old_addr, new_addr, old_entry, new_entry);
+	else
+		WARN_ON_ONCE(1);
+	if (need_rmap_locks)
+		drop_rmap_locks(vma);
+
+	return moved;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
 		bool need_rmap_locks)
 {
-	unsigned long extent, next, old_end;
+	unsigned long extent, old_end;
 	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
 
@@ -269,14 +422,27 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
-		next = (old_addr + PMD_SIZE) & PMD_MASK;
-		/* even if next overflowed, extent below will be ok */
-		extent = next - old_addr;
-		if (extent > old_end - old_addr)
-			extent = old_end - old_addr;
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
-		if (extent > next - new_addr)
-			extent = next - new_addr;
+#ifdef CONFIG_HAVE_MOVE_PUD
+		/*
+		 * If extent is PUD-sized try to speed up the move by moving at the
+		 * PUD level if possible.
+		 */
+		extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
+		if (extent == PUD_SIZE) {
+			pud_t *old_pud, *new_pud;
+
+			old_pud = get_old_pud(vma->vm_mm, old_addr);
+			if (!old_pud)
+				continue;
+			new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
+			if (!new_pud)
+				break;
+			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
+					   old_pud, new_pud, need_rmap_locks))
+				continue;
+		}
+#endif
+		extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
@@ -284,18 +450,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (!new_pmd)
 			break;
 		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
-			if (extent == HPAGE_PMD_SIZE) {
-				bool moved;
-				/* See comment in move_ptes() */
-				if (need_rmap_locks)
-					take_rmap_locks(vma);
-				moved = move_huge_pmd(vma, old_addr, new_addr,
-						      old_pmd, new_pmd);
-				if (need_rmap_locks)
-					drop_rmap_locks(vma);
-				if (moved)
-					continue;
-			}
+			if (extent == HPAGE_PMD_SIZE &&
+			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd,
+					   new_pmd, need_rmap_locks))
+				continue;
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
@@ -305,15 +463,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			 * If the extent is PMD-sized, try to speed the move by
 			 * moving at the PMD level if possible.
 			 */
-			bool moved;
-
-			if (need_rmap_locks)
-				take_rmap_locks(vma);
-			moved = move_normal_pmd(vma, old_addr, new_addr,
-						old_pmd, new_pmd);
-			if (need_rmap_locks)
-				drop_rmap_locks(vma);
-			if (moved)
+			if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr, old_pmd,
+					   new_pmd, need_rmap_locks))
 				continue;
 #endif
 		}
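
For reference (editor's illustration, appended after the patch, not part of
it): the extent computation in get_extent() above can be sanity-checked in
userspace. The constants below assume 4K pages (PMD_SIZE = 2MB, PUD_SIZE =
1GB) and the addresses are made up; extent() simply mirrors the patch's
arithmetic.

#include <stdio.h>

#define PMD_SIZE (2UL << 20)
#define PMD_MASK (~(PMD_SIZE - 1))
#define PUD_SIZE (1UL << 30)
#define PUD_MASK (~(PUD_SIZE - 1))

/* Same arithmetic as get_extent(): the step is bounded by the next
 * size-aligned boundary after old_addr, after new_addr, and by old_end. */
static unsigned long extent(unsigned long old_addr, unsigned long old_end,
			    unsigned long new_addr, unsigned long size,
			    unsigned long mask)
{
	unsigned long next, ext;

	next = (old_addr + size) & mask;
	ext = (next > old_end) ? old_end - old_addr : next - old_addr;
	next = (new_addr + size) & mask;
	if (ext > next - new_addr)
		ext = next - new_addr;
	return ext;
}

int main(void)
{
	/* Hypothetical 4GB region; source and destination both 1GB-aligned. */
	unsigned long old_addr = 0x100000000UL, new_addr = 0x400000000UL;
	unsigned long old_end = old_addr + (4UL << 30);

	/* Both sides PUD-aligned: prints 0x40000000, so the PUD path is taken. */
	printf("PUD extent: %#lx\n",
	       extent(old_addr, old_end, new_addr, PUD_SIZE, PUD_MASK));

	/* A destination misaligned by 2MB shrinks the PUD extent below
	 * PUD_SIZE (prints 0x3fe00000), so move_page_tables() falls back
	 * to PMD-sized steps for it. */
	printf("misaligned PUD extent: %#lx\n",
	       extent(old_addr, old_end, new_addr + PMD_SIZE, PUD_SIZE, PUD_MASK));
	return 0;
}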