From patchwork Sat Oct 13 01:31:58 2018
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 10639827
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, "Joel Fernandes (Google)", minchan@kernel.org,
    pantin@google.com, hughd@google.com, lokeshgidra@google.com,
    dancol@google.com, mhocko@kernel.org, kirill@shutemov.name,
    akpm@linux-foundation.org, Andrey Ryabinin, Andy Lutomirski,
    anton.ivanov@kot-begemot.co.uk, Borislav Petkov, Catalin Marinas,
    Chris Zankel, Dave Hansen, "David S. Miller",
    elfring@users.sourceforge.net, Fenghua Yu, Geert Uytterhoeven,
    Guan Xuetao, Helge Deller, Ingo Molnar, "James E.J. Bottomley",
    Jeff Dike, Jonas Bonn, Julia Lawall, kasan-dev@googlegroups.com,
    kvmarm@lists.cs.columbia.edu, Ley Foon Tan, linux-alpha@vger.kernel.org,
    linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-mips@linux-mips.org,
    linux-mm@kvack.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
    linux-xtensa@linux-xtensa.org, Max Filippov,
    nios2-dev@lists.rocketboards.org, Peter Zijlstra, Richard Weinberger,
    Rich Felker, Sam Creasey, sparclinux@vger.kernel.org, Stafford Horne,
    Stefan Kristiansson, Thomas Gleixner, Tony Luck, Will Deacon,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Yoshinori Sato
Subject: [PATCH 2/4] mm: speed up mremap by 500x on large regions (v2)
Date: Fri, 12 Oct 2018 18:31:58 -0700
Message-Id: <20181013013200.206928-3-joel@joelfernandes.org>
In-Reply-To: <20181013013200.206928-1-joel@joelfernandes.org>
References: <20181013013200.206928-1-joel@joelfernandes.org>
X-Mailer: git-send-email 2.19.0.605.g01d371f741-goog

Android needs to mremap large regions of memory during memory-management
operations. The mremap system call can be really slow if THP is not
enabled: the bottleneck is move_page_tables(), which copies one PTE at a
time, and that can be really slow across a large map. Turning on THP may
not be a viable option, and is not for us.

This patch speeds up mremap on non-THP systems by moving entries at the
PMD level whenever possible. The speedup is roughly three orders of
magnitude: on a 1GB mremap, completion time drops from 160-250
milliseconds to 380-400 microseconds.

Before:
Total mremap time for 1GB data: 242321014 nanoseconds.
Total mremap time for 1GB data: 196842467 nanoseconds.
Total mremap time for 1GB data: 167051162 nanoseconds.

After:
Total mremap time for 1GB data: 385781 nanoseconds.
Total mremap time for 1GB data: 388959 nanoseconds.
Total mremap time for 1GB data: 402813 nanoseconds.

If THP is enabled, the optimization is skipped. I also flush the TLB
every time we do this optimization, since I could not find a way to
determine whether the low-level PTEs are dirty. The measured cost of
doing so is small compared to the improvement, on both x86-64 and arm64.

Cc: minchan@kernel.org
Cc: pantin@google.com
Cc: hughd@google.com
Cc: lokeshgidra@google.com
Cc: dancol@google.com
Cc: mhocko@kernel.org
Cc: kirill@shutemov.name
Cc: akpm@linux-foundation.org
Cc: kernel-team@android.com
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
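Note on the numbers above: they come from a user-space timing test. A
minimal sketch of such a test follows. It is not part of this patch;
the PROT_NONE destination reservation and the use of MREMAP_FIXED are
assumptions about the test setup, not taken from the patch itself.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

#define SIZE (1UL << 30)	/* 1GB */

int main(void)
{
	struct timespec start, end;

	/* Source: map and fault in 1GB so the page tables are populated. */
	void *src = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* Destination: reserve address space to move the mapping into. */
	void *dst = mmap(NULL, SIZE, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED || dst == MAP_FAILED)
		return 1;
	memset(src, 1, SIZE);

	clock_gettime(CLOCK_MONOTONIC, &start);
	/* MREMAP_FIXED forces a move, exercising move_page_tables(). */
	if (mremap(src, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED,
		   dst) == MAP_FAILED)
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("Total mremap time for 1GB data: %ld nanoseconds.\n",
	       (end.tv_sec - start.tv_sec) * 1000000000L +
	       (end.tv_nsec - start.tv_nsec));
	return 0;
}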
 arch/Kconfig |  5 ++++
 mm/mremap.c  | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 6801123932a5..9724fe39884f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -518,6 +518,11 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call
 	  enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PMD
+	bool
+	help
+	  Archs that select this are able to move page tables at the PMD level.
+
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	bool
diff --git a/mm/mremap.c b/mm/mremap.c
index 9e68a02a52b1..2fd163cff406 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		  unsigned long new_addr, unsigned long old_end,
+		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
+	    || old_end - old_addr < PMD_SIZE)
+		return false;
+
+	/*
+	 * The destination pmd shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON(!pmd_none(*new_pmd)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_sem prevents deadlock.
+	 */
+	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
+	if (old_ptl) {
+		pmd_t pmd;
+
+		new_ptl = pmd_lockptr(mm, new_pmd);
+		if (new_ptl != old_ptl)
+			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+		/* Clear the pmd */
+		pmd = *old_pmd;
+		pmd_clear(old_pmd);
+
+		VM_BUG_ON(!pmd_none(*new_pmd));
+
+		/* Set the new pmd */
+		set_pmd_at(mm, new_addr, new_pmd, pmd);
+		if (new_ptl != old_ptl)
+			spin_unlock(new_ptl);
+		spin_unlock(old_ptl);
+
+		*need_flush = true;
+		return true;
+	}
+	return false;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
@@ -239,7 +287,24 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
+		} else if (extent == PMD_SIZE && IS_ENABLED(CONFIG_HAVE_MOVE_PMD)) {
+			/*
+			 * If the extent is PMD-sized, try to speed the move by
+			 * moving at the PMD level if possible.
+			 */
+			bool moved;
+
+			if (need_rmap_locks)
+				take_rmap_locks(vma);
+			moved = move_normal_pmd(vma, old_addr, new_addr,
+					old_end, old_pmd, new_pmd,
+					&need_flush);
+			if (need_rmap_locks)
+				drop_rmap_locks(vma);
+			if (moved)
+				continue;
 		}
+
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;
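
A note on when the new fast path applies: move_normal_pmd() above bails
out unless both the old and new addresses are PMD-aligned and at least
one full PMD of extent remains to be moved. The stand-alone sketch below
illustrates just that gating check. It is not from the patch, and
PMD_SIZE/PMD_MASK are hard-coded here to the 2MB values used on x86-64
with 4K pages; they differ on other configurations.

#include <stdbool.h>
#include <stdio.h>

#define PMD_SIZE (1UL << 21)	/* 2MB on x86-64 with 4K pages */
#define PMD_MASK (~(PMD_SIZE - 1))

static bool pmd_move_possible(unsigned long old_addr, unsigned long new_addr,
			      unsigned long old_end)
{
	/*
	 * Same condition as in move_normal_pmd(): both addresses must be
	 * PMD-aligned and at least one full PMD must remain to be moved.
	 */
	return !((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK) ||
		 old_end - old_addr < PMD_SIZE);
}

int main(void)
{
	/* 2MB-aligned source and destination, 1GB extent: fast path applies. */
	printf("%d\n", pmd_move_possible(0x200000UL, 0x40000000UL,
					 0x200000UL + (1UL << 30)));
	/* Source misaligned by 4K: falls back to per-PTE copying. */
	printf("%d\n", pmd_move_possible(0x201000UL, 0x40000000UL,
					 0x201000UL + (1UL << 30)));
	return 0;
}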