From patchwork Tue Oct 9 20:14:00 2018
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 10633247
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel-team@android.com, Joel Fernandes, minchan@google.com,
    hughd@google.com, lokeshgidra@google.com, Andrew Morton,
    Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne,
    Thomas Gleixner
Subject: [PATCH] mm: Speed up mremap on large regions
Date: Tue, 9 Oct 2018 13:14:00 -0700
Message-Id: <20181009201400.168705-1-joel@joelfernandes.org>
X-Mailer: git-send-email 2.19.0.605.g01d371f741-goog

Android needs to mremap large regions of memory during memory management
related operations. The mremap system call can be really slow if THP is
not enabled. The bottleneck is move_page_tables, which copies one pte at
a time, and can therefore be very slow across a large mapping.
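To make the scenario concrete, the timings quoted below can be reproduced
with a userspace harness along the following lines. This is only a sketch
under stated assumptions: the 2MB-aligned anonymous 1GB mappings, the
MREMAP_MAYMOVE | MREMAP_FIXED move and the output format are illustrative
choices, not the exact benchmark used here.

/*
 * Sketch of a 1GB mremap timing harness (illustration only; the aligned
 * mappings, flags and output format are assumptions).
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

#define SIZE	(1UL << 30)	/* 1GB */
#define ALIGN	(2UL << 20)	/* assume 2MB PMDs */

/* mmap a window and return a 2MB-aligned address inside it, so both the
 * source and destination of the move can stay PMD-aligned. */
static void *alloc_aligned(int prot)
{
	void *p = mmap(NULL, SIZE + ALIGN, prot,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	return (void *)(((unsigned long)p + ALIGN - 1) & ~(ALIGN - 1));
}

int main(void)
{
	void *src = alloc_aligned(PROT_READ | PROT_WRITE);
	void *dst = alloc_aligned(PROT_NONE);	/* reservation to move onto */
	struct timespec start, end;
	unsigned long long ns;

	memset(src, 1, SIZE);			/* fault in all the ptes */

	clock_gettime(CLOCK_MONOTONIC, &start);
	if (mremap(src, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED,
		   dst) == MAP_FAILED) {
		perror("mremap");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000ULL +
	     (end.tv_nsec - start.tv_nsec);
	printf("Total mremap time for 1GB data: %llu nanoseconds.\n", ns);
	return 0;
}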
Turning on THP may not be a viable option, and is not one for us. This
patch speeds up the performance for non-THP systems by copying at the
PMD level when possible. The speed-up is nearly three orders of
magnitude: on a 1GB mremap, the completion time drops from 160-250
milliseconds to 380-400 microseconds.

Before:
Total mremap time for 1GB data: 242321014 nanoseconds.
Total mremap time for 1GB data: 196842467 nanoseconds.
Total mremap time for 1GB data: 167051162 nanoseconds.

After:
Total mremap time for 1GB data: 385781 nanoseconds.
Total mremap time for 1GB data: 388959 nanoseconds.
Total mremap time for 1GB data: 402813 nanoseconds.

In case THP is enabled, the optimization is skipped. I also flush the
TLB every time we do this optimization, since I couldn't find a way to
determine whether the low-level PTEs are dirty. The cost of doing so is
small compared to the improvement, on both x86-64 and arm64.

Cc: minchan@google.com
Cc: hughd@google.com
Cc: lokeshgidra@google.com
Cc: kernel-team@android.com
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
 mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/mm/mremap.c b/mm/mremap.c
index 5c2e18505f75..68ddc9e9dfde 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		  unsigned long new_addr, unsigned long old_end,
+		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
+	    || old_end - old_addr < PMD_SIZE)
+		return false;
+
+	/*
+	 * The destination pmd shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON(!pmd_none(*new_pmd)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_sem prevents deadlock.
+	 */
+	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
+	if (old_ptl) {
+		pmd_t pmd;
+
+		new_ptl = pmd_lockptr(mm, new_pmd);
+		if (new_ptl != old_ptl)
+			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+		/* Clear the pmd */
+		pmd = *old_pmd;
+		pmd_clear(old_pmd);
+
+		VM_BUG_ON(!pmd_none(*new_pmd));
+
+		/* Set the new pmd */
+		set_pmd_at(mm, new_addr, new_pmd, pmd);
+		if (new_ptl != old_ptl)
+			spin_unlock(new_ptl);
+		spin_unlock(old_ptl);
+
+		*need_flush = true;
+		return true;
+	}
+	return false;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
@@ -239,7 +287,21 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
+		} else if (extent == PMD_SIZE) {
+			bool moved;
+
+			/* See comment in move_ptes() */
+			if (need_rmap_locks)
+				take_rmap_locks(vma);
+			moved = move_normal_pmd(vma, old_addr, new_addr,
+					old_end, old_pmd, new_pmd,
+					&need_flush);
+			if (need_rmap_locks)
+				drop_rmap_locks(vma);
+			if (moved)
+				continue;
 		}
+
 		if (pte_alloc(new_vma->vm_mm, new_pmd, new_addr))
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;
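For reference, the move_normal_pmd() fast path above only triggers when
both the source and destination addresses are PMD-aligned and at least
PMD_SIZE remains to be moved; everything else still takes the pte-by-pte
copy. A hypothetical userspace mirror of that precondition (assuming 2MB
PMDs, as on x86-64 and arm64 with 4K pages) looks like this:

#include <stdbool.h>
#include <stdio.h>

/* Assumed 2MB PMD size; not taken from the kernel headers. */
#define PMD_SIZE	(2UL << 20)
#define PMD_MASK	(~(PMD_SIZE - 1))

/* Mirrors the check at the top of move_normal_pmd(): both addresses must
 * be PMD-aligned and a full PMD's worth of address space must remain. */
static bool pmd_fast_path_possible(unsigned long old_addr,
				   unsigned long new_addr,
				   unsigned long old_end)
{
	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
	    || old_end - old_addr < PMD_SIZE)
		return false;
	return true;
}

int main(void)
{
	/* 2MB-aligned source and destination, 1GB extent: eligible. */
	printf("%d\n", pmd_fast_path_possible(0x40000000UL, 0x80000000UL,
					      0x40000000UL + (1UL << 30)));
	/* Source only 4K-aligned: falls back to the pte-by-pte copy. */
	printf("%d\n", pmd_fast_path_possible(0x40001000UL, 0x80000000UL,
					      0x40001000UL + (1UL << 30)));
	return 0;
}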