Message ID | 20201005154017.474722-3-kaleshsingh@google.com (mailing list archive) |
---|---|
State | New, archived |
From | Kalesh Singh <kaleshsingh@google.com> |
Date | Mon, 5 Oct 2020 15:40:05 +0000 |
Subject | [PATCH v3 2/5] arm64: mremap speedup - Enable HAVE_MOVE_PMD |
Series | Speed up mremap on large regions |
HAVE_MOVE_PMD enables remapping pages at the PMD level when both the source and destination addresses are PMD-aligned. HAVE_MOVE_PMD is already enabled on x86. The original patch [1] that introduced this config did not enable it on arm64 at the time because of performance issues with flushing the TLB on every PMD move. These issues have since been addressed in more recent releases by improvements to the arm64 TLB invalidation and core mmu_gather code, as Will Deacon noted in [2]. The data below shows an approximately 8x performance improvement when HAVE_MOVE_PMD is enabled on arm64.

--------- Test Results ----------

The following results were obtained on an arm64 device running a 5.4 kernel, by remapping a PMD-aligned, 1 GB region to a PMD-aligned destination. The results of 10 iterations of the test are given below; all times are in nanoseconds.

    Control        HAVE_MOVE_PMD
    9220833        1247761
    9002552        1219896
    9254115        1094792
    8725885        1227760
    9308646        1043698
    9001667        1101771
    8793385        1159896
    8774636        1143594
    9553125        1025833
    9374010        1078125

    9100885.4      1134312.6    <-- Mean time in nanoseconds

Total mremap time for a 1 GB PMD-aligned region drops from ~9.1 milliseconds to ~1.1 milliseconds (an ~8x speedup). A sketch of this kind of userspace timing test is included after the diff below.

[1] https://lore.kernel.org/r/20181108181201.88826-3-joelaf@google.com
[2] https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg140837.html

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbee..844d089668e3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -121,6 +121,7 @@ config ARM64
 	select GENERIC_VDSO_TIME_NS
 	select HANDLE_DOMAIN_IRQ
 	select HARDIRQS_SW_RESEND
+	select HAVE_MOVE_PMD
 	select HAVE_PCI
 	select HAVE_ACPI_APEI if (ACPI && EFI)
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
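As a rough illustration of the kind of userspace timing test described in the commit message, here is a minimal sketch, assuming 4 KiB pages (so one PMD maps 2 MiB), an anonymous private 1 GiB mapping, and a single timed mremap() to a PMD-aligned destination. The helper name map_pmd_aligned and the padding-and-round-up alignment trick are illustrative assumptions, not the actual benchmark that produced the numbers above.

```c
/*
 * Minimal sketch of an mremap() timing test, NOT the benchmark used for the
 * numbers quoted above. Sizes and helper names are illustrative assumptions
 * (4 KiB pages, so a PMD maps 2 MiB).
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define PMD_SIZE	(2UL << 20)	/* assumed PMD coverage: 2 MiB with 4 KiB pages */
#define REGION_SIZE	(1UL << 30)	/* 1 GiB test region, as in the results above */

/* Map a little extra and round up so the returned address is PMD-aligned. */
static void *map_pmd_aligned(size_t len)
{
	char *raw = mmap(NULL, len + PMD_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		exit(EXIT_FAILURE);
	}
	return (void *)(((uintptr_t)raw + PMD_SIZE - 1) & ~(PMD_SIZE - 1));
}

int main(void)
{
	void *src = map_pmd_aligned(REGION_SIZE);
	void *dst = map_pmd_aligned(REGION_SIZE);
	struct timespec start, end;
	void *moved;
	long ns;

	/* Fault the pages in so there are real page tables to move. */
	memset(src, 1, REGION_SIZE);

	/* Release the placeholder at dst; mremap() will claim that range. */
	munmap(dst, REGION_SIZE);

	clock_gettime(CLOCK_MONOTONIC, &start);
	moved = mremap(src, REGION_SIZE, REGION_SIZE,
		       MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	clock_gettime(CLOCK_MONOTONIC, &end);

	if (moved == MAP_FAILED) {
		perror("mremap");
		return EXIT_FAILURE;
	}

	ns = (end.tv_sec - start.tv_sec) * 1000000000L +
	     (end.tv_nsec - start.tv_nsec);
	printf("mremap() of a 1 GiB PMD-aligned region took %ld ns\n", ns);
	return 0;
}
```

On a kernel with HAVE_MOVE_PMD enabled, both the source and destination above satisfy the PMD-alignment requirement, so the move can proceed PMD by PMD instead of copying individual PTEs; a destination that is only page-aligned, not PMD-aligned, would not qualify for the PMD-level move.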