From patchwork Wed Oct 14 00:53:06 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 14 Oct 2020 00:53:06 +0000
Subject: [PATCH v4 1/5] kselftests: vm: Add mremap tests
Message-Id: <20201014005320.2233162-2-kaleshsingh@google.com>
In-Reply-To: <20201014005320.2233162-1-kaleshsingh@google.com>
References: <20201014005320.2233162-1-kaleshsingh@google.com>

Test mremap on regions of various sizes and alignments and validate
data after remapping. Also provide the total time for remapping each
region, which is useful for performance comparison of the mremap
optimizations that move pages at the PMD/PUD levels if HAVE_MOVE_PMD
and/or HAVE_MOVE_PUD are enabled.

Signed-off-by: Kalesh Singh
Reviewed-by: John Hubbard
Cc: Shuah Khan
Cc: Andrew Morton
Cc: Kirill A. Shutemov
---
Changes in v2:
- Reduce test time by only validating a certain threshold of the
  remapped region (4MB by default).
  The -t flag can be used to set a custom threshold in MB, or no
  threshold by passing 0 (-t0). mremap time is not provided in stdout
  for only partially validated regions; this time is only applicable
  for comparison if the entire mapped region was faulted in.
- Use a random pattern for validating the remapped region. The -p
  flag can be used to run the tests with a specified seed for the
  random pattern.
- Print test configs (threshold_mb and pattern_seed) to stdout.
- Remove MAKE_SIMPLE_TEST macro.
- Define named flags instead of 0 / 1.
- Add comments for destination address' align_mask and offset.

Changes in v3:
- Remove unused PATTERN_SIZE definition.
- Make lines 80 cols or less where they don't need to be longer.
- Add John Hubbard's Reviewed-by tag.

 tools/testing/selftests/vm/.gitignore    |   1 +
 tools/testing/selftests/vm/Makefile      |   1 +
 tools/testing/selftests/vm/mremap_test.c | 344 +++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests   |  11 +
 4 files changed, 357 insertions(+)
 create mode 100644 tools/testing/selftests/vm/mremap_test.c

diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 849e8226395a..b3a183c36cb5 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -8,6 +8,7 @@ thuge-gen
 compaction_test
 mlock2-tests
 mremap_dontunmap
+mremap_test
 on-fault-limit
 transhuge-stress
 protection_keys
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index a9026706d597..f044808b45fa 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -16,6 +16,7 @@ TEST_GEN_FILES += map_populate
 TEST_GEN_FILES += mlock-random-test
 TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mremap_dontunmap
+TEST_GEN_FILES += mremap_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += thuge-gen
 TEST_GEN_FILES += transhuge-stress
diff --git a/tools/testing/selftests/vm/mremap_test.c b/tools/testing/selftests/vm/mremap_test.c
new file mode 100644
index 000000000000..9c391d016922
--- /dev/null
+++ b/tools/testing/selftests/vm/mremap_test.c
@@ -0,0 +1,344 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Google LLC
+ */
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <time.h>
+
+#include "../kselftest.h"
+
+#define EXPECT_SUCCESS 0
+#define EXPECT_FAILURE 1
+#define NON_OVERLAPPING 0
+#define OVERLAPPING 1
+#define NS_PER_SEC 1000000000ULL
+#define VALIDATION_DEFAULT_THRESHOLD 4  /* 4MB */
+#define VALIDATION_NO_THRESHOLD 0       /* Verify the entire region */
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
+
+struct config {
+        unsigned long long src_alignment;
+        unsigned long long dest_alignment;
+        unsigned long long region_size;
+        int overlapping;
+};
+
+struct test {
+        const char *name;
+        struct config config;
+        int expect_failure;
+};
+
+enum {
+        _1KB = 1ULL << 10,      /* 1KB -> not page aligned */
+        _4KB = 4ULL << 10,
+        _8KB = 8ULL << 10,
+        _1MB = 1ULL << 20,
+        _2MB = 2ULL << 20,
+        _4MB = 4ULL << 20,
+        _1GB = 1ULL << 30,
+        _2GB = 2ULL << 30,
+        PTE = _4KB,
+        PMD = _2MB,
+        PUD = _1GB,
+};
+
+#define MAKE_TEST(source_align, destination_align, size,        \
+                  overlaps, should_fail, test_name)             \
+{                                                               \
+        .name = test_name,                                      \
+        .config = {                                             \
+                .src_alignment = source_align,                  \
+                .dest_alignment = destination_align,            \
+                .region_size = size,                            \
+                .overlapping = overlaps,                        \
+        },                                                      \
+        .expect_failure = should_fail                           \
+}
+
+/*
+ * Returns the start address of the mapping on success, else returns
+ * NULL on failure.
+ */
+static void *get_source_mapping(struct config c)
+{
+        unsigned long long addr = 0ULL;
+        void *src_addr = NULL;
+retry:
+        addr += c.src_alignment;
+        src_addr = mmap((void *) addr, c.region_size, PROT_READ | PROT_WRITE,
+                        MAP_FIXED | MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+        if (src_addr == MAP_FAILED) {
+                if (errno == EPERM)
+                        goto retry;
+                goto error;
+        }
+        /*
+         * Check that the address is aligned to the specified alignment.
+         * Addresses which have alignments that are multiples of that
+         * specified are not considered valid. For instance, 1GB address is
+         * 2MB-aligned, however it will not be considered valid for a
+         * requested alignment of 2MB. This is done to reduce coincidental
+         * alignment in the tests.
+         */
+        if (((unsigned long long) src_addr & (c.src_alignment - 1)) ||
+            !((unsigned long long) src_addr & c.src_alignment))
+                goto retry;
+
+        if (!src_addr)
+                goto error;
+
+        return src_addr;
+error:
+        ksft_print_msg("Failed to map source region: %s\n",
+                       strerror(errno));
+        return NULL;
+}
+
+/* Returns the time taken for the remap on success else returns -1. */
+static long long remap_region(struct config c, unsigned int threshold_mb,
+                              char pattern_seed)
+{
+        void *addr, *src_addr, *dest_addr;
+        unsigned long long i;
+        struct timespec t_start = {0, 0}, t_end = {0, 0};
+        long long start_ns, end_ns, align_mask, ret, offset;
+        unsigned long long threshold;
+
+        if (threshold_mb == VALIDATION_NO_THRESHOLD)
+                threshold = c.region_size;
+        else
+                threshold = MIN(threshold_mb * _1MB, c.region_size);
+
+        src_addr = get_source_mapping(c);
+        if (!src_addr) {
+                ret = -1;
+                goto out;
+        }
+
+        /* Set byte pattern */
+        srand(pattern_seed);
+        for (i = 0; i < threshold; i++)
+                memset((char *) src_addr + i, (char) rand(), 1);
+
+        /* Mask to zero out lower bits of address for alignment */
+        align_mask = ~(c.dest_alignment - 1);
+        /* Offset of destination address from the end of the source region */
+        offset = (c.overlapping) ? -c.dest_alignment : c.dest_alignment;
+        addr = (void *) (((unsigned long long) src_addr + c.region_size
+                          + offset) & align_mask);
+
+        /* See comment in get_source_mapping() */
+        if (!((unsigned long long) addr & c.dest_alignment))
+                addr = (void *) ((unsigned long long) addr | c.dest_alignment);
+
+        clock_gettime(CLOCK_MONOTONIC, &t_start);
+        dest_addr = mremap(src_addr, c.region_size, c.region_size,
+                           MREMAP_MAYMOVE|MREMAP_FIXED, (char *) addr);
+        clock_gettime(CLOCK_MONOTONIC, &t_end);
+
+        if (dest_addr == MAP_FAILED) {
+                ksft_print_msg("mremap failed: %s\n", strerror(errno));
+                ret = -1;
+                goto clean_up_src;
+        }
+
+        /* Verify byte pattern after remapping */
+        srand(pattern_seed);
+        for (i = 0; i < threshold; i++) {
+                char c = (char) rand();
+
+                if (((char *) dest_addr)[i] != c) {
+                        ksft_print_msg("Data after remap doesn't match at offset %llu\n",
+                                       i);
+                        ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff,
+                                       ((char *) dest_addr)[i] & 0xff);
+                        ret = -1;
+                        goto clean_up_dest;
+                }
+        }
+
+        start_ns = t_start.tv_sec * NS_PER_SEC + t_start.tv_nsec;
+        end_ns = t_end.tv_sec * NS_PER_SEC + t_end.tv_nsec;
+        ret = end_ns - start_ns;
+
+/*
+ * Since the destination address is specified using MREMAP_FIXED, subsequent
+ * mremap will unmap any previous mapping at the address range specified by
+ * dest_addr and region_size. This significantly affects the remap time of
+ * subsequent tests. So we clean up mappings after each test.
+ */
+clean_up_dest:
+        munmap(dest_addr, c.region_size);
+clean_up_src:
+        munmap(src_addr, c.region_size);
+out:
+        return ret;
+}
+
+static void run_mremap_test_case(struct test test_case, int *failures,
+                                 unsigned int threshold_mb,
+                                 unsigned int pattern_seed)
+{
+        long long remap_time = remap_region(test_case.config, threshold_mb,
+                                            pattern_seed);
+
+        if (remap_time < 0) {
+                if (test_case.expect_failure)
+                        ksft_test_result_pass("%s\n\tExpected mremap failure\n",
+                                              test_case.name);
+                else {
+                        ksft_test_result_fail("%s\n", test_case.name);
+                        *failures += 1;
+                }
+        } else {
+                /*
+                 * Comparing mremap time is only applicable if entire region
+                 * was faulted in.
+                 */
+                if (threshold_mb == VALIDATION_NO_THRESHOLD ||
+                    test_case.config.region_size <= threshold_mb * _1MB)
+                        ksft_test_result_pass("%s\n\tmremap time: %12lldns\n",
+                                              test_case.name, remap_time);
+                else
+                        ksft_test_result_pass("%s\n", test_case.name);
+        }
+}
+
+static void usage(const char *cmd)
+{
+        fprintf(stderr,
+                "Usage: %s [[-t <threshold_mb>] [-p <pattern_seed>]]\n"
+                "-t\t only validate threshold_mb of the remapped region\n"
+                "  \t if 0 is supplied no threshold is used; all tests\n"
+                "  \t are run and remapped regions validated fully.\n"
+                "  \t The default threshold used is 4MB.\n"
+                "-p\t provide a seed to generate the random pattern for\n"
+                "  \t validating the remapped region.\n", cmd);
+}
+
+static int parse_args(int argc, char **argv, unsigned int *threshold_mb,
+                      unsigned int *pattern_seed)
+{
+        const char *optstr = "t:p:";
+        int opt;
+
+        while ((opt = getopt(argc, argv, optstr)) != -1) {
+                switch (opt) {
+                case 't':
+                        *threshold_mb = atoi(optarg);
+                        break;
+                case 'p':
+                        *pattern_seed = atoi(optarg);
+                        break;
+                default:
+                        usage(argv[0]);
+                        return -1;
+                }
+        }
+
+        if (optind < argc) {
+                usage(argv[0]);
+                return -1;
+        }
+
+        return 0;
+}
+
+int main(int argc, char **argv)
+{
+        int failures = 0;
+        int i, run_perf_tests;
+        unsigned int threshold_mb = VALIDATION_DEFAULT_THRESHOLD;
+        unsigned int pattern_seed;
+        time_t t;
+
+        pattern_seed = (unsigned int) time(&t);
+
+        if (parse_args(argc, argv, &threshold_mb, &pattern_seed) < 0)
+                exit(EXIT_FAILURE);
+
+        ksft_print_msg("Test configs:\n\tthreshold_mb=%u\n\tpattern_seed=%u\n\n",
+                       threshold_mb, pattern_seed);
+
+        struct test test_cases[] = {
+                /* Expected mremap failures */
+                MAKE_TEST(_4KB, _4KB, _4KB, OVERLAPPING, EXPECT_FAILURE,
+                          "mremap - Source and Destination Regions Overlapping"),
+                MAKE_TEST(_4KB, _1KB, _4KB, NON_OVERLAPPING, EXPECT_FAILURE,
+                          "mremap - Destination Address Misaligned (1KB-aligned)"),
+                MAKE_TEST(_1KB, _4KB, _4KB, NON_OVERLAPPING, EXPECT_FAILURE,
+                          "mremap - Source Address Misaligned (1KB-aligned)"),
+
+                /* Src addr PTE aligned */
+                MAKE_TEST(PTE, PTE, _8KB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "8KB mremap - Source PTE-aligned, Destination PTE-aligned"),
+
+                /* Src addr 1MB aligned */
+                MAKE_TEST(_1MB, PTE, _2MB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "2MB mremap - Source 1MB-aligned, Destination PTE-aligned"),
+                MAKE_TEST(_1MB, _1MB, _2MB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "2MB mremap - Source 1MB-aligned, Destination 1MB-aligned"),
+
+                /* Src addr PMD aligned */
+                MAKE_TEST(PMD, PTE, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "4MB mremap - Source PMD-aligned, Destination PTE-aligned"),
+                MAKE_TEST(PMD, _1MB, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "4MB mremap - Source PMD-aligned, Destination 1MB-aligned"),
+                MAKE_TEST(PMD, PMD, _4MB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "4MB mremap - Source PMD-aligned, Destination PMD-aligned"),
+
+                /* Src addr PUD aligned */
+                MAKE_TEST(PUD, PTE, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "2GB mremap - Source PUD-aligned, Destination PTE-aligned"),
+                MAKE_TEST(PUD, _1MB, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "2GB mremap - Source PUD-aligned, Destination 1MB-aligned"),
+                MAKE_TEST(PUD, PMD, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "2GB mremap - Source PUD-aligned, Destination PMD-aligned"),
+                MAKE_TEST(PUD, PUD, _2GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "2GB mremap - Source PUD-aligned, Destination PUD-aligned"),
+        };
+
+        struct test perf_test_cases[] = {
+                /*
+                 * mremap 1GB region - Page table level aligned time
+                 * comparison.
+                 */
+                MAKE_TEST(PTE, PTE, _1GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "1GB mremap - Source PTE-aligned, Destination PTE-aligned"),
+                MAKE_TEST(PMD, PMD, _1GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "1GB mremap - Source PMD-aligned, Destination PMD-aligned"),
+                MAKE_TEST(PUD, PUD, _1GB, NON_OVERLAPPING, EXPECT_SUCCESS,
+                          "1GB mremap - Source PUD-aligned, Destination PUD-aligned"),
+        };
+
+        run_perf_tests = (threshold_mb == VALIDATION_NO_THRESHOLD) ||
+                         (threshold_mb * _1MB >= _1GB);
+
+        ksft_set_plan(ARRAY_SIZE(test_cases) + (run_perf_tests ?
+                      ARRAY_SIZE(perf_test_cases) : 0));
+
+        for (i = 0; i < ARRAY_SIZE(test_cases); i++)
+                run_mremap_test_case(test_cases[i], &failures, threshold_mb,
+                                     pattern_seed);
+
+        if (run_perf_tests) {
+                ksft_print_msg("\n%s\n",
+                               "mremap HAVE_MOVE_PMD/PUD optimization time comparison for 1GB region:");
+                for (i = 0; i < ARRAY_SIZE(perf_test_cases); i++)
+                        run_mremap_test_case(perf_test_cases[i], &failures,
+                                             threshold_mb, pattern_seed);
+        }
+
+        if (failures > 0)
+                ksft_exit_fail();
+        else
+                ksft_exit_pass();
+}
diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index a3f4f30f0a2e..d578ad831813 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -241,6 +241,17 @@ else
 	echo "[PASS]"
 fi
 
+echo "-------------------"
+echo "running mremap_test"
+echo "-------------------"
+./mremap_test
+if [ $? -ne 0 ]; then
+	echo "[FAIL]"
+	exitcode=1
+else
+	echo "[PASS]"
+fi
+
 echo "-----------------"
 echo "running thuge-gen"
 echo "-----------------"
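The core operation the selftest above automates can be distilled into a few
lines. The following minimal standalone sketch is not part of the patch; it
uses a PROT_NONE placeholder mapping in place of the selftest's alignment
search, moves a region with MREMAP_MAYMOVE | MREMAP_FIXED, and checks that
the data survived:

/* Minimal illustration (not from the patch) of the pattern being tested. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t size = 2UL << 20;        /* 2MB region */
        char *src, *dst;

        src = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (src == MAP_FAILED)
                return 1;
        memset(src, 0xab, size);

        /* Reserve a free destination; MREMAP_FIXED will claim it. */
        dst = mmap(NULL, size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (dst == MAP_FAILED)
                return 1;

        dst = mremap(src, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
        if (dst == MAP_FAILED)
                return 1;

        printf("pattern %s after move\n",
               dst[0] == (char) 0xab ? "intact" : "corrupt");
        munmap(dst, size);
        return 0;
}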
From patchwork Wed Oct 14 00:53:07 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 14 Oct 2020 00:53:07 +0000
Subject: [PATCH v4 2/5] arm64: mremap speedup - Enable HAVE_MOVE_PMD
Message-Id: <20201014005320.2233162-3-kaleshsingh@google.com>
In-Reply-To: <20201014005320.2233162-1-kaleshsingh@google.com>
References: <20201014005320.2233162-1-kaleshsingh@google.com>
HAVE_MOVE_PMD enables remapping pages at the PMD level if both the
source and destination addresses are PMD-aligned. HAVE_MOVE_PMD is
already enabled on x86. The original patch [1] that introduced this
config did not enable it on arm64 at the time because of performance
issues with flushing the TLB on every PMD move. These issues have
since been addressed in more recent releases with improvements to the
arm64 TLB invalidation and core mmu_gather code, as Will Deacon
mentioned in [2].

From the data below, it can be inferred that there is approximately
an 8x improvement in performance when HAVE_MOVE_PMD is enabled on
arm64.

--------- Test Results ----------

The following results were obtained on an arm64 device running a 5.4
kernel, by remapping a PMD-aligned, 1GB sized region to a PMD-aligned
destination. The results from 10 iterations of the test are given
below. All times are in nanoseconds.

        Control         HAVE_MOVE_PMD
        9220833         1247761
        9002552         1219896
        9254115         1094792
        8725885         1227760
        9308646         1043698
        9001667         1101771
        8793385         1159896
        8774636         1143594
        9553125         1025833
        9374010         1078125

        9100885.4       1134312.6       <-- Mean time in nanoseconds

Total mremap time for a 1GB sized PMD-aligned region drops from
~9.1 milliseconds to ~1.1 milliseconds (~8x speedup).

[1] https://lore.kernel.org/r/20181108181201.88826-3-joelaf@google.com
[2] https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg140837.html

Signed-off-by: Kalesh Singh
Acked-by: Kirill A. Shutemov
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Andrew Morton
---
Changes in v4:
- Add Kirill's Acked-by.
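The magnitude of the win follows directly from the number of page table
entries that have to be moved. A back-of-the-envelope check of that
arithmetic (this sketch is not from the patch; it assumes a 4KB base page
and a 2MB PMD span, as on arm64 with 4K granule):

/* Entry counts for moving a 1GB region at each page table level. */
#include <stdio.h>

int main(void)
{
        unsigned long long region = 1ULL << 30;         /* 1GB */
        unsigned long long pte_span = 4ULL << 10;       /* 4KB per PTE */
        unsigned long long pmd_span = 2ULL << 20;       /* 2MB per PMD */

        printf("PTE-level moves: %llu\n", region / pte_span); /* 262144 */
        printf("PMD-level moves: %llu\n", region / pmd_span); /* 512 */
        return 0;
}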
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4b136e923ccb..434d6791e869 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -123,6 +123,7 @@ config ARM64
 	select GENERIC_VDSO_TIME_NS
 	select HANDLE_DOMAIN_IRQ
 	select HARDIRQS_SW_RESEND
+	select HAVE_MOVE_PMD
 	select HAVE_PCI
 	select HAVE_ACPI_APEI if (ACPI && EFI)
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
From patchwork Wed Oct 14 00:53:08 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 14 Oct 2020 00:53:08 +0000
Subject: [PATCH v4 3/5] mm: Speedup mremap on 1GB or larger regions
Message-Id: <20201014005320.2233162-4-kaleshsingh@google.com>
In-Reply-To: <20201014005320.2233162-1-kaleshsingh@google.com>
References: <20201014005320.2233162-1-kaleshsingh@google.com>

Android needs to move large memory regions for garbage collection.
The GC requires moving physical pages of multi-gigabyte heaps using
mremap. During this move, the application threads have to be paused
for correctness. It is critical to keep this pause as short as
possible to avoid jitter during user interaction.

Optimize mremap for >= 1GB sized regions by moving at the PUD/PGD
level if the source and destination addresses are PUD-aligned. For
CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves
PGD entries, since the PUD entry is "folded back" onto the PGD entry.

Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
isn't supported/tested can turn this off by not selecting the config.

Fix the build test error from v1 of this series reported by the
kernel test robot in [1].

[1] https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/CKPGL4FH4NG7TGH2CVYX2UX76L25BTA3/

Signed-off-by: Kalesh Singh
Reported-by: kernel test robot
Acked-by: Kirill A. Shutemov
Cc: Andrew Morton
---
Changes in v2:
- Update commit message with description of Android GC's use case.
- Move set_pud_at() to a separate patch.
- Use switch() instead of ifs in move_pgt_entry().
- Fix build test error reported by kernel test robot on x86_64 in [1].
  Guard move_huge_pmd() with IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE),
  since this section doesn't get optimized out in the kernel test
  robot's build test when HAVE_MOVE_PUD is enabled.
- Keep WARN_ON_ONCE(1) instead of BUILD_BUG() for the aforementioned
  reason.

Changes in v3:
- Move get_old_pud() and alloc_new_pud() out of
  #ifdef CONFIG_HAVE_MOVE_PUD.
- Have get_old_pmd() and alloc_new_pmd() use get_old_pud() and
  alloc_new_pud().
- Use switch() in get_extent() instead of ifs.
- Add BUILD_BUG() to default case of get_extent().
- Replace #ifdef CONFIG_HAVE_MOVE_PMD/PUD in move_page_tables() with
  IS_ENABLED(CONFIG_HAVE_MOVE_PMD/PUD).
- Make lines 80 cols or less, where they don't need to be longer.
- s/=  /= /g (fixed double spaces after '=').

Changes in v4:
- Add Kirill's Acked-by.

 arch/Kconfig |   7 ++
 mm/mremap.c  | 230 ++++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 197 insertions(+), 40 deletions(-)
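The extent calculation that this patch factors into get_extent() decides how
far each loop iteration may advance: up to the next PMD (or PUD) boundary of
both the source and the destination, and never past the end of the source.
A hedged userspace re-creation with worked example values follows (PMD
geometry hardcoded for 4KB pages; an illustration, not the kernel code):

/* Userspace illustration of the extent logic used in mm/mremap.c. */
#include <stdio.h>

#define PMD_SIZE (2UL << 20)
#define PMD_MASK (~(PMD_SIZE - 1))

static unsigned long pmd_extent(unsigned long old_addr,
                                unsigned long old_end,
                                unsigned long new_addr)
{
        unsigned long next, extent;

        next = (old_addr + PMD_SIZE) & PMD_MASK;
        /* even if next overflowed, extent below will be ok */
        extent = (next > old_end) ? old_end - old_addr : next - old_addr;
        next = (new_addr + PMD_SIZE) & PMD_MASK;
        if (extent > next - new_addr)
                extent = next - new_addr;
        return extent;
}

int main(void)
{
        /* Both addresses PMD-aligned: a full 2MB step is possible. */
        printf("%lu\n", pmd_extent(0x40000000UL,
                                   0x40000000UL + (4UL << 20),
                                   0x80000000UL));      /* 2097152 */
        /* Destination 1MB past a boundary: the step shrinks to 1MB. */
        printf("%lu\n", pmd_extent(0x40000000UL,
                                   0x40000000UL + (4UL << 20),
                                   0x80100000UL));      /* 1048576 */
        return 0;
}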
diff --git a/arch/Kconfig b/arch/Kconfig
index 76ec3395b843..79da6d714264 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -608,6 +608,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PUD
+	bool
+	help
+	  Architectures that select this are able to move page tables at the
+	  PUD level. If there are only 3 page table levels, the move effectively
+	  happens at the PGD level.
+
 config HAVE_MOVE_PMD
 	bool
 	help
diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..078f731277b6 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -30,12 +30,11 @@
 
 #include "internal.h"
 
-static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
+static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
-	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
 	if (pgd_none_or_clear_bad(pgd))
@@ -49,6 +48,18 @@ static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
 	if (pud_none_or_clear_bad(pud))
 		return NULL;
 
+	return pud;
+}
+
+static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
+{
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pud = get_old_pud(mm, addr);
+	if (!pud)
+		return NULL;
+
 	pmd = pmd_offset(pud, addr);
 	if (pmd_none(*pmd))
 		return NULL;
@@ -56,19 +67,27 @@ static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
 	return pmd;
 }
 
-static pmd_t *alloc_new_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
+static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
 			    unsigned long addr)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
-	pud_t *pud;
-	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
 	p4d = p4d_alloc(mm, pgd, addr);
 	if (!p4d)
 		return NULL;
-	pud = pud_alloc(mm, p4d, addr);
+
+	return pud_alloc(mm, p4d, addr);
+}
+
+static pmd_t *alloc_new_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
+			    unsigned long addr)
+{
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pud = alloc_new_pud(mm, vma, addr);
 	if (!pud)
 		return NULL;
@@ -249,14 +268,148 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 
 	return true;
 }
+#else
+static inline bool move_normal_pmd(struct vm_area_struct *vma,
+		unsigned long old_addr, unsigned long new_addr, pmd_t *old_pmd,
+		pmd_t *new_pmd)
+{
+	return false;
+}
 #endif
 
+#ifdef CONFIG_HAVE_MOVE_PUD
+static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pud_t pud;
+
+	/*
+	 * The destination pud shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_lock prevents deadlock.
+	 */
+	old_ptl = pud_lock(vma->vm_mm, old_pud);
+	new_ptl = pud_lockptr(mm, new_pud);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pud */
+	pud = *old_pud;
+	pud_clear(old_pud);
+
+	VM_BUG_ON(!pud_none(*new_pud));
+
+	/* Set the new pud */
+	set_pud_at(mm, new_addr, new_pud, pud);
+	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+#else
+static inline bool move_normal_pud(struct vm_area_struct *vma,
+		unsigned long old_addr, unsigned long new_addr, pud_t *old_pud,
+		pud_t *new_pud)
+{
+	return false;
+}
+#endif
+
+enum pgt_entry {
+	NORMAL_PMD,
+	HPAGE_PMD,
+	NORMAL_PUD,
+};
+
+/*
+ * Returns an extent of the corresponding size for the pgt_entry specified if
+ * valid. Else returns a smaller extent bounded by the end of the source and
+ * destination pgt_entry.
+ */
+static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
+			unsigned long old_end, unsigned long new_addr)
+{
+	unsigned long next, extent, mask, size;
+
+	switch (entry) {
+	case HPAGE_PMD:
+	case NORMAL_PMD:
+		mask = PMD_MASK;
+		size = PMD_SIZE;
+		break;
+	case NORMAL_PUD:
+		mask = PUD_MASK;
+		size = PUD_SIZE;
+		break;
+	default:
+		BUILD_BUG();
+		break;
+	}
+
+	next = (old_addr + size) & mask;
+	/* even if next overflowed, extent below will be ok */
+	extent = (next > old_end) ? old_end - old_addr : next - old_addr;
+	next = (new_addr + size) & mask;
+	if (extent > next - new_addr)
+		extent = next - new_addr;
+	return extent;
+}
+
+/*
+ * Attempts to speedup the move by moving entry at the level corresponding to
+ * pgt_entry. Returns true if the move was successful, else false.
+ */
+static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
+			unsigned long old_addr, unsigned long new_addr,
+			void *old_entry, void *new_entry, bool need_rmap_locks)
+{
+	bool moved = false;
+
+	/* See comment in move_ptes() */
+	if (need_rmap_locks)
+		take_rmap_locks(vma);
+
+	switch (entry) {
+	case NORMAL_PMD:
+		moved = move_normal_pmd(vma, old_addr, new_addr, old_entry,
+					new_entry);
+		break;
+	case NORMAL_PUD:
+		moved = move_normal_pud(vma, old_addr, new_addr, old_entry,
+					new_entry);
+		break;
+	case HPAGE_PMD:
+		moved = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+			move_huge_pmd(vma, old_addr, new_addr, old_entry,
+				      new_entry);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+
+	if (need_rmap_locks)
+		drop_rmap_locks(vma);
+
+	return moved;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
 		bool need_rmap_locks)
 {
-	unsigned long extent, next, old_end;
+	unsigned long extent, old_end;
 	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
 
@@ -269,53 +422,50 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
-		next = (old_addr + PMD_SIZE) & PMD_MASK;
-		/* even if next overflowed, extent below will be ok */
-		extent = next - old_addr;
-		if (extent > old_end - old_addr)
-			extent = old_end - old_addr;
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
-		if (extent > next - new_addr)
-			extent = next - new_addr;
+		/*
+		 * If extent is PUD-sized try to speed up the move by moving at the
+		 * PUD level if possible.
+		 */
+		extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
+		if (IS_ENABLED(CONFIG_HAVE_MOVE_PUD) && extent == PUD_SIZE) {
+			pud_t *old_pud, *new_pud;
+
+			old_pud = get_old_pud(vma->vm_mm, old_addr);
+			if (!old_pud)
+				continue;
+			new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
+			if (!new_pud)
+				break;
+			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
+					   old_pud, new_pud, need_rmap_locks))
+				continue;
+		}
+
+		extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
 		new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
 		if (!new_pmd)
 			break;
-		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
-			if (extent == HPAGE_PMD_SIZE) {
-				bool moved;
-				/* See comment in move_ptes() */
-				if (need_rmap_locks)
-					take_rmap_locks(vma);
-				moved = move_huge_pmd(vma, old_addr, new_addr,
-						      old_pmd, new_pmd);
-				if (need_rmap_locks)
-					drop_rmap_locks(vma);
-				if (moved)
-					continue;
-			}
+		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) ||
+		    pmd_devmap(*old_pmd)) {
+			if (extent == HPAGE_PMD_SIZE &&
+			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr,
+					   old_pmd, new_pmd, need_rmap_locks))
+				continue;
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
-		} else if (extent == PMD_SIZE) {
-#ifdef CONFIG_HAVE_MOVE_PMD
+		} else if (IS_ENABLED(CONFIG_HAVE_MOVE_PMD) &&
+			   extent == PMD_SIZE) {
 			/*
 			 * If the extent is PMD-sized, try to speed the move by
 			 * moving at the PMD level if possible.
 			 */
-			bool moved;
-
-			if (need_rmap_locks)
-				take_rmap_locks(vma);
-			moved = move_normal_pmd(vma, old_addr, new_addr,
-						old_pmd, new_pmd);
-			if (need_rmap_locks)
-				drop_rmap_locks(vma);
-			if (moved)
+			if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
+					   old_pmd, new_pmd, need_rmap_locks))
 				continue;
-#endif
 		}
 
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
From patchwork Wed Oct 14 00:53:09 2020
Date: Wed, 14 Oct 2020 00:53:09 +0000
In-Reply-To: <20201014005320.2233162-1-kaleshsingh@google.com>
Message-Id: <20201014005320.2233162-5-kaleshsingh@google.com>
References: <20201014005320.2233162-1-kaleshsingh@google.com>
Subject: [PATCH v4 4/5] arm64: mremap speedup - Enable HAVE_MOVE_PUD
From: Kalesh Singh <kaleshsingh@google.com>

HAVE_MOVE_PUD enables remapping pages at the PUD level if both the
source and destination addresses are PUD-aligned. From the data
below, it can be inferred that there is approximately a 19x
improvement in performance on arm64 with HAVE_MOVE_PUD enabled.

------- Test Results ---------

The following results were obtained using a 5.4 kernel, by remapping
a PUD-aligned, 1GB sized region to a PUD-aligned destination. The
results from 10 iterations of the test are given below.

Total mremap times for 1GB data on arm64. All times are in
nanoseconds.

        Control         HAVE_MOVE_PUD
        1247761         74271
        1219896         46771
        1094792         59687
        1227760         48385
        1043698         76666
        1101771         50365
        1159896         52500
        1143594         75261
        1025833         61354
        1078125         48697

        1134312.6       59395.7         <-- Mean time in nanoseconds

A 1GB mremap completion time drops from ~1.1 milliseconds to
~59 microseconds on arm64 (~19x speedup).

Signed-off-by: Kalesh Singh
Acked-by: Kirill A. Shutemov
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Andrew Morton
---
Changes in v3:
- Add set_pud_at() macro - Used by move_normal_pud().

Changes in v4:
- Add Kirill's Acked-by.
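Userspace only hits this fast path when both the source and destination are
PUD-aligned, and mmap() makes no 1GB alignment guarantee, so an allocator
has to arrange the alignment itself. One possible sketch (not from this
series; the over-map-and-trim approach and the helper name are illustrative
assumptions):

/* Obtain a 1GB (PUD) aligned mapping by over-allocating and trimming. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define PUD_ALIGN (1UL << 30)

static void *mmap_pud_aligned(size_t size)
{
        size_t len = size + PUD_ALIGN;  /* slack to guarantee alignment */
        uintptr_t base, aligned;

        base = (uintptr_t) mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if ((void *) base == MAP_FAILED)
                return NULL;

        aligned = (base + PUD_ALIGN - 1) & ~(PUD_ALIGN - 1);
        /* Trim the unaligned head and the unused tail. */
        if (aligned > base)
                munmap((void *) base, aligned - base);
        if (base + len > aligned + size)
                munmap((void *) (aligned + size),
                       base + len - (aligned + size));
        return (void *) aligned;
}

int main(void)
{
        void *p = mmap_pud_aligned(1UL << 30);

        printf("PUD-aligned region at %p\n", p);
        return p == NULL;
}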
 arch/arm64/Kconfig               | 1 +
 arch/arm64/include/asm/pgtable.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 434d6791e869..7191a79fb44d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -124,6 +124,7 @@ config ARM64
 	select HANDLE_DOMAIN_IRQ
 	select HARDIRQS_SW_RESEND
 	select HAVE_MOVE_PMD
+	select HAVE_MOVE_PUD
 	select HAVE_PCI
 	select HAVE_ACPI_APEI if (ACPI && EFI)
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index a11bf52e0c38..0b0b36974757 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -454,6 +454,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
+#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
From patchwork Wed Oct 14 00:53:10 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Wed, 14 Oct 2020 00:53:10 +0000
Subject: [PATCH v4 5/5] x86: mremap speedup - Enable HAVE_MOVE_PUD
Message-Id: <20201014005320.2233162-6-kaleshsingh@google.com>
In-Reply-To: <20201014005320.2233162-1-kaleshsingh@google.com>
References: <20201014005320.2233162-1-kaleshsingh@google.com>
HAVE_MOVE_PUD enables remapping pages at the PUD level if both the
source and destination addresses are PUD-aligned. From the data
below, it can be inferred that there is approximately a 13x
improvement in performance on x86 with HAVE_MOVE_PUD enabled.

------- Test Results ---------

The following results were obtained using a 5.4 kernel, by remapping
a PUD-aligned, 1GB sized region to a PUD-aligned destination. The
results from 10 iterations of the test are given below.

Total mremap times for 1GB data on x86. All times are in nanoseconds.

        Control         HAVE_MOVE_PUD
        180394          15089
        235728          14056
        238931          25741
        187330          13838
        241742          14187
        177925          14778
        182758          14728
        160872          14418
        205813          15107
        245722          13998

        205721.5        15594           <-- Mean time in nanoseconds

A 1GB mremap completion time drops from ~205 microseconds to
~15 microseconds on x86 (~13x speedup).

Signed-off-by: Kalesh Singh
Acked-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: H. Peter Anvin
Acked-by: Ingo Molnar
---
Changes in v4:
- Add Kirill's Acked-by.

 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 835d93006bd6..e199760d54fc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -198,6 +198,7 @@ config X86
 	select HAVE_MIXED_BREAKPOINTS_REGS
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_MOVE_PMD
+	select HAVE_MOVE_PUD
 	select HAVE_NMI
 	select HAVE_OPROFILE
 	select HAVE_OPTPROBES