From patchwork Thu Dec 2 11:53:48 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 12694501
From: Alex Bennée
To: pbonzini@redhat.com, drjones@redhat.com, thuth@redhat.com
Cc: kvm@vger.kernel.org, qemu-arm@nongnu.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, christoffer.dall@arm.com, maz@kernel.org, Alex Bennée, Mark Rutland
Subject: [kvm-unit-tests PATCH v9 5/9] arm/tlbflush-code: TLB flush during code execution
Date: Thu, 2 Dec 2021 11:53:48 +0000
Message-Id: <20211202115352.951548-6-alex.bennee@linaro.org>
In-Reply-To: <20211202115352.951548-1-alex.bennee@linaro.org>
References: <20211202115352.951548-1-alex.bennee@linaro.org>

This adds a fairly brain-dead torture test for TLB flushes intended
for stressing the MTTCG QEMU build. It takes the usual -smp option
for multiple CPUs. By default CPU0 will do a TLBIALL flush after
each cycle.
You can pass options via -append to control additional aspects of the
test:

  - "page"    flush each page in turn (one per function)
  - "self"    do the flush after each computation cycle
  - "verbose" report progress on each computation cycle

Signed-off-by: Alex Bennée
Cc: Mark Rutland
Message-Id: <20211118184650.661575-7-alex.bennee@linaro.org>

---
v9
  - move tests back into unittests.cfg (with nodefault mttcg)
  - replace printf with report_info
  - drop accel = tcg
---
 arm/Makefile.common |   1 +
 arm/tlbflush-code.c | 209 ++++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg   |  25 ++++++
 3 files changed, 235 insertions(+)
 create mode 100644 arm/tlbflush-code.c

diff --git a/arm/Makefile.common b/arm/Makefile.common
index 99bcf3fc..e3f04f2d 100644
--- a/arm/Makefile.common
+++ b/arm/Makefile.common
@@ -12,6 +12,7 @@ tests-common += $(TEST_DIR)/gic.flat
 tests-common += $(TEST_DIR)/psci.flat
 tests-common += $(TEST_DIR)/sieve.flat
 tests-common += $(TEST_DIR)/pl031.flat
+tests-common += $(TEST_DIR)/tlbflush-code.flat
 
 tests-all = $(tests-common) $(tests)
 all: directories $(tests-all)
diff --git a/arm/tlbflush-code.c b/arm/tlbflush-code.c
new file mode 100644
index 00000000..bf9eb111
--- /dev/null
+++ b/arm/tlbflush-code.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * TLB Flush Race Tests
+ *
+ * These tests are designed to test for incorrect TLB flush semantics
+ * under emulation. The initial CPU will set all the others working on
+ * a computation task and will then trigger TLB flushes across the
+ * system. It doesn't actually need to re-map anything but the flushes
+ * themselves will trigger QEMU's TCG self-modifying code detection
+ * which will invalidate any generated code causing re-translation.
+ * Eventually the code buffer will fill and a general tb_flush() will
+ * be triggered.
+ *
+ * Copyright (C) 2016-2021, Linaro, Alex Bennée
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.
+ */
+
+#include <libcflat.h>
+#include <asm/smp.h>
+#include <asm/cpumask.h>
+#include <asm/barrier.h>
+#include <asm/mmu.h>
+
+#define SEQ_LENGTH 10
+#define SEQ_HASH 0x7cd707fe
+
+static cpumask_t smp_test_complete;
+static int flush_count = 1000000;
+static bool flush_self;
+static bool flush_page;
+static bool flush_verbose;
+
+/*
+ * Work functions
+ *
+ * These work functions need to be:
+ *
+ *  - page aligned, so we can flush one function at a time
+ *  - have branches, so QEMU TCG generates multiple basic blocks
+ *  - call across pages, so we exercise the TCG basic block slow path
+ */
+
+/* Adler32 */
+__attribute__((aligned(PAGE_SIZE))) static
+uint32_t hash_array(const void *buf, size_t buflen)
+{
+	const uint8_t *data = (uint8_t *) buf;
+	uint32_t s1 = 1;
+	uint32_t s2 = 0;
+
+	for (size_t n = 0; n < buflen; n++) {
+		s1 = (s1 + data[n]) % 65521;
+		s2 = (s2 + s1) % 65521;
+	}
+	return (s2 << 16) | s1;
+}
+
+__attribute__((aligned(PAGE_SIZE))) static
+void create_fib_sequence(int length, unsigned int *array)
+{
+	int i;
+
+	/* first two values */
+	array[0] = 0;
+	array[1] = 1;
+	for (i = 2; i < length; i++)
+		array[i] = array[i-2] + array[i-1];
+}
+
+__attribute__((aligned(PAGE_SIZE))) static
+unsigned long long factorial(unsigned int n)
+{
+	unsigned int i;
+	unsigned long long fac = 1;
+
+	for (i = 1; i <= n; i++)
+		fac = fac * i;
+	return fac;
+}
+
+__attribute__((aligned(PAGE_SIZE))) static
+void factorial_array(unsigned int n, unsigned int *input,
+		     unsigned long long *output)
+{
+	unsigned int i;
+
+	for (i = 0; i < n; i++)
+		output[i] = factorial(input[i]);
+}
+
+__attribute__((aligned(PAGE_SIZE))) static
+unsigned int do_computation(void)
+{
+	unsigned int fib_array[SEQ_LENGTH];
+	unsigned long long facfib_array[SEQ_LENGTH];
+	uint32_t fib_hash, facfib_hash;
+
+	create_fib_sequence(SEQ_LENGTH, &fib_array[0]);
+	fib_hash = hash_array(&fib_array[0], sizeof(fib_array));
+	factorial_array(SEQ_LENGTH, &fib_array[0], &facfib_array[0]);
+	facfib_hash = hash_array(&facfib_array[0], sizeof(facfib_array));
+
+	return (fib_hash ^ facfib_hash);
+}
+
+/* This provides a table of the work functions so we can flush each
+ * page individually
+ */
+static void *pages[] = {&hash_array, &create_fib_sequence, &factorial,
+			&factorial_array, &do_computation};
+
+static void do_flush(int i)
+{
+	if (flush_page)
+		flush_tlb_page((unsigned long)pages[i % ARRAY_SIZE(pages)]);
+	else
+		flush_tlb_all();
+}
+
+
+static void just_compute(void)
+{
+	int i, errors = 0;
+	int cpu = smp_processor_id();
+
+	uint32_t result;
+
+	report_info("CPU%d online", cpu);
+
+	for (i = 0 ; i < flush_count; i++) {
+		result = do_computation();
+
+		if (result != SEQ_HASH) {
+			errors++;
+			report_info("CPU%d: seq%d 0x%"PRIx32"!=0x%x",
+				    cpu, i, result, SEQ_HASH);
+		}
+
+		if (flush_verbose && (i % 1000) == 0)
+			report_info("CPU%d: seq%d", cpu, i);
+
+		if (flush_self)
+			do_flush(i);
+	}
+
+	report(errors == 0, "CPU%d: Done - Errors: %d", cpu, errors);
+
+	cpumask_set_cpu(cpu, &smp_test_complete);
+	if (cpu != 0)
+		halt();
+}
+
+static void just_flush(void)
+{
+	int cpu = smp_processor_id();
+	int i = 0;
+
+	/*
+	 * Set our CPU as done, keep flushing until everyone else
+	 * finished
+	 */
+	cpumask_set_cpu(cpu, &smp_test_complete);
+
+	while (!cpumask_full(&smp_test_complete))
+		do_flush(i++);
+
+	report_info("CPU%d: Done - Triggered %d flushes", cpu, i);
+}
+
+int main(int argc, char **argv)
+{
+	int cpu, i;
+	char prefix[100];
+
+	for (i = 0; i < argc; i++) {
+		char *arg = argv[i];
+
+		if (strcmp(arg, "page") == 0)
+			flush_page = true;
+
+		if (strcmp(arg, "self") == 0)
+			flush_self = true;
+
+		if (strcmp(arg, "verbose") == 0)
+			flush_verbose = true;
+	}
+
+	snprintf(prefix, sizeof(prefix), "tlbflush_%s_%s",
+		 flush_page ? "page" : "all",
+		 flush_self ? "self" : "other");
+	report_prefix_push(prefix);
+
+	for_each_present_cpu(cpu) {
+		if (cpu == 0)
+			continue;
+		smp_boot_secondary(cpu, just_compute);
+	}
+
+	if (flush_self)
+		just_compute();
+	else
+		just_flush();
+
+	while (!cpumask_full(&smp_test_complete))
+		cpu_relax();
+
+	return report_summary();
+}
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
index 945c2d07..34c8a95b 100644
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
@@ -241,3 +241,28 @@ arch = arm64
 file = cache.flat
 arch = arm64
 groups = cache
+
+# TLB Torture Tests
+[tlbflush-code::all_other]
+file = tlbflush-code.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+groups = nodefault mttcg
+
+[tlbflush-code::page_other]
+file = tlbflush-code.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'page'
+groups = nodefault mttcg
+
+[tlbflush-code::all_self]
+file = tlbflush-code.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'self'
+groups = nodefault mttcg
+
+[tlbflush-code::page_self]
+file = tlbflush-code.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'page self'
+groups = nodefault mttcg