From patchwork Fri Jul 31 15:54:01 2015
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 6914431
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, mark.burton@greensocs.com, fred.konrad@greensocs.com
Cc: qemu-devel@nongnu.org, peter.maydell@linaro.org, a.spyridakis@virtualopensystems.com, drjones@redhat.com, a.rigo@virtualopensystems.com, claudio.fontana@huawei.com, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v5 11/11] new: arm/barrier-test for memory barriers
Date: Fri, 31 Jul 2015 16:54:01 +0100
Message-Id: <1438358041-18021-12-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1438358041-18021-1-git-send-email-alex.bennee@linaro.org>
References: <1438358041-18021-1-git-send-email-alex.bennee@linaro.org>

This test has been written mainly to stress multi-threaded TCG behaviour,
but in its default (unlocked) mode it will also demonstrate failure on
real hardware.
The test takes the following parameters:
  - "lock"   use GCC's locking semantics
  - "excl"   use load/store exclusive semantics
  - "acqrel" use acquire/release semantics

Currently excl/acqrel lock up on MTTCG.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 arm/barrier-test.c           | 206 +++++++++++++++++++++++++++++++++++++++++++
 config/config-arm-common.mak |   2 +
 2 files changed, 208 insertions(+)
 create mode 100644 arm/barrier-test.c

diff --git a/arm/barrier-test.c b/arm/barrier-test.c
new file mode 100644
index 0000000..53d690b
--- /dev/null
+++ b/arm/barrier-test.c
@@ -0,0 +1,206 @@
+#include <libcflat.h>
+#include <asm/smp.h>
+#include <asm/cpumask.h>
+#include <asm/barrier.h>
+#include <asm/mmu.h>
+
+#include <prng.h>
+
+#define MAX_CPUS 4
+
+/* How many increments to do */
+static int increment_count = 10000000;
+
+/* Shared value; we use the alignment to ensure the global_lock value
+ * doesn't share a page with it */
+static unsigned int shared_value;
+
+/* PAGE_SIZE * uint32_t means we span several pages */
+static uint32_t memory_array[PAGE_SIZE];
+
+__attribute__((aligned(PAGE_SIZE))) static unsigned int per_cpu_value[MAX_CPUS];
+__attribute__((aligned(PAGE_SIZE))) static cpumask_t smp_test_complete;
+__attribute__((aligned(PAGE_SIZE))) static int global_lock;
+
+struct isaac_ctx prng_context[MAX_CPUS];
+
+void (*inc_fn)(void);
+
+static void lock(int *lock_var)
+{
+	while (__sync_lock_test_and_set(lock_var, 1));
+}
+
+static void unlock(int *lock_var)
+{
+	__sync_lock_release(lock_var);
+}
+
+static void increment_shared(void)
+{
+	shared_value++;
+}
+
+static void increment_shared_with_lock(void)
+{
+	lock(&global_lock);
+	shared_value++;
+	unlock(&global_lock);
+}
+
+static void increment_shared_with_excl(void)
+{
+#if defined (__LP64__) || defined (_LP64)
+	asm volatile(
+	"1:	ldxr	w0, [%[sptr]]\n"
+	"	add	w0, w0, #0x1\n"
+	"	stxr	w1, w0, [%[sptr]]\n"
+	"	cbnz	w1, 1b\n"
+	: /* out */
+	: [sptr] "r" (&shared_value) /* in */
+	: "w0", "w1", "cc");
+#else
+	asm volatile(
+	"1:	ldrex	r0, [%[sptr]]\n"
+	"	add	r0, r0, #0x1\n"
+	"	strex	r1, r0, [%[sptr]]\n"
+	"	cmp	r1, #0\n"
+	"	bne	1b\n"
+	: /* out */
+	: [sptr] "r" (&shared_value) /* in */
+	: "r0", "r1", "cc");
+#endif
+}
+
+static void increment_shared_with_acqrel(void)
+{
+#if defined (__LP64__) || defined (_LP64)
+	asm volatile(
+	"	ldar	w0, [%[sptr]]\n"
+	"	add	w0, w0, #0x1\n"
+	"	stlr	w0, [%[sptr]]\n"
+	: /* out */
+	: [sptr] "r" (&shared_value) /* in */
+	: "w0");
+#else
+	/* ARMv7 has no acquire/release semantics but we
+	 * can ensure the results of the write are propagated
+	 * with the use of barriers.
+	 */
+	asm volatile(
+	"1:	ldrex	r0, [%[sptr]]\n"
+	"	add	r0, r0, #0x1\n"
+	"	strex	r1, r0, [%[sptr]]\n"
+	"	cmp	r1, #0\n"
+	"	bne	1b\n"
+	"	dmb\n"
+	: /* out */
+	: [sptr] "r" (&shared_value) /* in */
+	: "r0", "r1", "cc");
+#endif
+}
+
+/* The idea of this is just to generate some random load/store
+ * activity which may or may not race with an un-barriered increment
+ * of the shared counter.
+ */
+static void shuffle_memory(int cpu)
+{
+	int i;
+	uint32_t lspat = isaac_next_uint32(&prng_context[cpu]);
+	uint32_t seq = isaac_next_uint32(&prng_context[cpu]);
+	int count = seq & 0x1f;
+	uint32_t val = 0;
+
+	seq >>= 5;
+
+	for (i = 0; i < count; i++) {
+		int index = seq & (PAGE_SIZE - 1);
+		if (lspat & 1)
+			val = memory_array[index];
+		else
+			memory_array[index] = val;
+		seq >>= PAGE_SHIFT;
+		seq ^= lspat;
+		lspat >>= 1;
+	}
+}
+
+static void do_increment(void)
+{
+	int i;
+	int cpu = smp_processor_id();
+
+	printf("CPU%d online\n", cpu);
+
+	for (i = 0; i < increment_count; i++) {
+		per_cpu_value[cpu]++;
+		inc_fn();
+
+		shuffle_memory(cpu);
+	}
+
+	printf("CPU%d: Done, %d incs\n", cpu, per_cpu_value[cpu]);
+
+	cpumask_set_cpu(cpu, &smp_test_complete);
+	if (cpu != 0)
+		halt();
+}
+
+int main(int argc, char **argv)
+{
+	int cpu;
+	unsigned int i, sum = 0;
+	static const unsigned char seed[] = "myseed";
+
+	inc_fn = &increment_shared;
+
+	isaac_init(&prng_context[0], &seed[0], sizeof(seed));
+
+	for (i = 0; i < (unsigned int) argc; i++) {
+		if (strcmp(argv[i], "lock") == 0)
+			inc_fn = &increment_shared_with_lock;
+		else if (strcmp(argv[i], "excl") == 0)
+			inc_fn = &increment_shared_with_excl;
+		else if (strcmp(argv[i], "acqrel") == 0)
+			inc_fn = &increment_shared_with_acqrel;
+	}
+
+	for_each_present_cpu(cpu) {
+		if (cpu == 0)
+			continue;
+		isaac_init(&prng_context[cpu], &seed[0], sizeof(seed));
+		smp_boot_secondary(cpu, do_increment);
+	}
+
+	do_increment();
+
+	while (!cpumask_full(&smp_test_complete))
+		cpu_relax();
+
+	for_each_present_cpu(cpu)
+		sum += per_cpu_value[cpu];
+
+	report("total incs %d", sum == shared_value, shared_value);
+
+	return report_summary();
+}