From patchwork Tue Sep 16 02:06:15 2014
X-Patchwork-Submitter: Andrew Jones
X-Patchwork-Id: 4914311
From: Andrew Jones
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: christoffer.dall@linaro.org, marc.zyngier@arm.com, pbonzini@redhat.com
Subject: [PATCH kvm-unit-tests] arm: fix crash when caches are off
Date: Tue, 16 Sep 2014 04:06:15 +0200
Message-Id: <1410833175-25547-1-git-send-email-drjones@redhat.com>
We shouldn't try Load-Exclusive instructions unless we've enabled memory
management, as these instructions depend on the data cache unit's
coherency monitor. This patch adds a new setup boolean, initialized to
false, that is used to guard Load-Exclusive instructions. Eventually
we'll add more setup code that sets it to true.

Note: This problem became visible on boards with Cortex-A7 processors.
Testing with Cortex-A15 didn't expose it, as those may have an external
coherency monitor that still allows the instruction to execute (on A7s
we got data aborts). Even on A15s, though, it's not clear from the spec
whether the instructions will behave as expected while caches are off,
so we no longer allow Load-Exclusive instructions on those processors
either, unless caches are enabled.

Signed-off-by: Andrew Jones
---
 lib/arm/asm/setup.h |  2 ++
 lib/arm/setup.c     |  1 +
 lib/arm/spinlock.c  | 10 ++++++++++
 3 files changed, 13 insertions(+)

diff --git a/lib/arm/asm/setup.h b/lib/arm/asm/setup.h
index 21445ef2085fc..9c54c184e2866 100644
--- a/lib/arm/asm/setup.h
+++ b/lib/arm/asm/setup.h
@@ -20,6 +20,8 @@ extern phys_addr_t __phys_offset, __phys_end;
 #define PHYS_SIZE	(1ULL << PHYS_SHIFT)
 #define PHYS_MASK	(PHYS_SIZE - 1ULL)
 
+extern bool mem_caches_enabled;
+
 #define L1_CACHE_SHIFT	6
 #define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
 #define SMP_CACHE_BYTES	L1_CACHE_BYTES
diff --git a/lib/arm/setup.c b/lib/arm/setup.c
index 3941c9757dcb2..f7ed639c9d499 100644
--- a/lib/arm/setup.c
+++ b/lib/arm/setup.c
@@ -25,6 +25,7 @@ u32 cpus[NR_CPUS] = { [0 ... NR_CPUS-1] = (~0UL) };
 int nr_cpus;
 
 phys_addr_t __phys_offset, __phys_end;
+bool mem_caches_enabled;
 
 static void cpu_set(int fdtnode __unused, u32 regval, void *info __unused)
 {
diff --git a/lib/arm/spinlock.c b/lib/arm/spinlock.c
index d8a6d4c3383d6..43539c5e84062 100644
--- a/lib/arm/spinlock.c
+++ b/lib/arm/spinlock.c
@@ -1,12 +1,22 @@
 #include "libcflat.h"
 #include "asm/spinlock.h"
 #include "asm/barrier.h"
+#include "asm/setup.h"
 
 void spin_lock(struct spinlock *lock)
 {
 	u32 val, fail;
 
 	dmb();
+
+	/*
+	 * Without caches enabled Load-Exclusive instructions may fail.
+	 * In that case we do nothing, and just hope the caller knows
+	 * what they're doing.
+	 */
+	if (!mem_caches_enabled)
+		return;
+
 	do {
 		asm volatile(
 		"1:	ldrex	%0, [%2]\n"