From patchwork Tue Nov 13 16:08:14 2012
X-Patchwork-Submitter: Etienne CARRIERE
X-Patchwork-Id: 1740761
From: Etienne CARRIERE
To: "linux-arm-kernel@lists.infradead.org", Will Deacon, Russell King,
	Marc Zyngier, Catalin Marinas
Cc: Etienne CARRIERE, "Linus Walleij (linus.walleij@linaro.org)",
	Rabin VINCENT, Srinidhi KASAGAR
Date: Tue, 13 Nov 2012 17:08:14 +0100
Subject: [PATCH 1/2] arm/mm: L2CC shared mutex with ARM TZ
Message-ID: <0154077FE026E54BB093CA7EB3FD1AE32B57AF1B59@SAFEX1MAIL3.st.com>

From: Etienne Carriere

Secure code running in the TrustZone secure world may need to perform
L2 cache maintenance operations. A shared mutex is therefore required
to synchronize Linux L2CC maintenance with TZ L2CC maintenance.

The TZ mutex is an "arch_spinlock": a single 32-bit DDR cell (ARMv7-A
mutex).
The Linux L2 cache driver must lock the TZ mutex when one is enabled.

Signed-off-by: Etienne Carriere
---
 arch/arm/include/asm/outercache.h |  9 ++++
 arch/arm/mm/cache-l2x0.c          | 87 +++++++++++++++++++++++++++++----------
 2 files changed, 74 insertions(+), 22 deletions(-)

diff --git a/arch/arm/include/asm/outercache.h b/arch/arm/include/asm/outercache.h
index 53426c6..7aa5eac 100644
--- a/arch/arm/include/asm/outercache.h
+++ b/arch/arm/include/asm/outercache.h
@@ -35,6 +35,7 @@ struct outer_cache_fns {
 #endif
 	void (*set_debug)(unsigned long);
 	void (*resume)(void);
+	bool (*tz_mutex)(unsigned long);
 };

 #ifdef CONFIG_OUTER_CACHE
@@ -81,6 +82,13 @@ static inline void outer_resume(void)
 		outer_cache.resume();
 }

+static inline bool outer_tz_mutex(unsigned long addr)
+{
+	if (outer_cache.tz_mutex)
+		return outer_cache.tz_mutex(addr);
+	return false;
+}
+
 #else

 static inline void outer_inv_range(phys_addr_t start, phys_addr_t end)
@@ -92,6 +100,7 @@ static inline void outer_flush_range(phys_addr_t start, phys_addr_t end)
 static inline void outer_flush_all(void) { }
 static inline void outer_inv_all(void) { }
 static inline void outer_disable(void) { }
+static inline bool outer_tz_mutex(unsigned long addr) { return false; }

 #endif

diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index a53fd2a..eacdc74 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -41,6 +41,26 @@ struct l2x0_of_data {
 	void (*resume)(void);
 };

+/*
+ * arch_spinlock (single 32bit DDR mutex cell) pointer to synchronise
+ * L2CC maintenance between linux world and secure world (ARM TZ).
+ */
+arch_spinlock_t *l2x0_tz_mutex;
+
+#define l2x0_spin_lock_irqsave(flags) \
+	do { \
+		raw_spin_lock_irqsave(&l2x0_lock, flags); \
+		if (l2x0_tz_mutex) \
+			arch_spin_lock(l2x0_tz_mutex); \
+	} while (0)
+
+#define l2x0_spin_unlock_irqrestore(flags) \
+	do { \
+		if (l2x0_tz_mutex) \
+			arch_spin_unlock(l2x0_tz_mutex); \
+		raw_spin_unlock_irqrestore(&l2x0_lock, flags); \
+	} while (0)
+
 static inline void cache_wait_way(void __iomem *reg, unsigned long mask)
 {
 	/* wait for cache operation by line or way to complete */
@@ -126,9 +146,9 @@ static void l2x0_cache_sync(void)
 {
 	unsigned long flags;

-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void __l2x0_flush_all(void)
@@ -145,9 +165,9 @@ static void l2x0_flush_all(void)
 	unsigned long flags;

 	/* clean all ways */
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	__l2x0_flush_all();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_clean_all(void)
@@ -155,11 +175,11 @@ static void l2x0_clean_all(void)
 	unsigned long flags;

 	/* clean all ways */
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_WAY);
 	cache_wait_way(l2x0_base + L2X0_CLEAN_WAY, l2x0_way_mask);
 	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_inv_all(void)
@@ -167,13 +187,13 @@ static void l2x0_inv_all(void)
 	unsigned long flags;

 	/* invalidate all ways */
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	/* Invalidating when L2 is enabled is a nono */
 	BUG_ON(readl(l2x0_base + L2X0_CTRL) & 1);
 	writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_INV_WAY);
 	cache_wait_way(l2x0_base + L2X0_INV_WAY, l2x0_way_mask);
 	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_inv_range(unsigned long start, unsigned long end)
@@ -181,7 +201,7 @@ static void l2x0_inv_range(unsigned long start, unsigned long end)
 	void __iomem *base = l2x0_base;
 	unsigned long flags;

-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	if (start & (CACHE_LINE_SIZE - 1)) {
 		start &= ~(CACHE_LINE_SIZE - 1);
 		debug_writel(0x03);
@@ -206,13 +226,13 @@ static void l2x0_inv_range(unsigned long start, unsigned long end)
 		}

 		if (blk_end < end) {
-			raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-			raw_spin_lock_irqsave(&l2x0_lock, flags);
+			l2x0_spin_unlock_irqrestore(flags);
+			l2x0_spin_lock_irqsave(flags);
 		}
 	}
 	cache_wait(base + L2X0_INV_LINE_PA, 1);
 	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_clean_range(unsigned long start, unsigned long end)
@@ -225,7 +245,7 @@ static void l2x0_clean_range(unsigned long start, unsigned long end)
 		return;
 	}

-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	start &= ~(CACHE_LINE_SIZE - 1);
 	while (start < end) {
 		unsigned long blk_end = start + min(end - start, 4096UL);
@@ -236,13 +256,13 @@ static void l2x0_clean_range(unsigned long start, unsigned long end)
 		}

 		if (blk_end < end) {
-			raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-			raw_spin_lock_irqsave(&l2x0_lock, flags);
+			l2x0_spin_unlock_irqrestore(flags);
+			l2x0_spin_lock_irqsave(flags);
 		}
 	}
 	cache_wait(base + L2X0_CLEAN_LINE_PA, 1);
 	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_flush_range(unsigned long start, unsigned long end)
@@ -255,7 +275,7 @@ static void l2x0_flush_range(unsigned long start, unsigned long end)
 		return;
 	}

-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	start &= ~(CACHE_LINE_SIZE - 1);
 	while (start < end) {
 		unsigned long blk_end = start + min(end - start, 4096UL);
@@ -268,24 +288,24 @@ static void l2x0_flush_range(unsigned long start, unsigned long end)
 		debug_writel(0x00);

 		if (blk_end < end) {
-			raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-			raw_spin_lock_irqsave(&l2x0_lock, flags);
+			l2x0_spin_unlock_irqrestore(flags);
+			l2x0_spin_lock_irqsave(flags);
 		}
 	}
 	cache_wait(base + L2X0_CLEAN_INV_LINE_PA, 1);
 	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_disable(void)
 {
 	unsigned long flags;

-	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	l2x0_spin_lock_irqsave(flags);
 	__l2x0_flush_all();
 	writel_relaxed(0, l2x0_base + L2X0_CTRL);
 	dsb();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	l2x0_spin_unlock_irqrestore(flags);
 }

 static void l2x0_unlock(u32 cache_id)
@@ -307,6 +327,28 @@ static void l2x0_unlock(u32 cache_id)
 	}
 }

+/* Enable/disable external mutex shared with TZ code */
+static bool l2x0_tz_mutex_cfg(unsigned long addr)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l2x0_lock, flags);
+
+	if (addr && l2x0_tz_mutex && (addr != (uint)l2x0_tz_mutex)) {
+		raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+		pr_err("%s: a TZ mutex is already enabled\n", __func__);
+		return false;
+	}
+
+	l2x0_tz_mutex = (arch_spinlock_t *)addr;
+	/* insure mutex ptr is updated before lock is released */
+	smp_wmb();
+
+	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+	pr_debug("\n%s: %sable TZ mutex\n\n", __func__, (addr) ? "en" : "dis");
+	return true;
+}
+
 void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
 {
 	u32 aux;
@@ -380,6 +422,7 @@ void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
 	outer_cache.inv_all = l2x0_inv_all;
 	outer_cache.disable = l2x0_disable;
 	outer_cache.set_debug = l2x0_set_debug;
+	outer_cache.tz_mutex = l2x0_tz_mutex_cfg;

 	printk(KERN_INFO "%s cache controller enabled\n", type);
 	printk(KERN_INFO "l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d B\n",
-- 
1.7.11.3
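[Editor's sketch, not part of the patch] For context, the following minimal
example shows how platform code might hand a TZ-shared mutex cell to the L2
driver through the outer_tz_mutex() hook added above. The address macro
PLAT_TZ_MUTEX_PHYS, the use of ioremap() for a reserved DDR word, and the
late_initcall wiring are illustrative assumptions; how the mutex address is
agreed with the secure world is platform specific.

/*
 * Illustrative sketch only: register a TZ-shared L2CC mutex with the l2x0
 * driver via the outer_tz_mutex() hook introduced by this patch.
 * PLAT_TZ_MUTEX_PHYS is a hypothetical reserved 32-bit DDR word whose
 * address has been agreed with the secure world (e.g. reported by secure
 * firmware at boot).
 */
#include <linux/init.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <asm/outercache.h>

#define PLAT_TZ_MUTEX_PHYS	0x3ffff000UL	/* hypothetical reserved word */

static int __init plat_l2cc_tz_mutex_init(void)
{
	void __iomem *mutex_va;

	/* Map the reserved cell so the kernel can spin on it */
	mutex_va = ioremap(PLAT_TZ_MUTEX_PHYS, sizeof(u32));
	if (!mutex_va)
		return -ENOMEM;

	/* Ask the L2 driver to take this mutex around L2CC maintenance */
	if (!outer_tz_mutex((unsigned long)mutex_va)) {
		iounmap(mutex_va);
		return -EBUSY;
	}

	return 0;
}
/* Must run after l2x0_init() has installed the tz_mutex hook */
late_initcall(plat_l2cc_tz_mutex_init);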