From patchwork Mon Jun 13 15:03:05 2016
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 9173369
From: Vladimir Murzin
To: linux@arm.linux.org.uk
Cc: kernel@pengutronix.de, manabian@gmail.com, stefan@agner.ch,
	kbuild-all@01.org, mcoquelin.stm32@gmail.com,
	alexandre.torgue@gmail.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 06/10] ARM: V7M: Implement cache macros for V7M
Date: Mon, 13 Jun 2016 16:03:05 +0100
Message-Id: <1465830189-20128-7-git-send-email-vladimir.murzin@arm.com>
In-Reply-To: <1465830189-20128-1-git-send-email-vladimir.murzin@arm.com>
References: <1465830189-20128-1-git-send-email-vladimir.murzin@arm.com>

From: Jonathan Austin

This commit implements the cache operation macros for V7M, paving the way
for caches to be used on V7M in a future commit.

Because the cache operations in V7M are memory mapped, most operations
require an extra register compared to the V7 versions, where the type of
operation is encoded in the instruction rather than in the address that
is written to. Thus, an extra register argument has been added to the
cache operation macros; it is required on V7M but ignored/unused on V7.

In almost all cases there was a spare temporary register available, but
in places where the register allocation was tighter the M_CLASS macro
has been used to avoid clobbering additional registers.
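The memory-mapped scheme described above can be sketched in C. The register names, offsets, and the fake register file below are illustrative assumptions for the sketch, not the real SCB layout from asm/v7m.h:

```c
#include <assert.h>
#include <stdint.h>

/* On V7-A/R the operation is encoded in the instruction itself, e.g.
 *     mcr p15, 0, <Rt>, c7, c10, 1   @ DCCMVAC, MVA in Rt
 * On V7M the operation is selected by *which* SCB register the MVA is
 * stored to, so a second register is needed to hold that address.
 * The SCB register file is modelled here as a plain array so the sketch
 * runs anywhere; this enum is hypothetical, not the real SCB layout. */
enum scb_reg { SCB_DCIMVAC, SCB_DCCMVAC, SCB_DCCIMVAC, SCB_NREGS };

static volatile uint32_t fake_scb[SCB_NREGS]; /* stands in for BASEADDR_V7M_SCB */

static inline void v7m_cacheop(enum scb_reg op, uint32_t mva)
{
    /* the extra "tmp" argument in the patch holds &fake_scb[op] */
    fake_scb[op] = mva;
}
```

A caller cleans a line to PoC with `v7m_cacheop(SCB_DCCMVAC, addr)`, mirroring the two-register `dccmvac rt, tmp` macro the patch adds.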
Signed-off-by: Jonathan Austin
Signed-off-by: Vladimir Murzin
---
Changelog: RFC -> v1
 - M_CLASS() macro is used instead of THUMB() where appropriate
 - open-coded implementation of dccmvau and icimvau instead of macros,
   since the latter would mark the wrong instruction as user-accessible
   (per Russell)
 - dccimvac is updated per Russell's preference

 arch/arm/mm/cache-v7.S         |  48 ++++++++++----
 arch/arm/mm/v7-cache-macros.S  |  21 +++---
 arch/arm/mm/v7m-cache-macros.S | 142 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 189 insertions(+), 22 deletions(-)
 create mode 100644 arch/arm/mm/v7m-cache-macros.S

diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 49b9bfe..4677d37 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -17,7 +17,11 @@
 #include
 #include "proc-macros.S"
+#ifdef CONFIG_CPU_V7M
+#include "v7m-cache-macros.S"
+#else
 #include "v7-cache-macros.S"
+#endif

 /*
  * The secondary kernel init calls v7_flush_dcache_all before it enables
@@ -35,7 +39,7 @@ ENTRY(v7_invalidate_l1)
 	mov	r0, #0
-	write_csselr r0
+	write_csselr r0, r1
 	read_ccsidr r0
 	movw	r1, #0x7fff
 	and	r2, r1, r0, lsr #13
@@ -56,7 +60,7 @@ ENTRY(v7_invalidate_l1)
 	mov	r5, r3, lsl r1
 	mov	r6, r2, lsl r0
 	orr	r5, r5, r6		@ Reg = (Temp<<WayShift)|(NumSets<<SetShift)

+ *
+ * The 'unused' parameters are to keep the macro signatures in sync with the
+ * V7M versions, which require a tmp register for certain operations (see
+ * v7m-cache-macros.S). GAS supports omitting optional arguments but doesn't
+ * happily ignore additional undefined ones.
 */
 .macro	read_ctr, rt
@@ -29,21 +34,21 @@
 	mrc	p15, 1, \rt, c0, c0, 1
 .endm

-.macro	write_csselr, rt
+.macro	write_csselr, rt, unused
 	mcr	p15, 2, \rt, c0, c0, 0
 .endm

 /*
  * dcisw: invalidate data cache by set/way
  */
-.macro	dcisw, rt
+.macro	dcisw, rt, unused
 	mcr	p15, 0, \rt, c7, c6, 2
 .endm

 /*
  * dccisw: clean and invalidate data cache by set/way
  */
-.macro	dccisw, rt
+.macro	dccisw, rt, unused
 	mcr	p15, 0, \rt, c7, c14, 2
 .endm

@@ -51,7 +56,7 @@
  * dccimvac: Clean and invalidate data cache line by MVA to PoC.
  */
 .irp	c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
-.macro	dccimvac\c, rt
+.macro	dccimvac\c, rt, unused
 	mcr\c	p15, 0, \rt, c7, c14, 1
 .endm
 .endr

@@ -59,28 +64,28 @@
 /*
  * dcimvac: Invalidate data cache line by MVA to PoC
  */
-.macro	dcimvac, rt
+.macro	dcimvac, rt, unused
 	mcr	p15, 0, r0, c7, c6, 1
 .endm

 /*
  * dccmvau: Clean data cache line by MVA to PoU
  */
-.macro	dccmvau, rt
+.macro	dccmvau, rt, unused
 	mcr	p15, 0, \rt, c7, c11, 1
 .endm

 /*
  * dccmvac: Clean data cache line by MVA to PoC
  */
-.macro	dccmvac, rt
+.macro	dccmvac, rt, unused
 	mcr	p15, 0, \rt, c7, c10, 1
 .endm

 /*
  * icimvau: Invalidate instruction caches by MVA to PoU
  */
-.macro	icimvau, rt
+.macro	icimvau, rt, unused
 	mcr	p15, 0, \rt, c7, c5, 1
 .endm

diff --git a/arch/arm/mm/v7m-cache-macros.S b/arch/arm/mm/v7m-cache-macros.S
new file mode 100644
index 0000000..8c1999a
--- /dev/null
+++ b/arch/arm/mm/v7m-cache-macros.S
@@ -0,0 +1,142 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) 2012 ARM Limited
+ *
+ * Author: Jonathan Austin
+ */
+#include "asm/v7m.h"
+#include "asm/assembler.h"
+
+/* Generic V7M read/write macros for memory mapped cache operations */
+.macro	v7m_cache_read, rt, reg
+	movw	\rt, #:lower16:BASEADDR_V7M_SCB + \reg
+	movt	\rt, #:upper16:BASEADDR_V7M_SCB + \reg
+	ldr	\rt, [\rt]
+.endm
+
+.macro	v7m_cacheop, rt, tmp, op, c = al
+	movw\c	\tmp, #:lower16:BASEADDR_V7M_SCB + \op
+	movt\c	\tmp, #:upper16:BASEADDR_V7M_SCB + \op
+	str\c	\rt, [\tmp]
+.endm
+
+/* read/write cache properties */
+.macro	read_ctr, rt
+	v7m_cache_read \rt, V7M_SCB_CTR
+.endm
+
+.macro	read_ccsidr, rt
+	v7m_cache_read \rt, V7M_SCB_CCSIDR
+.endm
+
+.macro	read_clidr, rt
+	v7m_cache_read \rt, V7M_SCB_CLIDR
+.endm
+
+.macro	write_csselr, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_CSSELR
+.endm
+
+/*
+ * dcisw: Invalidate data cache by set/way
+ */
+.macro dcisw, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCISW
+.endm
+
+/*
+ * dccisw: Clean and invalidate data cache by set/way
+ */
+.macro dccisw, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCISW
+.endm
+
+/*
+ * dccimvac: Clean and invalidate data cache line by MVA to PoC.
+ */
+.irp c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
+.macro dccimvac\c, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCIMVAC, \c
+.endm
+.endr
+
+/*
+ * dcimvac: Invalidate data cache line by MVA to PoC
+ */
+.macro dcimvac, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCIMVAC
+.endm
+
+/*
+ * dccmvau: Clean data cache line by MVA to PoU
+ */
+.macro dccmvau, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCMVAU
+.endm
+
+/*
+ * dccmvac: Clean data cache line by MVA to PoC
+ */
+.macro dccmvac, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_DCCMVAC
+.endm
+
+/*
+ * icimvau: Invalidate instruction caches by MVA to PoU
+ */
+.macro icimvau, rt, tmp
+	v7m_cacheop \rt, \tmp, V7M_SCB_ICIMVAU
+.endm
+
+/*
+ * Invalidate the icache, inner shareable if SMP, invalidate BTB for UP.
+ * rt data ignored by ICIALLU(IS), so can be used for the address
+ */
+.macro invalidate_icache, rt
+	v7m_cacheop \rt, \rt, V7M_SCB_ICIALLU
+	mov \rt, #0
+.endm
+
+/*
+ * Invalidate the BTB, inner shareable if SMP.
+ * rt data ignored by BPIALL, so it can be used for the address
+ */
+.macro invalidate_bp, rt
+	v7m_cacheop \rt, \rt, V7M_SCB_BPIALL
+	mov \rt, #0
+.endm
+
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register
+ * on ARMv7.
+ */
+.macro	dcache_line_size, reg, tmp
+	read_ctr \tmp
+	lsr	\tmp, \tmp, #16
+	and	\tmp, \tmp, #0xf		@ cache line size encoding
+	mov	\reg, #4			@ bytes per word
+	mov	\reg, \reg, lsl \tmp	@ actual cache line size
+.endm
+
+/*
+ * icache_line_size - get the minimum I-cache line size from the CTR register
+ * on ARMv7.
+ */
+.macro	icache_line_size, reg, tmp
+	read_ctr \tmp
+	and	\tmp, \tmp, #0xf		@ cache line size encoding
+	mov	\reg, #4			@ bytes per word
+	mov	\reg, \reg, lsl \tmp	@ actual cache line size
+.endm
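The line-size macros at the end of the new file decode the CTR's DminLine and IminLine fields, each the log2 of the smallest cache line measured in 4-byte words. A minimal C sketch of the same arithmetic (function names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* CTR[19:16] (DminLine) and CTR[3:0] (IminLine) hold log2 of the smallest
 * cache line in words; the macros compute 4 << field, i.e. words -> bytes. */
static inline uint32_t dcache_line_size_bytes(uint32_t ctr)
{
    uint32_t dminline = (ctr >> 16) & 0xfu; /* cache line size encoding */
    return 4u << dminline;                  /* bytes per word << encoding */
}

static inline uint32_t icache_line_size_bytes(uint32_t ctr)
{
    return 4u << (ctr & 0xfu);              /* same decode, IminLine field */
}
```

For example, a CTR with DminLine = 3 yields a 32-byte minimum data cache line.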