From patchwork Tue Nov 6 12:41:51 2012
From: Dave Martin
To: linux-arm-kernel@lists.infradead.org
Cc: Nicolas Pitre, Rob Herring, patches@linaro.org
Subject: [PATCH] ARM: decompressor: Enable unaligned memory access for v6 and above
Date: Tue, 6 Nov 2012 12:41:51 +0000
Message-Id: <1352205711-15787-1-git-send-email-dave.martin@linaro.org>
X-Mailer: git-send-email 1.7.4.1

Modern GCC can generate code which makes use of the CPU's native
unaligned memory access capabilities. This is useful for the C
decompressor implementations used for unpacking compressed kernels.

This patch disables alignment faults and enables the v6 unaligned
access model on CPUs which support these features (i.e., v6 and
later), allowing full unaligned access support for C code in the
decompressor.
The decompressor C code must not be built to assume that unaligned
access works if support for v5 or older platforms is included in the
kernel.

For correct code generation, C decompressor code must always use the
get_unaligned and put_unaligned accessors when dealing with unaligned
pointers, regardless of this patch.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre
---
This is the same as the previous post, with an additional comment in
the commit message regarding the use of {get,put}_unaligned, as
suggested by Nico.

Tested on ARM1136JF-S (Integrator/CP) and ARM1176JZF-S (RealView
PB1176JZF-S). ARM1176 is like v7 regarding the MIDR and SCTLR
alignment control bits, so this tests the v7 code path.

 arch/arm/boot/compressed/head.S |   14 +++++++++++++-
 1 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 90275f0..49ca86e 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -652,6 +652,15 @@ __setup_mmu:	sub	r3, r4, #16384		@ Page directory size
 		mov	pc, lr
 ENDPROC(__setup_mmu)
 
+@ Enable unaligned access on v6, to allow better code generation
+@ for the decompressor C code:
+__armv6_mmu_cache_on:
+		mrc	p15, 0, r0, c1, c0, 0	@ read SCTLR
+		bic	r0, r0, #2		@ A (no unaligned access fault)
+		orr	r0, r0, #1 << 22	@ U (v6 unaligned access model)
+		mcr	p15, 0, r0, c1, c0, 0	@ write SCTLR
+		b	__armv4_mmu_cache_on
+
 __arm926ejs_mmu_cache_on:
 #ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
 		mov	r0, #4			@ put dcache in WT mode
@@ -694,6 +703,9 @@ __armv7_mmu_cache_on:
 		bic	r0, r0, #1 << 28	@ clear SCTLR.TRE
 		orr	r0, r0, #0x5000		@ I-cache enable, RR cache replacement
 		orr	r0, r0, #0x003c		@ write buffer
+		bic	r0, r0, #2		@ A (no unaligned access fault)
+		orr	r0, r0, #1 << 22	@ U (v6 unaligned access model)
+						@ (needed for ARM1176)
 #ifdef CONFIG_MMU
 #ifdef CONFIG_CPU_ENDIAN_BE8
 		orr	r0, r0, #1 << 25	@ big-endian page tables
@@ -914,7 +926,7 @@ proc_types:
 		.word	0x0007b000		@ ARMv6
 		.word	0x000ff000
-		W(b)	__armv4_mmu_cache_on
+		W(b)	__armv6_mmu_cache_on
 		W(b)	__armv4_mmu_cache_off
 		W(b)	__armv6_mmu_cache_flush