From patchwork Wed Aug 8 12:23:46 2012
X-Patchwork-Submitter: tip-bot for Dave Martin
X-Patchwork-Id: 1294891
From: Dave Martin
To: linux-arm-kernel@lists.infradead.org
Cc: Nicolas Pitre, Christoffer Dall, Ian Campbell, Stefano Stabellini, Marc Zyngier, Rusty Russell, Will Deacon, Jon Medhurst, Rabin Vincent, patches@linaro.org
Subject: [PATCH v2 REPOST 1/4] ARM: opcodes: Don't define the thumb32 byteswapping macros for BE32
Date: Wed, 8 Aug 2012 13:23:46 +0100
Message-Id: <1344428629-12787-2-git-send-email-dave.martin@linaro.org>
In-Reply-To: <1344428629-12787-1-git-send-email-dave.martin@linaro.org>
References: <1344428629-12787-1-git-send-email-dave.martin@linaro.org>

The existing __mem_to_opcode_thumb32() is incorrect for BE32 platforms. However, these don't support Thumb-2 kernels, so this option is not so relevant for those platforms anyway.

This operation is complicated by the lack of unaligned memory access support prior to ARMv6.
Rather than provide a "working" macro which probably won't get used (or worse, will get misused), this patch removes the macro for BE32 kernels.

People manipulating Thumb opcodes prior to ARMv6 should almost certainly be splitting these operations into halfwords anyway, using __opcode_thumb32_{first,second,compose}() and the 16-bit opcode transformations.

Signed-off-by: Dave Martin
Acked-by: Nicolas Pitre

---
 arch/arm/include/asm/opcodes.h |   15 ++++++++++++++-
 1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/opcodes.h b/arch/arm/include/asm/opcodes.h
index 19c48de..6bf54f9 100644
--- a/arch/arm/include/asm/opcodes.h
+++ b/arch/arm/include/asm/opcodes.h
@@ -49,18 +49,31 @@ extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
 
 #include <linux/swab.h>
 
 #ifdef CONFIG_CPU_ENDIAN_BE8
+
 #define __opcode_to_mem_arm(x) swab32(x)
 #define __opcode_to_mem_thumb16(x) swab16(x)
 #define __opcode_to_mem_thumb32(x) swahb32(x)
-#else
+
+#else /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __opcode_to_mem_arm(x) ((u32)(x))
 #define __opcode_to_mem_thumb16(x) ((u16)(x))
+#ifndef CONFIG_CPU_ENDIAN_BE32
+/*
+ * On BE32 systems, using 32-bit accesses to store Thumb instructions will not
+ * work in all cases, due to alignment constraints.  For now, a correct
+ * version is not provided for BE32.
+ */
 #define __opcode_to_mem_thumb32(x) swahw32(x)
 #endif
+
+#endif /* ! CONFIG_CPU_ENDIAN_BE8 */
+
 #define __mem_to_opcode_arm(x) __opcode_to_mem_arm(x)
 #define __mem_to_opcode_thumb16(x) __opcode_to_mem_thumb16(x)
+#ifndef CONFIG_CPU_ENDIAN_BE32
 #define __mem_to_opcode_thumb32(x) __opcode_to_mem_thumb32(x)
+#endif
 
 /* Operations specific to Thumb opcodes */