From patchwork Thu Jun 19 10:49:20 2014
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCHv3 1/4] arm64: head.S: remove unnecessary function alignment
Date: Thu, 19 Jun 2014 11:49:20 +0100
Message-Id: <1403174963-10730-2-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1403174963-10730-1-git-send-email-mark.rutland@arm.com>
References: <1403174963-10730-1-git-send-email-mark.rutland@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>, rob.herring@linaro.org,
	lauraa@codeaurora.org, peter.maydell@linaro.org, geoff@infradead.org,
	catalin.marinas@arm.com, will.deacon@arm.com, leif.lindholm@linaro.org,
	marc.zyngier@arm.com, kevin.hilman@linaro.org, ijc@hellion.org.uk,
	trini@ti.com, dave.martin@arm.com

Currently __turn_mmu_on is aligned to 64 bytes to ensure that it
doesn't span any page boundary, which simplifies the idmap and spares
us from requiring an additional page table to map half of the function.
In keeping with other important requirements in architecture code, this
fact is undocumented.

Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.
This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required
alignment of the entire .head.text section from 64 bytes to 16 bytes,
though it may still be aligned to a larger value depending on
TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/kernel/head.S | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a96d3a6..7ec7817 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -456,8 +456,13 @@ ENDPROC(__enable_mmu)
  * x27 = *virtual* address to jump to upon completion
  *
  * other registers depend on the function called upon completion
+ *
+ * We align the entire function to the smallest power of two larger than it to
+ * ensure it fits within a single block map entry. Otherwise were PHYS_OFFSET
+ * close to the end of a 512MB or 1GB block we might require an additional
+ * table to map the entire function.
  */
-	.align	6
+	.align	4
 __turn_mmu_on:
 	msr	sctlr_el1, x0
 	isb
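
As background on the directive semantics above: for AArch64 targets,
GNU as treats the operand of .align as a power-of-two exponent
(equivalent to .p2align), so ".align 6" pads to a 64-byte boundary and
".align 4" to a 16-byte boundary, the smallest power of two larger than
the 12-byte function. A minimal standalone sketch of the resulting
layout follows; it is illustrative only, not part of the patch: the
label name is hypothetical, and the final br is inferred from the
"jump to x27" behaviour described in the comment block.

	.text
	.align	4		// 2^4 = 16-byte alignment
aligned_fn:			// hypothetical stand-in for __turn_mmu_on
	msr	sctlr_el1, x0	// 4 bytes: write the new SCTLR_EL1 value
	isb			// 4 bytes: synchronise the context change
	br	x27		// 4 bytes: jump to the address held in x27

Since the three instructions total 12 bytes and start on a 16-byte
boundary, the function cannot cross a 16-byte boundary, and therefore
cannot straddle any larger naturally-aligned region such as a 512MB or
1GB block map entry.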