From patchwork Wed Mar 22 11:38:25 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Srinivas Ramana <sramana@codeaurora.org>
X-Patchwork-Id: 9638619
X-Patchwork-Delegate: agross@codeaurora.org
From: Srinivas Ramana <sramana@codeaurora.org>
To: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, Neeraj Upadhyay, Srinivas Ramana
Subject: [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
Date: Wed, 22 Mar 2017 17:08:25 +0530
Message-Id: <1490182705-14243-1-git-send-email-sramana@codeaurora.org>
X-Mailer: git-send-email 1.8.2.1
In-Reply-To: <904FACBF-3DFE-4DDE-ACB5-7109A137D477@linaro.org>
References: <904FACBF-3DFE-4DDE-ACB5-7109A137D477@linaro.org>
From: Neeraj Upadhyay

If the kernel image extends across an alignment boundary, the existing
code increases the KASLR offset by the size of the kernel image, and
the offset is masked after this correction. There are cases where,
after masking, the kernel image still extends across the boundary.
This eventually results in only a 2MB block getting mapped while
creating the page tables, which causes data aborts when unmapped
regions are accessed during the second relocation (with the KASLR
offset) in __primary_switch.

To fix this problem, round up the kernel image size to the swapper
block size before adding it as the correction.

For example, consider the case below, where the kernel image still
crosses a 1GB alignment boundary after the offset is masked, and which
is fixed by rounding up the kernel image size:

  SWAPPER_TABLE_SHIFT = 30
  Swapper using section maps with section size 2MB.
  CONFIG_PGTABLE_LEVELS = 3
  VA_BITS = 39

  _text  : 0xffffff8008080000
  _end   : 0xffffff800aa1b000
  offset : 0x1f35600000
  mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)

  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  offset after existing correction (before mask) = 0x1f37f9b000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  offset (after mask) = 0x1f37e00000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  new offset with rounding up = 0x1f38000000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Signed-off-by: Neeraj Upadhyay
Signed-off-by: Srinivas Ramana
Reviewed-by: Ard Biesheuvel
---
 arch/arm64/kernel/kaslr.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 769f24ef628c..d7e90d97f5c4 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -131,11 +131,15 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 	/*
 	 * The kernel Image should not extend across a 1GB/32MB/512MB alignment
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
-	 * happens, increase the KASLR offset by the size of the kernel image.
+	 * happens, increase the KASLR offset by the size of the kernel image
+	 * rounded up by SWAPPER_BLOCK_SIZE.
 	 */
 	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
-		offset = (offset + (u64)(_end - _text)) & mask;
+	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) {
+		u64 kimg_sz = _end - _text;
+		offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
+				& mask;
+	}
 
 	if (IS_ENABLED(CONFIG_KASAN))
 		/*
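
For reference, the arithmetic in the example above can be reproduced in
userspace. The sketch below is illustrative only and not part of the
patch: round_up() and the SWAPPER_*/VA_BITS constants are redefined
locally, assuming the 4KB-granule, 3-level configuration from the
example.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define VA_BITS			39
#define SZ_2M			0x200000ULL
#define SWAPPER_TABLE_SHIFT	30		/* 1GB entries at this level */
#define SWAPPER_BLOCK_SIZE	SZ_2M		/* section maps, 4KB granule */

/* Same result as the kernel's round_up() for a power-of-two alignment. */
#define round_up(x, align)	(((x) + (align) - 1) & ~((align) - 1))

int main(void)
{
	uint64_t text_va = 0xffffff8008080000ULL;	/* _text from the example */
	uint64_t end_va  = 0xffffff800aa1b000ULL;	/* _end from the example  */
	uint64_t offset  = 0x1f35600000ULL;
	uint64_t mask    = ((1ULL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
	uint64_t kimg_sz = end_va - text_va;

	/* The image crosses a 1GB boundary at this offset. */
	if (((text_va + offset) >> SWAPPER_TABLE_SHIFT) !=
	    ((end_va + offset) >> SWAPPER_TABLE_SHIFT)) {
		/* Old correction: add the raw image size, then mask.
		 * Masking can pull the offset back across the boundary. */
		uint64_t old_off = (offset + kimg_sz) & mask;
		/* Fixed correction: round the size up to SWAPPER_BLOCK_SIZE
		 * first, so the masked result still clears the boundary. */
		uint64_t new_off = (offset +
				    round_up(kimg_sz, SWAPPER_BLOCK_SIZE)) & mask;

		printf("old offset 0x%" PRIx64 ": idx 0x%" PRIx64 "..0x%" PRIx64 "\n",
		       old_off, (text_va + old_off) >> SWAPPER_TABLE_SHIFT,
		       (end_va + old_off) >> SWAPPER_TABLE_SHIFT);
		printf("new offset 0x%" PRIx64 ": idx 0x%" PRIx64 "..0x%" PRIx64 "\n",
		       new_off, (text_va + new_off) >> SWAPPER_TABLE_SHIFT,
		       (end_va + new_off) >> SWAPPER_TABLE_SHIFT);
	}
	return 0;
}

Built with a plain C compiler, this prints indices
0x3fffffe7c..0x3fffffe7d for the old offset (still crossing the 1GB
boundary) and 0x3fffffe7d..0x3fffffe7d for the new one, matching the
values in the example.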