From patchwork Wed Mar 22 08:55:43 2017
X-Patchwork-Submitter: Srinivas Ramana
X-Patchwork-Id: 9638311
X-Patchwork-Delegate: agross@codeaurora.org
From: Srinivas Ramana
To: catalin.marinas@arm.com, will.deacon@arm.com, ard.biesheuvel@linaro.org
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, Neeraj Upadhyay, Srinivas Ramana
Subject: [PATCH] arm64: kaslr: Add 2MB correction for aligning kernel image
Date: Wed, 22 Mar 2017 14:25:43 +0530
Message-Id: <1490172943-826-1-git-send-email-sramana@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Neeraj Upadhyay

If the kernel image extends across an alignment boundary, the existing
code increases the KASLR offset by the size of the kernel image, and
the offset is masked after this correction. In some cases the kernel
image still extends across the boundary after masking, which eventually
results in only a 2MB block getting mapped while creating the page
tables. This leads to data aborts when unmapped regions are accessed
during the second relocation (with the KASLR offset) in
__primary_switch.

To fix this problem, add a 2MB correction to the offset, along with the
existing kernel-image-size correction, before applying the mask.

For example, consider the case below, where the kernel image still
crosses a 1GB alignment boundary after masking the offset; this is
fixed by adding the 2MB correction:

  SWAPPER_TABLE_SHIFT = 30
  Swapper using section maps with section size 2MB.
  CONFIG_PGTABLE_LEVELS = 3
  VA_BITS = 39

  _text  : 0xffffff8008080000
  _end   : 0xffffff800aa1b000
  offset : 0x1f35600000
  mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)

  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  offset after existing correction (before mask) = 0x1f37f9b000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  offset (after mask) = 0x1f37e00000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

  new offset w/ 2MB correction (before mask) = 0x1f3819b000
  new offset w/ 2MB correction (after mask)  = 0x1f38000000
  (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
  (_end + offset)  >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Signed-off-by: Neeraj Upadhyay
Signed-off-by: Srinivas Ramana
---
 arch/arm64/kernel/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 769f24ef628c..7b8af985e497 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -135,7 +135,7 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 	 */
 	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
 	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
-		offset = (offset + (u64)(_end - _text)) & mask;
+		offset = (offset + (u64)(_end - _text) + SZ_2M) & mask;

 	if (IS_ENABLED(CONFIG_KASAN))
 		/*