From patchwork Wed Feb 15 15:38:00 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9574311
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, mark.rutland@arm.com,
	will.deacon@arm.com, catalin.marinas@arm.com, keescook@chromium.org,
	labbott@fedoraproject.org, james.morse@arm.com
Cc: kernel-hardening@lists.openwall.com, Ard Biesheuvel
Date: Wed, 15 Feb 2017 15:38:00 +0000
Message-Id: <1487173081-13425-3-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1487173081-13425-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1487173081-13425-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [RFC PATCH 2/3] arm64: boot: align __inittext with swapper block on relocatable kernels

The relocatable kernel needs to do a relocation pass regardless of whether
it was loaded at the virtual offset it was linked at.
This means we could completely ignore TEXT_OFFSET if we wanted to (and this
is actually what the KASLR aware image loader in the stub does already), as
long as we adhere to the segment alignment, which is at least 64 KB.

Whether the .init.text region requires a RWX mapping depends on whether it
shares a swapper block with other sections that require a writable mapping.
Since the TEXT_OFFSET field gives us control over the placement of the Image
with respect to a swapper block boundary, we can override its value with a
value that puts __inittext_begin right on a swapper block boundary as well.
This removes the need to use RWX mappings entirely, given that relocatable
kernels have at least a couple of MBs of .rela data, which sits between the
executable and the writable bits of the __init segment.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/kernel/image.h | 23 +++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
index c7fcb232fe47..98e191cd97b1 100644
--- a/arch/arm64/kernel/image.h
+++ b/arch/arm64/kernel/image.h
@@ -62,13 +62,30 @@
 			 (__HEAD_FLAG_PHYS_BASE << 3))
 
 /*
+ * The relocatable kernel does not care about TEXT_OFFSET, as long as the
+ * image is loaded at the correct segment alignment. So let's tweak the
+ * effective TEXT_OFFSET header field so that __init_begin coincides with
+ * a swapper block boundary: this way, we will not need to create any RWX
+ * mappings for the kernel, even in the earliest stages. (Note that this is
+ * already guaranteed if SWAPPER_BLOCK_SIZE <= SEGMENT_ALIGN.)
+ * If it turns out that the boot loader ignores the TEXT_OFFSET field, we can
+ * happily boot as before, with the only difference being that we had to use
+ * an early RWX mapping for .init.text.
+ */
+#if defined(CONFIG_RELOCATABLE) && SWAPPER_BLOCK_SIZE > PAGE_SIZE
+__eff_text_offset = SWAPPER_BLOCK_SIZE - ((__init_begin - TEXT_OFFSET) & (SWAPPER_BLOCK_SIZE - 1));
+#else
+__eff_text_offset = TEXT_OFFSET;
+#endif
+
+/*
  * These will output as part of the Image header, which should be little-endian
  * regardless of the endianness of the kernel. While constant values could be
  * endian swapped in head.S, all are done here for consistency.
  */
-#define HEAD_SYMBOLS						\
-	DEFINE_IMAGE_LE64(_kernel_size_le, _end - _text);	\
-	DEFINE_IMAGE_LE64(_kernel_offset_le, TEXT_OFFSET);	\
+#define HEAD_SYMBOLS						\
+	DEFINE_IMAGE_LE64(_kernel_size_le, _end - _text);	\
+	DEFINE_IMAGE_LE64(_kernel_offset_le, __eff_text_offset);	\
 	DEFINE_IMAGE_LE64(_kernel_flags_le, __HEAD_FLAGS);
 
 #ifdef CONFIG_EFI
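
Not part of the patch: a minimal standalone C sketch of the arithmetic above,
using made-up example values for KIMAGE_VADDR, TEXT_OFFSET and the distance
between _text and __init_begin (the real values come from the configuration
and the linker script). It mirrors the __eff_text_offset expression in the
hunk and checks that, once a loader that honours the header places the Image
at a 2 MB aligned physical base plus the advertised offset, __init_begin
lands exactly on a swapper block boundary.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define SWAPPER_BLOCK_SIZE	0x200000ULL		/* 2 MB: 4 KB pages with PMD block mappings */
#define TEXT_OFFSET		0x00080000ULL		/* default arm64 image offset */
#define KIMAGE_VADDR		0xffff000008000000ULL	/* example link-time base, block aligned */

int main(void)
{
	/* hypothetical link-time layout: __init_begin 0xa7b000 bytes past _text */
	uint64_t text       = KIMAGE_VADDR + TEXT_OFFSET;
	uint64_t init_begin = text + 0xa7b000;

	/* same expression as the __eff_text_offset assignment in image.h */
	uint64_t eff_text_offset = SWAPPER_BLOCK_SIZE -
		((init_begin - TEXT_OFFSET) & (SWAPPER_BLOCK_SIZE - 1));

	/* loader behaviour: 2 MB aligned physical base plus the offset from the header */
	uint64_t phys_base       = 0x40000000ULL;	/* hypothetical, block aligned */
	uint64_t phys_init_begin = phys_base + eff_text_offset + (init_begin - text);

	/* .init.text then never shares a swapper block with writable data */
	assert((phys_init_begin & (SWAPPER_BLOCK_SIZE - 1)) == 0);
	printf("effective TEXT_OFFSET %#llx, __init_begin at %#llx\n",
	       (unsigned long long)eff_text_offset,
	       (unsigned long long)phys_init_begin);
	return 0;
}

With these example numbers the effective TEXT_OFFSET comes out as 0x185000,
which puts __init_begin at physical address 0x40c00000, a multiple of the
2 MB swapper block size. On configurations where SWAPPER_BLOCK_SIZE is not
larger than PAGE_SIZE, the #else branch in the hunk simply keeps the default
TEXT_OFFSET.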