From patchwork Mon Mar 30 02:20:23 2020
X-Patchwork-Submitter: Jason Yan
X-Patchwork-Id: 11464461
From: Jason Yan
Subject: [PATCH v5 6/6] powerpc/fsl_booke/kaslr: rename kaslr-booke32.rst to kaslr-booke.rst and add 64bit part
Date: Mon, 30 Mar 2020 10:20:23 +0800
Message-ID: <20200330022023.3691-7-yanaijie@huawei.com>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20200330022023.3691-1-yanaijie@huawei.com>
References: <20200330022023.3691-1-yanaijie@huawei.com>

Now we support both 32-bit and 64-bit KASLR for fsl booke. Add
documentation for the 64-bit part and rename kaslr-booke32.rst to
kaslr-booke.rst.

Signed-off-by: Jason Yan
Cc: Scott Wood
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
---
 Documentation/powerpc/index.rst               |  2 +-
 .../{kaslr-booke32.rst => kaslr-booke.rst}    | 35 ++++++++++++++++---
 2 files changed, 32 insertions(+), 5 deletions(-)
 rename Documentation/powerpc/{kaslr-booke32.rst => kaslr-booke.rst} (59%)

diff --git a/Documentation/powerpc/index.rst b/Documentation/powerpc/index.rst
index 0d45f0fc8e57..3bad36943b22 100644
--- a/Documentation/powerpc/index.rst
+++ b/Documentation/powerpc/index.rst
@@ -20,7 +20,7 @@ powerpc
     hvcs
     imc
     isa-versions
-    kaslr-booke32
+    kaslr-booke
     mpc52xx
     papr_hcalls
     pci_iov_resource_on_powernv
diff --git a/Documentation/powerpc/kaslr-booke32.rst b/Documentation/powerpc/kaslr-booke.rst
similarity index 59%
rename from Documentation/powerpc/kaslr-booke32.rst
rename to Documentation/powerpc/kaslr-booke.rst
index 8b259fdfdf03..27a862963242 100644
--- a/Documentation/powerpc/kaslr-booke32.rst
+++ b/Documentation/powerpc/kaslr-booke.rst
@@ -1,15 +1,18 @@
 .. SPDX-License-Identifier: GPL-2.0
 
-===========================
-KASLR for Freescale BookE32
-===========================
+=========================
+KASLR for Freescale BookE
+=========================
 
 The word KASLR stands for Kernel Address Space Layout Randomization.
 
 This document tries to explain the implementation of the KASLR for
-Freescale BookE32. KASLR is a security feature that deters exploit
+Freescale BookE. KASLR is a security feature that deters exploit
 attempts relying on knowledge of the location of kernel internals.
 
+KASLR for Freescale BookE32
+---------------------------
+
 Since CONFIG_RELOCATABLE has already supported, what we need to do is
 map or copy kernel to a proper place and relocate. Freescale Book-E
 parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
@@ -38,5 +41,29 @@ bit of the entropy to decide the index of the 64M zone. Then we chose a
 
   kernstart_virt_addr
 
+
+KASLR for Freescale BookE64
+---------------------------
+
+The implementation for Freescale BookE64 is similar to BookE32. One
+difference is that Freescale BookE64 sets up a 1G TLB mapping during
+boot. Another difference is that ppc64 needs the kernel to be
+64K-aligned. So we can randomize the kernel within this 1G mapping and
+keep it 64K-aligned, which saves the code needed to create another TLB
+mapping at early boot. The disadvantage is that we only have about
+1G/64K = 16384 slots to put the kernel in::
+
+    KERNELBASE
+
+          64K                     |--> kernel <--|
+           |                      |              |
+        +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
+        |  |  |  |....|  |  |  |  |  |  |  |  |  |....|  |  |
+        +--+--+--+    +--+--+--+--+--+--+--+--+--+    +--+--+
+        |                         |                        1G
+        |----->   offset    <-----|
+
+                              kernstart_virt_addr
+
 To enable KASLR, set CONFIG_RANDOMIZE_BASE = y. If KASLR is enable and you
 want to disable it at runtime, add "nokaslr" to the kernel cmdline.
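
As a rough illustration of the slot-based randomization the new BookE64
section describes, the sketch below picks a 64K-aligned offset inside the
1G boot-time TLB mapping from a seed. It is a standalone userspace sketch,
not the kernel's implementation: the helper name pick_kernstart_virt_addr,
the KERNELBASE value and the seed are assumptions made for the example::

  /*
   * Illustrative sketch only -- not the kernel's code. Shows how one of
   * the ~16384 64K-aligned slots inside the 1G BookE64 mapping could be
   * chosen from an entropy seed.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define SZ_64K      0x10000ULL
  #define SZ_1G       0x40000000ULL
  #define KERNELBASE  0xc000000000000000ULL  /* assumed base, for illustration */

  static uint64_t pick_kernstart_virt_addr(uint64_t seed, uint64_t kernel_size)
  {
          /* Number of 64K slots that still fit the kernel inside the 1G map. */
          uint64_t slots = (SZ_1G - kernel_size) / SZ_64K;
          /* Pick one slot and turn it back into a byte offset. */
          uint64_t offset = (seed % slots) * SZ_64K;

          return KERNELBASE + offset;
  }

  int main(void)
  {
          /* Example: a 24 MiB kernel image and an arbitrary seed. */
          uint64_t addr = pick_kernstart_virt_addr(0x123456789abcdefULL,
                                                   24ULL << 20);

          printf("kernstart_virt_addr = 0x%llx\n", (unsigned long long)addr);
          return 0;
  }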