From patchwork Fri Aug 11 21:13:02 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9896537
Date: Fri, 11 Aug 2017 15:13:02 -0600
From: Tycho Andersen
To: Laura Abbott
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kernel-hardening@lists.openwall.com, Marco Benatto, Juerg Haefliger
Message-ID: <20170811211302.limmjv4rmq23b25b@smitten>
References: <20170809200755.11234-1-tycho@docker.com>
 <20170809200755.11234-7-tycho@docker.com>
In-Reply-To:
User-Agent: NeoMutt/20170113 (1.7.2)
Subject: Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section
 mappings if XPFO is enabled

Hi Laura,

On Fri, Aug 11, 2017 at 10:25:14AM -0700, Laura Abbott wrote:
> On 08/09/2017 01:07 PM, Tycho Andersen wrote:
> > From: Juerg Haefliger
> >
> > XPFO (eXclusive Page Frame Ownership) doesn't support section mappings
> > yet, so disable it if XPFO is turned on.
> >
> > Signed-off-by: Juerg Haefliger
> > Tested-by: Tycho Andersen
> > ---
> >  arch/arm64/mm/mmu.c | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index f1eb15e0e864..38026b3ccb46 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -176,6 +176,18 @@ static void alloc_init_cont_pte(pmd_t *pmd, unsigned long addr,
> >  	} while (addr = next, addr != end);
> >  }
> >
> > +static inline bool use_section_mapping(unsigned long addr, unsigned long next,
> > +				       unsigned long phys)
> > +{
> > +	if (IS_ENABLED(CONFIG_XPFO))
> > +		return false;
> > +
> > +	if (((addr | next | phys) & ~SECTION_MASK) != 0)
> > +		return false;
> > +
> > +	return true;
> > +}
> > +
> >  static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> >  		     phys_addr_t phys, pgprot_t prot,
> >  		     phys_addr_t (*pgtable_alloc)(void), int flags)
> > @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> >  		next = pmd_addr_end(addr, end);
> >
> >  		/* try section mapping first */
> > -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
> > +		if (use_section_mapping(addr, next, phys) &&
> >  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> >  			pmd_set_huge(pmd, phys, prot);
> >
>
> There is already similar logic to disable section mappings for
> debug_pagealloc at the start of map_mem, can you take advantage
> of that?

You're suggesting something like this instead? Seems to work fine.

Cheers,

Tycho

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 38026b3ccb46..3b2c17bbbf12 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -434,6 +434,8 @@ static void __init map_mem(pgd_t *pgd)
 
 	if (debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+	if (IS_ENABLED(CONFIG_XPFO))
+		flags |= NO_BLOCK_MAPPINGS;
 
 	/*
 	 * Take care not to create a writable alias for the