From patchwork Wed Aug 9 20:07:50 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9891915
From: Tycho Andersen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
    Marco Benatto, Juerg Haefliger, Juerg Haefliger
Date: Wed, 9 Aug 2017 14:07:50 -0600
Message-Id: <20170809200755.11234-6-tycho@docker.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170809200755.11234-1-tycho@docker.com>
References: <20170809200755.11234-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v5 05/10] arm64/mm: Add support for XPFO

From: Juerg Haefliger

Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
provide a hook for updating a single kernel page table entry (which is
required by the generic XPFO code). At the moment, only 64k page sizes
are supported.
Signed-off-by: Juerg Haefliger
Tested-by: Tycho Andersen
---
 arch/arm64/Kconfig     |  1 +
 arch/arm64/mm/Makefile |  2 ++
 arch/arm64/mm/xpfo.c   | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 67 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index dfd908630631..2ddae41e0793 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -121,6 +121,7 @@ config ARM64
 	select SPARSE_IRQ
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
+	select ARCH_SUPPORTS_XPFO if ARM64_64K_PAGES
 	help
 	  ARM 64-bit (AArch64) Linux support.

diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 9b0ba191e48e..22e5cab543d8 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -11,3 +11,5 @@ KASAN_SANITIZE_physaddr.o	+= n

 obj-$(CONFIG_KASAN)		+= kasan_init.o
 KASAN_SANITIZE_kasan_init.o	:= n
+
+obj-$(CONFIG_XPFO)		+= xpfo.o
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
new file mode 100644
index 000000000000..de03a652d48a
--- /dev/null
+++ b/arch/arm64/mm/xpfo.c
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
+ * Copyright (C) 2016 Brown University. All rights reserved.
+ *
+ * Authors:
+ *   Juerg Haefliger
+ *   Vasileios P. Kemerlis
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * Lookup the page table entry for a virtual address and return a pointer to
+ * the entry. Based on x86 tree.
+ */
+static pte_t *lookup_address(unsigned long addr)
+{
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	BUG_ON(pgd_bad(*pgd));
+
+	pud = pud_offset(pgd, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	BUG_ON(pud_bad(*pud));
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	BUG_ON(pmd_bad(*pmd));
+
+	return pte_offset_kernel(pmd, addr);
+}
+
+/* Update a single kernel page table entry */
+inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
+{
+	pte_t *pte = lookup_address((unsigned long)kaddr);
+
+	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
+}
+
+inline void xpfo_flush_kernel_page(struct page *page, int order)
+{
+	unsigned long kaddr = (unsigned long)page_address(page);
+	unsigned long size = PAGE_SIZE;
+
+	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
+}