From patchwork Wed Jun  7 21:16:53 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9772849
From: Tycho Andersen
To: linux-mm@kvack.org
Cc: Juerg Haefliger, kernel-hardening@lists.openwall.com, Tycho Andersen
Date: Wed, 7 Jun 2017 15:16:53 -0600
Message-Id: <20170607211653.14536-4-tycho@docker.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170607211653.14536-1-tycho@docker.com>
References: <20170607211653.14536-1-tycho@docker.com>
Subject: [kernel-hardening] [RFC v4 3/3] xpfo: add support for hugepages

Based on an earlier draft by Marco Benatto.
Signed-off-by: Tycho Andersen
CC: Juerg Haefliger
---
 arch/x86/include/asm/pgtable.h | 22 +++++++++++++++
 arch/x86/mm/pageattr.c         | 21 +++------------
 arch/x86/mm/xpfo.c             | 61 +++++++++++++++++++++++++++++++++++++++++-
 include/linux/xpfo.h           |  1 +
 mm/xpfo.c                      |  8 ++----
 5 files changed, 88 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index f5af95a0c6b8..58bb43d8b9c1 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1185,6 +1185,28 @@ static inline u16 pte_flags_pkey(unsigned long pte_flags)
 #endif
 }
 
+/*
+ * The current flushing context - we pass it instead of 5 arguments:
+ */
+struct cpa_data {
+        unsigned long *vaddr;
+        pgd_t *pgd;
+        pgprot_t mask_set;
+        pgprot_t mask_clr;
+        unsigned long numpages;
+        int flags;
+        unsigned long pfn;
+        unsigned force_split : 1;
+        int curpage;
+        struct page **pages;
+};
+
+int
+try_preserve_large_page(pte_t *kpte, unsigned long address,
+                        struct cpa_data *cpa);
+int split_large_page(struct cpa_data *cpa, pte_t *kpte,
+                     unsigned long address);
+
 #include
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 1dcd2be4cce4..6d6a78e6e023 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -26,21 +26,6 @@
 #include
 #include
 
-/*
- * The current flushing context - we pass it instead of 5 arguments:
- */
-struct cpa_data {
-        unsigned long *vaddr;
-        pgd_t *pgd;
-        pgprot_t mask_set;
-        pgprot_t mask_clr;
-        unsigned long numpages;
-        int flags;
-        unsigned long pfn;
-        unsigned force_split : 1;
-        int curpage;
-        struct page **pages;
-};
 
 /*
  * Serialize cpa() (for !DEBUG_PAGEALLOC which uses large identity mappings)
@@ -506,7 +491,7 @@ static void __set_pmd_pte(pte_t *kpte, unsigned long address, pte_t pte)
 #endif
 }
 
-static int
+int
 try_preserve_large_page(pte_t *kpte, unsigned long address,
                         struct cpa_data *cpa)
 {
@@ -740,8 +725,8 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	return 0;
 }
 
-static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
-                            unsigned long address)
+int split_large_page(struct cpa_data *cpa, pte_t *kpte,
+                     unsigned long address)
 {
 	struct page *base;
diff --git a/arch/x86/mm/xpfo.c b/arch/x86/mm/xpfo.c
index c24b06c9b4ab..818da3ebc077 100644
--- a/arch/x86/mm/xpfo.c
+++ b/arch/x86/mm/xpfo.c
@@ -13,11 +13,70 @@
 #include
 
+#include
+
 /* Update a single kernel page table entry */
 inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
 {
 	unsigned int level;
 	pte_t *pte = lookup_address((unsigned long)kaddr, &level);
 
-	set_pte_atomic(pte, pfn_pte(page_to_pfn(page), canon_pgprot(prot)));
+	BUG_ON(!pte);
+
+	switch (level) {
+	case PG_LEVEL_4K:
+		set_pte_atomic(pte, pfn_pte(page_to_pfn(page), canon_pgprot(prot)));
+		break;
+	case PG_LEVEL_2M:
+	case PG_LEVEL_1G: {
+		struct cpa_data cpa;
+		int do_split;
+
+		memset(&cpa, 0, sizeof(cpa));
+		cpa.vaddr = kaddr;
+		cpa.pages = &page;
+		cpa.mask_set = prot;
+		pgprot_val(cpa.mask_clr) = ~pgprot_val(prot);
+		cpa.numpages = 1;
+		cpa.flags = 0;
+		cpa.curpage = 0;
+		cpa.force_split = 0;
+
+		do_split = try_preserve_large_page(pte, (unsigned long)kaddr,
+						   &cpa);
+		if (do_split < 0)
+			BUG_ON(split_large_page(&cpa, pte,
+						(unsigned long)kaddr));
+
+		break;
+	}
+	default:
+		BUG();
+	}
+
+}
+
+inline void xpfo_flush_kernel_page(struct page *page, int order)
+{
+	int level;
+	unsigned long size, kaddr;
+
+	kaddr = (unsigned long)page_address(page);
+	lookup_address(kaddr, &level);
+
+	switch (level) {
+	case PG_LEVEL_4K:
+		size = PAGE_SIZE;
+		break;
+	case PG_LEVEL_2M:
+		size = PMD_SIZE;
+		break;
+	case PG_LEVEL_1G:
+		size = PUD_SIZE;
+		break;
+	default:
+		BUG();
+	}
+
+	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
+}
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 031cbee22a41..a0f0101720f6 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -19,6 +19,7 @@ extern struct page_ext_operations page_xpfo_ops;
 
 void set_kpte(void *kaddr, struct page *page, pgprot_t prot);
+void xpfo_flush_kernel_page(struct page *page, int order);
 void xpfo_kmap(void *kaddr, struct page *page);
 void xpfo_kunmap(void *kaddr, struct page *page);
diff --git a/mm/xpfo.c b/mm/xpfo.c
index 8384058136b1..895de28108da 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -78,7 +78,6 @@ void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
 {
 	int i, flush_tlb = 0;
 	struct xpfo *xpfo;
-	unsigned long kaddr;
 
 	if (!static_branch_unlikely(&xpfo_inited))
 		return;
@@ -109,11 +108,8 @@ void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
 		}
 	}
 
-	if (flush_tlb) {
-		kaddr = (unsigned long)page_address(page);
-		flush_tlb_kernel_range(kaddr, kaddr + (1 << order) *
-				       PAGE_SIZE);
-	}
+	if (flush_tlb)
+		xpfo_flush_kernel_page(page, order);
 }
 
 void xpfo_free_pages(struct page *page, int order)