From patchwork Wed Aug 9 20:07:54 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9891909
From: Tycho Andersen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
 Marco Benatto, Juerg Haefliger, Tycho Andersen
Date: Wed, 9 Aug 2017 14:07:54 -0600
Message-Id: <20170809200755.11234-10-tycho@docker.com>
In-Reply-To: <20170809200755.11234-1-tycho@docker.com>
References: <20170809200755.11234-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v5 09/10] mm: add a user_virt_to_phys symbol

We need something like this for testing XPFO. Since it's architecture
specific, putting it in the test code is slightly awkward, so let's
make it an arch-specific symbol and export it for use in LKDTM.
Signed-off-by: Tycho Andersen
Tested-by: Marco Benatto
---
 arch/arm64/mm/xpfo.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/mm/xpfo.c   | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/xpfo.h |  4 ++++
 3 files changed, 112 insertions(+)

diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
index c4deb2b720cf..a221799a9242 100644
--- a/arch/arm64/mm/xpfo.c
+++ b/arch/arm64/mm/xpfo.c
@@ -107,3 +107,54 @@ inline void xpfo_dma_map_unmap_area(bool map, const void *addr, size_t size,
 
 	local_irq_restore(flags);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_sect(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_sect(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);

diff --git a/arch/x86/mm/xpfo.c b/arch/x86/mm/xpfo.c
index 3635b37f2fc5..a1344f27406c 100644
--- a/arch/x86/mm/xpfo.c
+++ b/arch/x86/mm/xpfo.c
@@ -94,3 +94,60 @@ inline void xpfo_flush_kernel_page(struct page *page, int order)
 
 	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
 }
+
+/* Convert a user space virtual address to a physical address.
+ * Shamelessly copied from slow_virt_to_phys() and lookup_address() in
+ * arch/x86/mm/pageattr.c
+ */
+phys_addr_t user_virt_to_phys(unsigned long addr)
+{
+	phys_addr_t phys_addr;
+	unsigned long offset;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(current->mm, addr);
+	if (pgd_none(*pgd))
+		return 0;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return 0;
+
+	if (p4d_large(*p4d) || !p4d_present(*p4d)) {
+		phys_addr = (unsigned long)p4d_pfn(*p4d) << PAGE_SHIFT;
+		offset = addr & ~P4D_MASK;
+		goto out;
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return 0;
+
+	if (pud_large(*pud) || !pud_present(*pud)) {
+		phys_addr = (unsigned long)pud_pfn(*pud) << PAGE_SHIFT;
+		offset = addr & ~PUD_MASK;
+		goto out;
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return 0;
+
+	if (pmd_large(*pmd) || !pmd_present(*pmd)) {
+		phys_addr = (unsigned long)pmd_pfn(*pmd) << PAGE_SHIFT;
+		offset = addr & ~PMD_MASK;
+		goto out;
+	}
+
+	pte = pte_offset_kernel(pmd, addr);
+	phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
+	offset = addr & ~PAGE_MASK;
+
+out:
+	return (phys_addr_t)(phys_addr | offset);
+}
+EXPORT_SYMBOL(user_virt_to_phys);

diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 6b61f7b820f4..449cd8dbf064 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -16,6 +16,8 @@
 
 #ifdef CONFIG_XPFO
 
+#include
+
 extern struct page_ext_operations page_xpfo_ops;
 
 void set_kpte(void *kaddr, struct page *page, pgprot_t prot);
@@ -29,6 +31,8 @@ void xpfo_free_pages(struct page *page, int order);
 
 bool xpfo_page_is_unmapped(struct page *page);
 
+extern phys_addr_t user_virt_to_phys(unsigned long addr);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }