From patchwork Wed Aug 23 17:13:02 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9917937
Date: Wed, 23 Aug 2017 11:13:02 -0600
From: Tycho Andersen
To: Mark Rutland
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kernel-hardening@lists.openwall.com, Marco Benatto, Juerg Haefliger
Subject: Re: [kernel-hardening] [PATCH v5 04/10] arm64: Add __flush_tlb_one()
Message-ID: <20170823171302.ubnv7qyrexhhpbs7@smitten>
References: <20170809200755.11234-1-tycho@docker.com>
 <20170809200755.11234-5-tycho@docker.com>
 <20170812112603.GB16374@remoulade>
 <20170814163536.6njceqc3dip5lrlu@smitten>
 <20170814165047.GB23428@leverpostej>
 <20170823165842.k5lbxom45avvd7g2@smitten>
 <20170823170443.GD12567@leverpostej>
In-Reply-To: <20170823170443.GD12567@leverpostej>
User-Agent: NeoMutt/20170113 (1.7.2)

On Wed, Aug 23, 2017 at 06:04:43PM +0100, Mark Rutland wrote:
> On Wed, Aug 23, 2017 at 10:58:42AM -0600, Tycho Andersen wrote:
> > Hi Mark,
> >
> > On Mon, Aug 14, 2017 at 05:50:47PM +0100, Mark Rutland wrote:
> > > That said, is there any reason not to use flush_tlb_kernel_range()
> > > directly?
> >
> > So it turns out that there is a difference between __flush_tlb_one() and
> > flush_tlb_kernel_range() on x86: flush_tlb_kernel_range() flushes all the
> > TLBs via on_each_cpu(), whereas __flush_tlb_one() only flushes the local
> > TLB (which I think is enough here).
>
> That sounds suspicious; I don't think that __flush_tlb_one() is
> sufficient.
>
> If you only do local TLB maintenance, then the page is left accessible
> to other CPUs via the (stale) kernel mappings, i.e. the page isn't
> exclusively mapped by userspace.

I thought so too, so I tried to test it with something like the patch
below. But it correctly failed for me even when using __flush_tlb_one().
I suppose I'm doing something wrong in the test, but I'm not sure what.
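
To spell out the difference I'm describing above: __flush_tlb_one() is a
purely local invalidation, while flush_tlb_kernel_range() ends up running
the local invalidation on every online CPU via on_each_cpu(). Roughly, as
a sketch of the idea only (not the real arch code; struct flush_range and
the helper names below are made up for illustration):

#include <linux/smp.h>          /* on_each_cpu() */
#include <asm/tlbflush.h>       /* __flush_tlb_one() */

/* made-up helper type, for illustration only */
struct flush_range {
        unsigned long start;
        unsigned long end;
};

/* invalidate the range in the *local* CPU's TLB only */
static void flush_range_local(void *info)
{
        struct flush_range *fr = info;
        unsigned long addr;

        for (addr = fr->start; addr < fr->end; addr += PAGE_SIZE)
                __flush_tlb_one(addr);
}

/*
 * Roughly what a cross-CPU flush does: run the local invalidation on
 * every online CPU (the remote ones via IPI) and wait for them all to
 * finish, so no CPU is left with a stale TLB entry for the range.
 */
static void flush_range_all_cpus(unsigned long start, unsigned long end)
{
        struct flush_range fr = { .start = start, .end = end };

        on_each_cpu(flush_range_local, &fr, 1);
}

With only the local flush, any other CPU that still has the old kernel
mapping cached in its TLB can keep reading the page until that entry is
evicted, which is exactly the window the test below tries to hit.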

Tycho

From 1d1b0a18d56cf1634072096231bfbaa96cb2aa16 Mon Sep 17 00:00:00 2001
From: Tycho Andersen
Date: Tue, 22 Aug 2017 18:07:12 -0600
Subject: [PATCH] add XPFO_SMP test

Signed-off-by: Tycho Andersen
---
 drivers/misc/lkdtm.h      |   1 +
 drivers/misc/lkdtm_core.c |   1 +
 drivers/misc/lkdtm_xpfo.c | 139 ++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 130 insertions(+), 11 deletions(-)

diff --git a/drivers/misc/lkdtm.h b/drivers/misc/lkdtm.h
index fc53546113c1..34a6ee37f216 100644
--- a/drivers/misc/lkdtm.h
+++ b/drivers/misc/lkdtm.h
@@ -67,5 +67,6 @@ void lkdtm_USERCOPY_KERNEL(void);
 /* lkdtm_xpfo.c */
 void lkdtm_XPFO_READ_USER(void);
 void lkdtm_XPFO_READ_USER_HUGE(void);
+void lkdtm_XPFO_SMP(void);
 
 #endif
diff --git a/drivers/misc/lkdtm_core.c b/drivers/misc/lkdtm_core.c
index 164bc404f416..9544e329de4b 100644
--- a/drivers/misc/lkdtm_core.c
+++ b/drivers/misc/lkdtm_core.c
@@ -237,6 +237,7 @@ struct crashtype crashtypes[] = {
 	CRASHTYPE(USERCOPY_KERNEL),
 	CRASHTYPE(XPFO_READ_USER),
 	CRASHTYPE(XPFO_READ_USER_HUGE),
+	CRASHTYPE(XPFO_SMP),
 };
diff --git a/drivers/misc/lkdtm_xpfo.c b/drivers/misc/lkdtm_xpfo.c
index c72509128eb3..7600fdcae22f 100644
--- a/drivers/misc/lkdtm_xpfo.c
+++ b/drivers/misc/lkdtm_xpfo.c
@@ -4,22 +4,27 @@
 #include "lkdtm.h"
 
+#include <linux/cpumask.h>
 #include <linux/mman.h>
 #include <linux/uaccess.h>
 #include <linux/xpfo.h>
+#include <linux/kthread.h>
 
-void read_user_with_flags(unsigned long flags)
+#include <linux/delay.h>
+#include <linux/sched/task.h>
+
+#define XPFO_DATA 0xdeadbeef
+
+static unsigned long do_map(unsigned long flags)
 {
-	unsigned long user_addr, user_data = 0xdeadbeef;
-	phys_addr_t phys_addr;
-	void *virt_addr;
+	unsigned long user_addr, user_data = XPFO_DATA;
 
 	user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
 			    PROT_READ | PROT_WRITE | PROT_EXEC,
 			    flags, 0);
 	if (user_addr >= TASK_SIZE) {
 		pr_warn("Failed to allocate user memory\n");
-		return;
+		return 0;
 	}
 
 	if (copy_to_user((void __user *)user_addr, &user_data,
@@ -28,25 +33,61 @@ void read_user_with_flags(unsigned long flags)
 		goto free_user;
 	}
 
+	return user_addr;
+
+free_user:
+	vm_munmap(user_addr, PAGE_SIZE);
+	return 0;
+}
+
+static unsigned long *user_to_kernel(unsigned long user_addr)
+{
+	phys_addr_t phys_addr;
+	void *virt_addr;
+
 	phys_addr = user_virt_to_phys(user_addr);
 	if (!phys_addr) {
 		pr_warn("Failed to get physical address of user memory\n");
-		goto free_user;
+		return 0;
 	}
 
 	virt_addr = phys_to_virt(phys_addr);
 	if (phys_addr != virt_to_phys(virt_addr)) {
 		pr_warn("Physical address of user memory seems incorrect\n");
-		goto free_user;
+		return 0;
 	}
 
+	return virt_addr;
+}
+
+static void read_map(unsigned long *virt_addr)
+{
 	pr_info("Attempting bad read from kernel address %p\n", virt_addr);
-	if (*(unsigned long *)virt_addr == user_data)
-		pr_info("Huh? Bad read succeeded?!\n");
+	if (*(unsigned long *)virt_addr == XPFO_DATA)
+		pr_err("FAIL: Bad read succeeded?!\n");
 	else
-		pr_info("Huh? Bad read didn't fail but data is incorrect?!\n");
+		pr_err("FAIL: Bad read didn't fail but data is incorrect?!\n");
+}
+
+static void read_user_with_flags(unsigned long flags)
+{
+	unsigned long user_addr, *kernel;
+
+	user_addr = do_map(flags);
+	if (!user_addr) {
+		pr_err("FAIL: map failed\n");
+		return;
+	}
+
+	kernel = user_to_kernel(user_addr);
+	if (!kernel) {
+		pr_err("FAIL: user to kernel conversion failed\n");
+		goto free_user;
+	}
+
+	read_map(kernel);
 
- free_user:
+free_user:
 	vm_munmap(user_addr, PAGE_SIZE);
 }
@@ -60,3 +101,79 @@ void lkdtm_XPFO_READ_USER_HUGE(void)
 {
 	read_user_with_flags(MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB);
 }
+
+struct smp_arg {
+	struct completion map_done;
+	unsigned long *virt_addr;
+	unsigned int cpu;
+};
+
+static int smp_reader(void *parg)
+{
+	struct smp_arg *arg = parg;
+
+	if (arg->cpu != smp_processor_id()) {
+		pr_err("FAIL: scheduled on wrong CPU?\n");
+		return 0;
+	}
+
+	wait_for_completion(&arg->map_done);
+
+	if (arg->virt_addr)
+		read_map(arg->virt_addr);
+
+	return 0;
+}
+
+/*
+ * The idea here is to read from the kernel's map on a different thread than
+ * the one that did the mapping (and thus the TLB flushing), to make sure
+ * that the page faults on other cores too.
+ */
+void lkdtm_XPFO_SMP(void)
+{
+	unsigned long user_addr;
+	struct task_struct *thread;
+	int ret;
+	struct smp_arg arg;
+
+	init_completion(&arg.map_done);
+	arg.virt_addr = NULL;	/* stays NULL if the mapping below fails */
+
+	if (num_online_cpus() < 2) {
+		pr_err("not enough CPUs for a multi-cpu test\n");
+		return;
+	}
+
+	arg.cpu = (smp_processor_id() + 1) % num_online_cpus();
+	thread = kthread_create(smp_reader, &arg, "lkdtm_xpfo_test");
+	if (IS_ERR(thread)) {
+		pr_err("couldn't create kthread? %ld\n", PTR_ERR(thread));
+		return;
+	}
+
+	kthread_bind(thread, arg.cpu);
+	get_task_struct(thread);
+	wake_up_process(thread);
+
+	user_addr = do_map(MAP_PRIVATE | MAP_ANONYMOUS);
+	if (user_addr) {
+		arg.virt_addr = user_to_kernel(user_addr);
+		/* the child thread checks for failure */
+	}
+
+	complete(&arg.map_done);
+
+	/* there must be a better way to do this. */
+	while (1) {
+		if (thread->exit_state)
+			break;
+		msleep_interruptible(100);
+	}
+
+	ret = kthread_stop(thread);
+	if (ret != SIGKILL)
+		pr_err("FAIL: thread wasn't killed: %d\n", ret);
+	put_task_struct(thread);
+
+	if (user_addr)
+		vm_munmap(user_addr, PAGE_SIZE);
+}
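
For anyone who wants to try this: with CONFIG_LKDTM enabled and debugfs
mounted, the new crashtype should be triggerable like the existing ones,
by writing XPFO_SMP to /sys/kernel/debug/provoke-crash/DIRECT; the FAIL
messages above then show up in dmesg if the read through the stale kernel
alias goes through. The kthread_create()/kthread_bind()/wake_up_process()
sequence is deliberate: kthread_bind() has to run before the new thread
starts executing, which is why kthread_run() isn't used here.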