From patchwork Tue Oct 23 21:34:49 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653697
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
    Michal Hocko, kernel-hardening@lists.openwall.com,
    linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
    Vlastimil Babka,
    "Kirill A. Shutemov", Andrew Morton, Pavel Tatashin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/17] prmem: write rare for static allocation
Date: Wed, 24 Oct 2018 00:34:49 +0300
Message-Id: <20181023213504.28905-3-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>
Reply-To: Igor Stoppa

Implementation of write rare for statically allocated data, located in
a specific memory section through the use of the __write_rare label.

The basic functions are wr_memcpy() and wr_memset(): the write rare
counterparts of memcpy() and memset() respectively.

To minimize the chances of attacks, this implementation does not
unprotect existing memory pages. Instead, it remaps them, one by one,
as writable at random free locations. Each page is mapped as writable
strictly for the time needed to perform the changes in that page.

While a page is remapped, interrupts are disabled on the core
performing the write rare operation, to avoid being frozen mid-write by
an attack that uses interrupts to stretch the lifetime of the alternate
mapping. OTOH, to avoid introducing unpredictable delays, interrupts
are re-enabled in between page remappings, when write operations are
either completed or not yet started, and there is no alternate,
writable mapping to exploit.

Signed-off-by: Igor Stoppa
CC: Michal Hocko
CC: Vlastimil Babka
CC: "Kirill A. Shutemov"
CC: Andrew Morton
CC: Pavel Tatashin
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 MAINTAINERS           |   7 ++
 include/linux/prmem.h | 213 ++++++++++++++++++++++++++++++++++++++++++
 mm/Makefile           |   1 +
 mm/prmem.c            |  10 ++
 4 files changed, 231 insertions(+)
 create mode 100644 include/linux/prmem.h
 create mode 100644 mm/prmem.c

diff --git a/MAINTAINERS b/MAINTAINERS
index b2f710eee67a..e566c5d09faf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9454,6 +9454,13 @@ F:	kernel/sched/membarrier.c
 F:	include/uapi/linux/membarrier.h
 F:	arch/powerpc/include/asm/membarrier.h
 
+MEMORY HARDENING
+M:	Igor Stoppa
+L:	kernel-hardening@lists.openwall.com
+S:	Maintained
+F:	include/linux/prmem.h
+F:	mm/prmem.c
+
 MEMORY MANAGEMENT
 L:	linux-mm@kvack.org
 W:	http://www.linux-mm.org
diff --git a/include/linux/prmem.h b/include/linux/prmem.h
new file mode 100644
index 000000000000..3ba41d76a582
--- /dev/null
+++ b/include/linux/prmem.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * prmem.h: Header for memory protection library
+ *
+ * (C) Copyright 2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ *
+ * Support for:
+ * - statically allocated write rare data
+ */
+
+#ifndef _LINUX_PRMEM_H
+#define _LINUX_PRMEM_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* ============================ Write Rare ============================ */
+
+extern const char WR_ERR_RANGE_MSG[];
+extern const char WR_ERR_PAGE_MSG[];
+
+/*
+ * The following two variables are statically allocated by the linker
+ * script at the boundaries of the memory region (rounded up to
+ * multiples of PAGE_SIZE) reserved for __wr_after_init.
+ */
+extern long __start_wr_after_init;
+extern long __end_wr_after_init;
+
+static __always_inline bool __is_wr_after_init(const void *ptr, size_t size)
+{
+	size_t start = (size_t)&__start_wr_after_init;
+	size_t end = (size_t)&__end_wr_after_init;
+	size_t low = (size_t)ptr;
+	size_t high = (size_t)ptr + size;
+
+	return likely(start <= low && low < high && high <= end);
+}
+
+/**
+ * wr_memset() - sets n bytes of the destination to the c value
+ * @dst: beginning of the memory to write to
+ * @c: byte to replicate
+ * @n_bytes: number of bytes to set
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_memset(const void *dst, const int c, size_t n_bytes)
+{
+	size_t size;
+	unsigned long flags;
+	uintptr_t d = (uintptr_t)dst;
+
+	if (WARN(!__is_wr_after_init(dst, n_bytes), WR_ERR_RANGE_MSG))
+		return false;
+	while (n_bytes) {
+		struct page *page;
+		uintptr_t base;
+		uintptr_t offset;
+		uintptr_t offset_complement;
+
+		local_irq_save(flags);
+		page = virt_to_page(d);
+		offset = d & ~PAGE_MASK;
+		offset_complement = PAGE_SIZE - offset;
+		size = min(n_bytes, offset_complement);
+		base = (uintptr_t)vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+		if (WARN(!base, WR_ERR_PAGE_MSG)) {
+			local_irq_restore(flags);
+			return false;
+		}
+		memset((void *)(base + offset), c, size);
+		vunmap((void *)base);
+		d += size;
+		n_bytes -= size;
+		local_irq_restore(flags);
+	}
+	return true;
+}
+
+/**
+ * wr_memcpy() - copies n bytes from source to destination
+ * @dst: beginning of the memory to write to
+ * @src: beginning of the memory to read from
+ * @n_bytes: number of bytes to copy
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_memcpy(const void *dst, const void *src, size_t n_bytes)
+{
+	size_t size;
+	unsigned long flags;
+	uintptr_t d = (uintptr_t)dst;
+	uintptr_t s = (uintptr_t)src;
+
+	if (WARN(!__is_wr_after_init(dst, n_bytes), WR_ERR_RANGE_MSG))
+		return false;
+	while (n_bytes) {
+		struct page *page;
+		uintptr_t base;
+		uintptr_t offset;
+		uintptr_t offset_complement;
+
+		local_irq_save(flags);
+		page = virt_to_page(d);
+		offset = d & ~PAGE_MASK;
+		offset_complement = PAGE_SIZE - offset;
+		size = (size_t)min(n_bytes, offset_complement);
+		base = (uintptr_t)vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+		if (WARN(!base, WR_ERR_PAGE_MSG)) {
+			local_irq_restore(flags);
+			return false;
+		}
+		__write_once_size((void *)(base + offset), (void *)s, size);
+		vunmap((void *)base);
+		d += size;
+		s += size;
+		n_bytes -= size;
+		local_irq_restore(flags);
+	}
+	return true;
+}
+
+/*
+ * rcu_assign_pointer is a macro, which takes advantage of being able to
+ * take the address of the destination parameter "p", so that it can be
+ * passed to WRITE_ONCE(), which is called in one of the branches of
+ * rcu_assign_pointer() and also, being a macro, can rely on the
+ * preprocessor for taking the address of its parameter.
+ * For the sake of staying compatible with the API,
+ * wr_rcu_assign_pointer() is also a macro that accepts a pointer as
+ * parameter, instead of the address of said pointer.
+ * However, it is simply a wrapper around __wr_rcu_ptr(), which receives
+ * the address of the pointer.
+ */
+static __always_inline
+uintptr_t __wr_rcu_ptr(const void *dst_p_p, const void *src_p)
+{
+	unsigned long flags;
+	struct page *page;
+	void *base;
+	uintptr_t offset;
+	const size_t size = sizeof(void *);
+
+	if (WARN(!__is_wr_after_init(dst_p_p, size), WR_ERR_RANGE_MSG))
+		return (uintptr_t)NULL;
+	local_irq_save(flags);
+	page = virt_to_page(dst_p_p);
+	offset = (uintptr_t)dst_p_p & ~PAGE_MASK;
+	base = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+	if (WARN(!base, WR_ERR_PAGE_MSG)) {
+		local_irq_restore(flags);
+		return (uintptr_t)NULL;
+	}
+	rcu_assign_pointer((*(void **)(offset + (uintptr_t)base)), src_p);
+	vunmap(base);
+	local_irq_restore(flags);
+	return (uintptr_t)src_p;
+}
+
+#define wr_rcu_assign_pointer(p, v)	__wr_rcu_ptr(&p, v)
+
+#define __wr_simple(dst_ptr, src_ptr)					\
+	wr_memcpy(dst_ptr, src_ptr, sizeof(*(src_ptr)))
+
+#define __wr_safe(dst_ptr, src_ptr,					\
+		  unique_dst_ptr, unique_src_ptr)			\
+({									\
+	typeof(dst_ptr) unique_dst_ptr = (dst_ptr);			\
+	typeof(src_ptr) unique_src_ptr = (src_ptr);			\
+									\
+	wr_memcpy(unique_dst_ptr, unique_src_ptr,			\
+		  sizeof(*(unique_src_ptr)));				\
+})
+
+#define __safe_ops(dst, src)						\
+	(__typecheck(dst, src) && __no_side_effects(dst, src))
+
+/**
+ * wr - copies an object over another of the same type and size
+ * @dst_ptr: address of the destination object
+ * @src_ptr: address of the source object
+ */
+#define wr(dst_ptr, src_ptr)						\
+	__builtin_choose_expr(__safe_ops(dst_ptr, src_ptr),		\
+			      __wr_simple(dst_ptr, src_ptr),		\
+			      __wr_safe(dst_ptr, src_ptr,		\
+					__UNIQUE_ID(__dst_ptr),		\
+					__UNIQUE_ID(__src_ptr)))
+
+/**
+ * wr_ptr() - alters a pointer in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ptr(const void *dst, const void *val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+#endif
diff --git a/mm/Makefile b/mm/Makefile
index 26ef77a3883b..215c6a6d7304 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,6 +64,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_PRMEM) += prmem.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/prmem.c b/mm/prmem.c
new file mode 100644
index 000000000000..de9258f5f29a
--- /dev/null
+++ b/mm/prmem.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * prmem.c: Memory Protection Library
+ *
+ * (C) Copyright 2017-2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ */
+
+const char WR_ERR_RANGE_MSG[] = "Write rare on invalid memory range.";
+const char WR_ERR_PAGE_MSG[] = "Failed to remap write rare page.";
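
For context, the sketch below illustrates how a kernel user might drive these
helpers once the series is applied. It is not part of the patch: the structure
and function names (wr_demo_config, demo_cfg, demo_update_config,
demo_set_enforcing) are made up for illustration, and it assumes CONFIG_PRMEM
is enabled and that the __wr_after_init section attribute introduced earlier
in this series is available.

/* Hypothetical usage sketch, not part of this patch. */
#include <linux/types.h>
#include <linux/prmem.h>

struct wr_demo_config {			/* made-up example state */
	unsigned long max_entries;
	bool enforcing;
};

/* Lives in the write-rare section: read-only through the normal mapping. */
static struct wr_demo_config demo_cfg __wr_after_init = {
	.max_entries	= 128,
	.enforcing	= true,
};

static void demo_update_config(unsigned long new_max)
{
	struct wr_demo_config tmp = demo_cfg;

	tmp.max_entries = new_max;
	/* wr() copies the whole object through a temporary writable alias. */
	wr(&demo_cfg, &tmp);
}

static bool demo_set_enforcing(bool on)
{
	/* Field-granular update via the memcpy-style counterpart. */
	return wr_memcpy(&demo_cfg.enforcing, &on, sizeof(on));
}

Because each writable alias exists only for the duration of the per-page copy,
with interrupts disabled on the writing core, a caller like
demo_update_config() never leaves a long-lived writable mapping of demo_cfg
around.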