From patchwork Tue Oct 23 21:34:49 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653697
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Vlastimil Babka, "Kirill A. Shutemov", Andrew Morton, Pavel Tatashin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Shutemov" , Andrew Morton , Pavel Tatashin , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 02/17] prmem: write rare for static allocation Date: Wed, 24 Oct 2018 00:34:49 +0300 Message-Id: <20181023213504.28905-3-igor.stoppa@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com> References: <20181023213504.28905-1-igor.stoppa@huawei.com> Reply-To: Igor Stoppa X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Implementation of write rare for statically allocated data, located in a specific memory section through the use of the __write_rare label. The basic functions are wr_memcpy() and wr_memset(): the write rare counterparts of memcpy() and memset() respectively. To minimize chances of attacks, this implementation does not unprotect existing memory pages. Instead, it remaps them, one by one, at random free locations, as writable. Each page is mapped as writable strictly for the time needed to perform changes in said page. While a page is remapped, interrupts are disabled on the core performing the write rare operation, to avoid being frozen mid-air by an attack using interrupts for stretching the duration of the alternate mapping. OTOH, to avoid introducing unpredictable delays, the interrupts are re-enabled inbetween page remapping, when write operations are either completed or not yet started, and there is not alternate, writable mapping to exploit. Signed-off-by: Igor Stoppa CC: Michal Hocko CC: Vlastimil Babka CC: "Kirill A. Shutemov" CC: Andrew Morton CC: Pavel Tatashin CC: linux-mm@kvack.org CC: linux-kernel@vger.kernel.org --- MAINTAINERS | 7 ++ include/linux/prmem.h | 213 ++++++++++++++++++++++++++++++++++++++++++ mm/Makefile | 1 + mm/prmem.c | 10 ++ 4 files changed, 231 insertions(+) create mode 100644 include/linux/prmem.h create mode 100644 mm/prmem.c diff --git a/MAINTAINERS b/MAINTAINERS index b2f710eee67a..e566c5d09faf 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9454,6 +9454,13 @@ F: kernel/sched/membarrier.c F: include/uapi/linux/membarrier.h F: arch/powerpc/include/asm/membarrier.h +MEMORY HARDENING +M: Igor Stoppa +L: kernel-hardening@lists.openwall.com +S: Maintained +F: include/linux/prmem.h +F: mm/prmem.c + MEMORY MANAGEMENT L: linux-mm@kvack.org W: http://www.linux-mm.org diff --git a/include/linux/prmem.h b/include/linux/prmem.h new file mode 100644 index 000000000000..3ba41d76a582 --- /dev/null +++ b/include/linux/prmem.h @@ -0,0 +1,213 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * prmem.h: Header for memory protection library + * + * (C) Copyright 2018 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa + * + * Support for: + * - statically allocated write rare data + */ + +#ifndef _LINUX_PRMEM_H +#define _LINUX_PRMEM_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* ============================ Write Rare ============================ */ + +extern const char WR_ERR_RANGE_MSG[]; +extern const char WR_ERR_PAGE_MSG[]; + +/* + * The following two variables are statically allocated by the linker + * script at the the boundaries of the memory region (rounded up to + * multiples of PAGE_SIZE) reserved for __wr_after_init. 
 MAINTAINERS           |   7 ++
 include/linux/prmem.h | 213 ++++++++++++++++++++++++++++++++++++++++++
 mm/Makefile           |   1 +
 mm/prmem.c            |  10 ++
 4 files changed, 231 insertions(+)
 create mode 100644 include/linux/prmem.h
 create mode 100644 mm/prmem.c

diff --git a/MAINTAINERS b/MAINTAINERS
index b2f710eee67a..e566c5d09faf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9454,6 +9454,13 @@ F:	kernel/sched/membarrier.c
 F:	include/uapi/linux/membarrier.h
 F:	arch/powerpc/include/asm/membarrier.h
 
+MEMORY HARDENING
+M:	Igor Stoppa
+L:	kernel-hardening@lists.openwall.com
+S:	Maintained
+F:	include/linux/prmem.h
+F:	mm/prmem.c
+
 MEMORY MANAGEMENT
 L:	linux-mm@kvack.org
 W:	http://www.linux-mm.org
diff --git a/include/linux/prmem.h b/include/linux/prmem.h
new file mode 100644
index 000000000000..3ba41d76a582
--- /dev/null
+++ b/include/linux/prmem.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * prmem.h: Header for memory protection library
+ *
+ * (C) Copyright 2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ *
+ * Support for:
+ * - statically allocated write rare data
+ */
+
+#ifndef _LINUX_PRMEM_H
+#define _LINUX_PRMEM_H
+
+#include <linux/bug.h>
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/rcupdate.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/vmalloc.h>
+
+/* ============================ Write Rare ============================ */
+
+extern const char WR_ERR_RANGE_MSG[];
+extern const char WR_ERR_PAGE_MSG[];
+
+/*
+ * The following two variables are statically allocated by the linker
+ * script at the boundaries of the memory region (rounded up to
+ * multiples of PAGE_SIZE) reserved for __wr_after_init.
+ */
+extern long __start_wr_after_init;
+extern long __end_wr_after_init;
+
+static __always_inline bool __is_wr_after_init(const void *ptr, size_t size)
+{
+	size_t start = (size_t)&__start_wr_after_init;
+	size_t end = (size_t)&__end_wr_after_init;
+	size_t low = (size_t)ptr;
+	size_t high = (size_t)ptr + size;
+
+	return likely(start <= low && low < high && high <= end);
+}
+
+/**
+ * wr_memset() - sets n bytes of the destination to the c value
+ * @dst: beginning of the memory to write to
+ * @c: byte to replicate
+ * @n_bytes: number of bytes to set
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_memset(const void *dst, const int c, size_t n_bytes)
+{
+	size_t size;
+	unsigned long flags;
+	uintptr_t d = (uintptr_t)dst;
+
+	if (WARN(!__is_wr_after_init(dst, n_bytes), WR_ERR_RANGE_MSG))
+		return false;
+	while (n_bytes) {
+		struct page *page;
+		uintptr_t base;
+		uintptr_t offset;
+		uintptr_t offset_complement;
+
+		local_irq_save(flags);
+		page = virt_to_page(d);
+		offset = d & ~PAGE_MASK;
+		offset_complement = PAGE_SIZE - offset;
+		size = min(n_bytes, offset_complement);
+		base = (uintptr_t)vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+		if (WARN(!base, WR_ERR_PAGE_MSG)) {
+			local_irq_restore(flags);
+			return false;
+		}
+		memset((void *)(base + offset), c, size);
+		vunmap((void *)base);
+		d += size;
+		n_bytes -= size;
+		local_irq_restore(flags);
+	}
+	return true;
+}
+
+/**
+ * wr_memcpy() - copies n bytes from source to destination
+ * @dst: beginning of the memory to write to
+ * @src: beginning of the memory to read from
+ * @n_bytes: number of bytes to copy
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_memcpy(const void *dst, const void *src, size_t n_bytes)
+{
+	size_t size;
+	unsigned long flags;
+	uintptr_t d = (uintptr_t)dst;
+	uintptr_t s = (uintptr_t)src;
+
+	if (WARN(!__is_wr_after_init(dst, n_bytes), WR_ERR_RANGE_MSG))
+		return false;
+	while (n_bytes) {
+		struct page *page;
+		uintptr_t base;
+		uintptr_t offset;
+		uintptr_t offset_complement;
+
+		local_irq_save(flags);
+		page = virt_to_page(d);
+		offset = d & ~PAGE_MASK;
+		offset_complement = PAGE_SIZE - offset;
+		size = (size_t)min(n_bytes, offset_complement);
+		base = (uintptr_t)vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+		if (WARN(!base, WR_ERR_PAGE_MSG)) {
+			local_irq_restore(flags);
+			return false;
+		}
+		__write_once_size((void *)(base + offset), (void *)s, size);
+		vunmap((void *)base);
+		d += size;
+		s += size;
+		n_bytes -= size;
+		local_irq_restore(flags);
+	}
+	return true;
+}
+
+/*
+ * rcu_assign_pointer() is a macro and can therefore take the address of
+ * its destination parameter "p" and pass it to WRITE_ONCE(), which is
+ * invoked in one of its branches.
+ * To stay compatible with that API, wr_rcu_assign_pointer() is also a
+ * macro that accepts the pointer itself as parameter, rather than the
+ * address of said pointer.
+ * It is, however, simply a wrapper around __wr_rcu_ptr(), which does
+ * receive the address of the pointer.
+ */
+static __always_inline
+uintptr_t __wr_rcu_ptr(const void *dst_p_p, const void *src_p)
+{
+	unsigned long flags;
+	struct page *page;
+	void *base;
+	uintptr_t offset;
+	const size_t size = sizeof(void *);
+
+	if (WARN(!__is_wr_after_init(dst_p_p, size), WR_ERR_RANGE_MSG))
+		return (uintptr_t)NULL;
+	local_irq_save(flags);
+	page = virt_to_page(dst_p_p);
+	offset = (uintptr_t)dst_p_p & ~PAGE_MASK;
+	base = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
+	if (WARN(!base, WR_ERR_PAGE_MSG)) {
+		local_irq_restore(flags);
+		return (uintptr_t)NULL;
+	}
+	rcu_assign_pointer((*(void **)(offset + (uintptr_t)base)), src_p);
+	vunmap(base);
+	local_irq_restore(flags);
+	return (uintptr_t)src_p;
+}
+
+#define wr_rcu_assign_pointer(p, v)	__wr_rcu_ptr(&p, v)
+
+#define __wr_simple(dst_ptr, src_ptr)					\
+	wr_memcpy(dst_ptr, src_ptr, sizeof(*(src_ptr)))
+
+#define __wr_safe(dst_ptr, src_ptr,					\
+		  unique_dst_ptr, unique_src_ptr)			\
+({									\
+	typeof(dst_ptr) unique_dst_ptr = (dst_ptr);			\
+	typeof(src_ptr) unique_src_ptr = (src_ptr);			\
+									\
+	wr_memcpy(unique_dst_ptr, unique_src_ptr,			\
+		  sizeof(*(unique_src_ptr)));				\
+})
+
+#define __safe_ops(dst, src)	\
+	(__typecheck(dst, src) && __no_side_effects(dst, src))
+
+/**
+ * wr - copies an object over another of same type and size
+ * @dst_ptr: address of the destination object
+ * @src_ptr: address of the source object
+ */
+#define wr(dst_ptr, src_ptr)						\
+	__builtin_choose_expr(__safe_ops(dst_ptr, src_ptr),		\
+			      __wr_simple(dst_ptr, src_ptr),		\
+			      __wr_safe(dst_ptr, src_ptr,		\
+					__UNIQUE_ID(__dst_ptr),		\
+					__UNIQUE_ID(__src_ptr)))
+
+/**
+ * wr_ptr() - alters a pointer in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ptr(const void *dst, const void *val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+#endif
diff --git a/mm/Makefile b/mm/Makefile
index 26ef77a3883b..215c6a6d7304 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,6 +64,7 @@ obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_PRMEM) += prmem.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/prmem.c b/mm/prmem.c
new file mode 100644
index 000000000000..de9258f5f29a
--- /dev/null
+++ b/mm/prmem.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * prmem.c: Memory Protection Library
+ *
+ * (C) Copyright 2017-2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ */
+
+const char WR_ERR_RANGE_MSG[] = "Write rare on invalid memory range.";
+const char WR_ERR_PAGE_MSG[] = "Failed to remap write rare page.";
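For RCU-protected pointers, wr_rcu_assign_pointer() keeps the usual
publish semantics. A sketch under the same assumptions as above (the
__wr_after_init attribute and all names are illustrative):

	static struct config __rcu *live_config __wr_after_init;

	static void publish_config(struct config *new_cfg)
	{
		/* Readers keep using rcu_dereference(live_config). */
		wr_rcu_assign_pointer(live_config, new_cfg);
	}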

From patchwork Tue Oct 23 21:34:50 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653699
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Andrew Morton, Chintan Pandya, Joe Perches, "Luis R. Rodriguez",
 Thomas Gleixner, Kate Stewart, Greg Kroah-Hartman,
 Philippe Ombredanne, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Rodriguez" , Thomas Gleixner , Kate Stewart , Greg Kroah-Hartman , Philippe Ombredanne , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 03/17] prmem: vmalloc support for dynamic allocation Date: Wed, 24 Oct 2018 00:34:50 +0300 Message-Id: <20181023213504.28905-4-igor.stoppa@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com> References: <20181023213504.28905-1-igor.stoppa@huawei.com> Reply-To: Igor Stoppa X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Prepare vmalloc for: - tagging areas used for dynamic allocation of protected memory - supporting various tags, related to the property that an area might have - extrapolating the pool containing a given area - chaining the areas in each pool - extrapolating the area containing a given memory address NOTE: Since there is a list_head structure that is used only when disposing of the allocation (the field purge_list), there are two pointers for the take, before it comes the time of freeing the allocation. To avoid increasing the size of the vmap_area structure, instead of using a standard doubly linked list for tracking the chain of vmap_areas, only one pointer is spent for this purpose, in a single linked list, while the other is used to provide a direct connection to the parent pool. Signed-off-by: Igor Stoppa CC: Michal Hocko CC: Andrew Morton CC: Chintan Pandya CC: Joe Perches CC: "Luis R. Rodriguez" CC: Thomas Gleixner CC: Kate Stewart CC: Greg Kroah-Hartman CC: Philippe Ombredanne CC: linux-mm@kvack.org CC: linux-kernel@vger.kernel.org --- include/linux/vmalloc.h | 12 +++++++++++- mm/vmalloc.c | 2 +- 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 398e9c95cd61..4d14a3b8089e 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -21,6 +21,9 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ +#define VM_PMALLOC 0x00000100 /* pmalloc area - see docs */ +#define VM_PMALLOC_WR 0x00000200 /* pmalloc write rare area */ +#define VM_PMALLOC_PROTECTED 0x00000400 /* pmalloc protected area */ /* bits [20..32] reserved for arch specific ioremap internals */ /* @@ -48,7 +51,13 @@ struct vmap_area { unsigned long flags; struct rb_node rb_node; /* address sorted rbtree */ struct list_head list; /* address sorted list */ - struct llist_node purge_list; /* "lazy purge" list */ + union { + struct llist_node purge_list; /* "lazy purge" list */ + struct { + struct vmap_area *next; + struct pmalloc_pool *pool; + }; + }; struct vm_struct *vm; struct rcu_head rcu_head; }; @@ -134,6 +143,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size, const void *caller); extern struct vm_struct *remove_vm_area(const void *addr); extern struct vm_struct *find_vm_area(const void *addr); +extern struct vmap_area *find_vmap_area(unsigned long addr); extern int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages); diff --git a/mm/vmalloc.c b/mm/vmalloc.c index a728fc492557..15850005fea5 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -742,7 +742,7 @@ static void free_unmap_vmap_area(struct vmap_area *va) free_vmap_area_noflush(va); } -static 
 include/linux/vmalloc.h | 12 +++++++++++-
 mm/vmalloc.c            |  2 +-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..4d14a3b8089e 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -21,6 +21,9 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_PMALLOC		0x00000100	/* pmalloc area - see docs */
+#define VM_PMALLOC_WR		0x00000200	/* pmalloc write rare area */
+#define VM_PMALLOC_PROTECTED	0x00000400	/* pmalloc protected area */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -48,7 +51,13 @@ struct vmap_area {
 	unsigned long flags;
 	struct rb_node rb_node;		/* address sorted rbtree */
 	struct list_head list;		/* address sorted list */
-	struct llist_node purge_list;	/* "lazy purge" list */
+	union {
+		struct llist_node purge_list;	/* "lazy purge" list */
+		struct {
+			struct vmap_area *next;
+			struct pmalloc_pool *pool;
+		};
+	};
 	struct vm_struct *vm;
 	struct rcu_head rcu_head;
 };
@@ -134,6 +143,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					const void *caller);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
+extern struct vmap_area *find_vmap_area(unsigned long addr);
 
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a728fc492557..15850005fea5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -742,7 +742,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	free_vmap_area_noflush(va);
 }
 
-static struct vmap_area *find_vmap_area(unsigned long addr)
+struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;

From patchwork Tue Oct 23 21:34:52 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653701
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Vlastimil Babka, "Kirill A. Shutemov", Andrew Morton, Pavel Tatashin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Shutemov" , Andrew Morton , Pavel Tatashin , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 05/17] prmem: shorthands for write rare on common types Date: Wed, 24 Oct 2018 00:34:52 +0300 Message-Id: <20181023213504.28905-6-igor.stoppa@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com> References: <20181023213504.28905-1-igor.stoppa@huawei.com> Reply-To: Igor Stoppa X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Wrappers around the basic write rare functionality, addressing several common data types found in the kernel, allowing to specify the new values through immediates, like constants and defines. Note: The list is not complete and could be expanded. Signed-off-by: Igor Stoppa CC: Michal Hocko CC: Vlastimil Babka CC: "Kirill A. Shutemov" CC: Andrew Morton CC: Pavel Tatashin CC: linux-mm@kvack.org CC: linux-kernel@vger.kernel.org --- MAINTAINERS | 1 + include/linux/prmemextra.h | 133 +++++++++++++++++++++++++++++++++++++ 2 files changed, 134 insertions(+) create mode 100644 include/linux/prmemextra.h diff --git a/MAINTAINERS b/MAINTAINERS index e566c5d09faf..df7221eca160 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9459,6 +9459,7 @@ M: Igor Stoppa L: kernel-hardening@lists.openwall.com S: Maintained F: include/linux/prmem.h +F: include/linux/prmemextra.h F: mm/prmem.c MEMORY MANAGEMENT diff --git a/include/linux/prmemextra.h b/include/linux/prmemextra.h new file mode 100644 index 000000000000..36995717720e --- /dev/null +++ b/include/linux/prmemextra.h @@ -0,0 +1,133 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * prmemextra.h: Shorthands for write rare of basic data types + * + * (C) Copyright 2018 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa + * + */ + +#ifndef _LINUX_PRMEMEXTRA_H +#define _LINUX_PRMEMEXTRA_H + +#include + +/** + * wr_char - alters a char in write rare memory + * @dst: target for write + * @val: new value + * + * Returns true on success, false otherwise. + */ +static __always_inline +bool wr_char(const char *dst, const char val) +{ + return wr_memcpy(dst, &val, sizeof(val)); +} + +/** + * wr_short - alters a short in write rare memory + * @dst: target for write + * @val: new value + * + * Returns true on success, false otherwise. + */ +static __always_inline +bool wr_short(const short *dst, const short val) +{ + return wr_memcpy(dst, &val, sizeof(val)); +} + +/** + * wr_ushort - alters an unsigned short in write rare memory + * @dst: target for write + * @val: new value + * + * Returns true on success, false otherwise. + */ +static __always_inline +bool wr_ushort(const unsigned short *dst, const unsigned short val) +{ + return wr_memcpy(dst, &val, sizeof(val)); +} + +/** + * wr_int - alters an int in write rare memory + * @dst: target for write + * @val: new value + * + * Returns true on success, false otherwise. + */ +static __always_inline +bool wr_int(const int *dst, const int val) +{ + return wr_memcpy(dst, &val, sizeof(val)); +} + +/** + * wr_uint - alters an unsigned int in write rare memory + * @dst: target for write + * @val: new value + * + * Returns true on success, false otherwise. 
 MAINTAINERS                |   1 +
 include/linux/prmemextra.h | 133 +++++++++++++++++++++++++++++++++++++
 2 files changed, 134 insertions(+)
 create mode 100644 include/linux/prmemextra.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e566c5d09faf..df7221eca160 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9459,6 +9459,7 @@ M:	Igor Stoppa
 L:	kernel-hardening@lists.openwall.com
 S:	Maintained
 F:	include/linux/prmem.h
+F:	include/linux/prmemextra.h
 F:	mm/prmem.c
 
 MEMORY MANAGEMENT
diff --git a/include/linux/prmemextra.h b/include/linux/prmemextra.h
new file mode 100644
index 000000000000..36995717720e
--- /dev/null
+++ b/include/linux/prmemextra.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * prmemextra.h: Shorthands for write rare of basic data types
+ *
+ * (C) Copyright 2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ *
+ */
+
+#ifndef _LINUX_PRMEMEXTRA_H
+#define _LINUX_PRMEMEXTRA_H
+
+#include <linux/prmem.h>
+
+/**
+ * wr_char - alters a char in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_char(const char *dst, const char val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_short - alters a short in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_short(const short *dst, const short val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_ushort - alters an unsigned short in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ushort(const unsigned short *dst, const unsigned short val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_int - alters an int in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_int(const int *dst, const int val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_uint - alters an unsigned int in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_uint(const unsigned int *dst, const unsigned int val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_long - alters a long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_long(const long *dst, const long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_ulong - alters an unsigned long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ulong(const unsigned long *dst, const unsigned long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_longlong - alters a long long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_longlong(const long long *dst, const long long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_ulonglong - alters an unsigned long long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ulonglong(const unsigned long long *dst,
+		  const unsigned long long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+#endif

From patchwork Tue Oct 23 21:34:53 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653705
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Vlastimil Babka, "Kirill A. Shutemov", Andrew Morton, Pavel Tatashin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/17] prmem: test cases for memory protection
Date: Wed, 24 Oct 2018 00:34:53 +0300
Message-Id: <20181023213504.28905-7-igor.stoppa@huawei.com>
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

The test cases verify the various interfaces offered by both prmem.h
and prmemextra.h.

The tests avoid triggering crashes by not performing actions that would
be treated as illegal. That part is handled in the lkdtm patch.

Signed-off-by: Igor Stoppa
CC: Michal Hocko
CC: Vlastimil Babka
CC: "Kirill A. Shutemov"
CC: Andrew Morton
CC: Pavel Tatashin
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
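For reference, a sketch of the kind of illegal access these tests
deliberately avoid (exercising it is left to the lkdtm patch mentioned
above); pool creation and protection follow the API used by the tests
below, the function itself is hypothetical:

	static void __init example_illegal_write(void)
	{
		struct pmalloc_pool *pool;
		int *val;

		pool = pmalloc_create_pool(PMALLOC_MODE_RO);
		val = pmalloc(pool, sizeof(*val));
		*val = 1;		/* fine: pool not yet protected */
		pmalloc_protect_pool(pool);
		*val = 2;		/* would oops: page is now read-only */
	}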
Shutemov" CC: Andrew Morton CC: Pavel Tatashin CC: linux-mm@kvack.org CC: linux-kernel@vger.kernel.org --- MAINTAINERS | 2 + mm/Kconfig.debug | 9 + mm/Makefile | 1 + mm/test_pmalloc.c | 633 +++++++++++++++++++++++++++++++++++++++++++ mm/test_write_rare.c | 236 ++++++++++++++++ 5 files changed, 881 insertions(+) create mode 100644 mm/test_pmalloc.c create mode 100644 mm/test_write_rare.c diff --git a/MAINTAINERS b/MAINTAINERS index df7221eca160..ea979a5a9ec9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9461,6 +9461,8 @@ S: Maintained F: include/linux/prmem.h F: include/linux/prmemextra.h F: mm/prmem.c +F: mm/test_write_rare.c +F: mm/test_pmalloc.c MEMORY MANAGEMENT L: linux-mm@kvack.org diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug index 9a7b8b049d04..57de5b3c0bae 100644 --- a/mm/Kconfig.debug +++ b/mm/Kconfig.debug @@ -94,3 +94,12 @@ config DEBUG_RODATA_TEST depends on STRICT_KERNEL_RWX ---help--- This option enables a testcase for the setting rodata read-only. + +config DEBUG_PRMEM_TEST + tristate "Run self test for protected memory" + depends on STRICT_KERNEL_RWX + select PRMEM + default n + help + Tries to verify that the memory protection works correctly and that + the memory is effectively protected. diff --git a/mm/Makefile b/mm/Makefile index 215c6a6d7304..93b503d4659f 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -65,6 +65,7 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o obj-$(CONFIG_SLOB) += slob.o obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o obj-$(CONFIG_PRMEM) += prmem.o +obj-$(CONFIG_DEBUG_PRMEM_TEST) += test_write_rare.o test_pmalloc.o obj-$(CONFIG_KSM) += ksm.o obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o diff --git a/mm/test_pmalloc.c b/mm/test_pmalloc.c new file mode 100644 index 000000000000..f9ee8fb29eea --- /dev/null +++ b/mm/test_pmalloc.c @@ -0,0 +1,633 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * test_pmalloc.c + * + * (C) Copyright 2018 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa + */ + +#include +#include +#include +#include +#include +#include + +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#define SIZE_1 (PAGE_SIZE * 3) +#define SIZE_2 1000 + +static const char MSG_NO_POOL[] = "Cannot allocate memory for the pool."; +static const char MSG_NO_PMEM[] = "Cannot allocate memory from the pool."; + +#define pr_success(test_name) \ + pr_info(test_name " test passed") + +/* --------------- tests the basic life-cycle of a pool --------------- */ + +static bool is_address_protected(void *p) +{ + struct page *page; + struct vmap_area *area; + + if (unlikely(!is_vmalloc_addr(p))) + return false; + page = vmalloc_to_page(p); + if (unlikely(!page)) + return false; + wmb(); /* Flush changes to the page table - is it needed? 
*/ + area = find_vmap_area((uintptr_t)p); + if (unlikely((!area) || (!area->vm) || + ((area->vm->flags & VM_PMALLOC_PROTECTED_MASK) != + VM_PMALLOC_PROTECTED_MASK))) + return false; + return true; +} + +static bool create_and_destroy_pool(void) +{ + static struct pmalloc_pool *pool; + + pool = pmalloc_create_pool(PMALLOC_MODE_RO); + if (WARN(!pool, MSG_NO_POOL)) + return false; + pmalloc_destroy_pool(pool); + pr_success("pool creation and destruction"); + return true; +} + +/* verifies that it's possible to allocate from the pool */ +static bool test_alloc(void) +{ + static struct pmalloc_pool *pool; + static void *p; + + pool = pmalloc_create_pool(PMALLOC_MODE_RO); + if (WARN(!pool, MSG_NO_POOL)) + return false; + p = pmalloc(pool, SIZE_1 - 1); + pmalloc_destroy_pool(pool); + if (WARN(!p, MSG_NO_PMEM)) + return false; + pr_success("allocation capability"); + return true; +} + +/* ----------------------- tests self protection ----------------------- */ + +static bool test_auto_ro(void) +{ + struct pmalloc_pool *pool; + int *first_chunk; + int *second_chunk; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_AUTO_RO); + if (WARN(!pool, MSG_NO_POOL)) + return false; + first_chunk = (int *)pmalloc(pool, PMALLOC_DEFAULT_REFILL_SIZE); + if (WARN(!first_chunk, MSG_NO_PMEM)) + goto error; + second_chunk = (int *)pmalloc(pool, PMALLOC_DEFAULT_REFILL_SIZE); + if (WARN(!second_chunk, MSG_NO_PMEM)) + goto error; + if (WARN(!is_address_protected(first_chunk), + "Failed to automatically write protect exhausted vmarea")) + goto error; + pr_success("AUTO_RO"); + retval = true; +error: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_auto_wr(void) +{ + struct pmalloc_pool *pool; + int *first_chunk; + int *second_chunk; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_AUTO_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + first_chunk = (int *)pmalloc(pool, PMALLOC_DEFAULT_REFILL_SIZE); + if (WARN(!first_chunk, MSG_NO_PMEM)) + goto error; + second_chunk = (int *)pmalloc(pool, PMALLOC_DEFAULT_REFILL_SIZE); + if (WARN(!second_chunk, MSG_NO_PMEM)) + goto error; + if (WARN(!is_address_protected(first_chunk), + "Failed to automatically write protect exhausted vmarea")) + goto error; + pr_success("AUTO_WR"); + retval = true; +error: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_start_wr(void) +{ + struct pmalloc_pool *pool; + int *chunks[2]; + bool retval = false; + int i; + + pool = pmalloc_create_pool(PMALLOC_MODE_START_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + for (i = 0; i < 2; i++) { + chunks[i] = (int *)pmalloc(pool, 1); + if (WARN(!chunks[i], MSG_NO_PMEM)) + goto error; + if (WARN(!is_address_protected(chunks[i]), + "vmarea was not protected from the start")) + goto error; + } + if (WARN(vmalloc_to_page(chunks[0]) != vmalloc_to_page(chunks[1]), + "START_WR: mostly empty vmap area not reused")) + goto error; + pr_success("START_WR"); + retval = true; +error: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_self_protection(void) +{ + if (WARN(!(test_auto_ro() && + test_auto_wr() && + test_start_wr()), + "self protection tests failed")) + return false; + pr_success("self protection"); + return true; +} + +/* ----------------- tests basic write rare functions ----------------- */ + +#define INSERT_OFFSET (PAGE_SIZE * 3 / 2) +#define INSERT_SIZE (PAGE_SIZE * 2) +#define REGION_SIZE (PAGE_SIZE * 5) +static bool test_wr_memset(void) +{ + struct pmalloc_pool *pool; + char *region; + unsigned int i; 
+ int retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_START_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + region = pzalloc(pool, REGION_SIZE); + if (WARN(!region, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < REGION_SIZE; i++) + if (WARN(region[i], "Failed to memset wr memory")) + goto destroy_pool; + retval = !wr_memset(region + INSERT_OFFSET, 1, INSERT_SIZE); + if (WARN(retval, "wr_memset failed")) + goto destroy_pool; + for (i = 0; i < REGION_SIZE; i++) + if (i >= INSERT_OFFSET && + i < (INSERT_SIZE + INSERT_OFFSET)) { + if (WARN(!region[i], + "Failed to alter target area")) + goto destroy_pool; + } else { + if (WARN(region[i] != 0, + "Unexpected alteration outside region")) + goto destroy_pool; + } + retval = true; + pr_success("wr_memset"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_strdup(void) +{ + const char src[] = "Some text for testing pstrdup()"; + struct pmalloc_pool *pool; + char *dst; + int retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + dst = pstrdup(pool, src); + if (WARN(!dst || strcmp(src, dst), "pmalloc wr strdup failed")) + goto destroy_pool; + retval = true; + pr_success("pmalloc wr strdup"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +/* Verify write rare across multiple pages, unaligned to PAGE_SIZE. */ +static bool test_wr_copy(void) +{ + struct pmalloc_pool *pool; + char *region; + char *mod; + unsigned int i; + int retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + region = pzalloc(pool, REGION_SIZE); + if (WARN(!region, MSG_NO_PMEM)) + goto destroy_pool; + mod = vmalloc(INSERT_SIZE); + if (WARN(!mod, "Failed to allocate memory from vmalloc")) + goto destroy_pool; + memset(mod, 0xA5, INSERT_SIZE); + pmalloc_protect_pool(pool); + retval = !wr_memcpy(region + INSERT_OFFSET, mod, INSERT_SIZE); + if (WARN(retval, "wr_copy failed")) + goto free_mod; + + for (i = 0; i < REGION_SIZE; i++) + if (i >= INSERT_OFFSET && + i < (INSERT_SIZE + INSERT_OFFSET)) { + if (WARN(region[i] != (char)0xA5, + "Failed to alter target area")) + goto free_mod; + } else { + if (WARN(region[i] != 0, + "Unexpected alteration outside region")) + goto free_mod; + } + retval = true; + pr_success("wr_copy"); +free_mod: + vfree(mod); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +/* ----------------- tests specialized write actions ------------------- */ + +#define TEST_ARRAY_SIZE 5 +#define TEST_ARRAY_TARGET (TEST_ARRAY_SIZE / 2) + +static bool test_wr_char(void) +{ + struct pmalloc_pool *pool; + char *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(char) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = (char)0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_char(array + TEST_ARRAY_TARGET, (char)0x5A), + "Failed to alter char variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 
+ (char)0x5A : (char)0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_char"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_short(void) +{ + struct pmalloc_pool *pool; + short *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(short) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = (short)0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_short(array + TEST_ARRAY_TARGET, (short)0x5A), + "Failed to alter short variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? + (short)0x5A : (short)0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_short"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_ushort(void) +{ + struct pmalloc_pool *pool; + unsigned short *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(unsigned short) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = (unsigned short)0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_ushort(array + TEST_ARRAY_TARGET, + (unsigned short)0x5A), + "Failed to alter unsigned short variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? + (unsigned short)0x5A : + (unsigned short)0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_ushort"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_int(void) +{ + struct pmalloc_pool *pool; + int *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(int) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = 0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_int(array + TEST_ARRAY_TARGET, 0x5A), + "Failed to alter int variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 0x5A : 0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_int"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_uint(void) +{ + struct pmalloc_pool *pool; + unsigned int *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(unsigned int) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = 0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_uint(array + TEST_ARRAY_TARGET, 0x5A), + "Failed to alter unsigned int variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 
0x5A : 0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_uint"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_long(void) +{ + struct pmalloc_pool *pool; + long *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(long) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = 0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_long(array + TEST_ARRAY_TARGET, 0x5A), + "Failed to alter long variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 0x5A : 0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_long"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_ulong(void) +{ + struct pmalloc_pool *pool; + unsigned long *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(unsigned long) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = 0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_ulong(array + TEST_ARRAY_TARGET, 0x5A), + "Failed to alter unsigned long variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 0x5A : 0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_ulong"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_longlong(void) +{ + struct pmalloc_pool *pool; + long long *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(long long) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = 0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_longlong(array + TEST_ARRAY_TARGET, 0x5A), + "Failed to alter long variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 0x5A : 0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_longlong"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_ulonglong(void) +{ + struct pmalloc_pool *pool; + unsigned long long *array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(unsigned long long) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = 0xA5; + pmalloc_protect_pool(pool); + if (WARN(!wr_ulonglong(array + TEST_ARRAY_TARGET, 0x5A), + "Failed to alter unsigned long long variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? 
0x5A : 0xA5), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_ulonglong"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; +} + +static bool test_wr_ptr(void) +{ + struct pmalloc_pool *pool; + int **array; + unsigned int i; + bool retval = false; + + pool = pmalloc_create_pool(PMALLOC_MODE_WR); + if (WARN(!pool, MSG_NO_POOL)) + return false; + array = pmalloc(pool, sizeof(int *) * TEST_ARRAY_SIZE); + if (WARN(!array, MSG_NO_PMEM)) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + array[i] = NULL; + pmalloc_protect_pool(pool); + if (WARN(!wr_ptr(array + TEST_ARRAY_TARGET, array), + "Failed to alter ptr variable")) + goto destroy_pool; + for (i = 0; i < TEST_ARRAY_SIZE; i++) + if (WARN(array[i] != (i == TEST_ARRAY_TARGET ? + (void *)array : NULL), + "Unexpected value in test array.")) + goto destroy_pool; + retval = true; + pr_success("wr_ptr"); +destroy_pool: + pmalloc_destroy_pool(pool); + return retval; + +} + +static bool test_specialized_wrs(void) +{ + if (WARN(!(test_wr_char() && + test_wr_short() && + test_wr_ushort() && + test_wr_int() && + test_wr_uint() && + test_wr_long() && + test_wr_ulong() && + test_wr_longlong() && + test_wr_ulonglong() && + test_wr_ptr()), + "specialized write rare failed")) + return false; + pr_success("specialized write rare"); + return true; + +} + +/* + * test_pmalloc() -main entry point for running the test cases + */ +static int __init test_pmalloc_init_module(void) +{ + if (WARN(!(create_and_destroy_pool() && + test_alloc() && + test_self_protection() && + test_wr_memset() && + test_wr_strdup() && + test_wr_copy() && + test_specialized_wrs()), + "protected memory allocator test failed")) + return -EFAULT; + pr_success("protected memory allocator"); + return 0; +} + +module_init(test_pmalloc_init_module); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Igor Stoppa "); +MODULE_DESCRIPTION("Test module for pmalloc."); diff --git a/mm/test_write_rare.c b/mm/test_write_rare.c new file mode 100644 index 000000000000..e19473bb319b --- /dev/null +++ b/mm/test_write_rare.c @@ -0,0 +1,236 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * test_write_rare.c + * + * (C) Copyright 2018 Huawei Technologies Co. Ltd. + * Author: Igor Stoppa + * + * Caveat: the tests which perform modifications are run *during* init, so + * the memory they use could be still altered through a direct write + * operation. But the purpose of these tests is to confirm that the + * modification through remapping works correctly. This doesn't depend on + * the read/write status of the original mapping. 
+ */
+
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/bug.h>
+#include <linux/prmem.h>
+
+#ifdef pr_fmt
+#undef pr_fmt
+#endif
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#define pr_success(test_name) \
+	pr_info(test_name " test passed\n")
+
+static int scalar __wr_after_init = 0xA5A5;
+
+/* The section must occupy a non-zero number of whole pages */
+static bool test_alignment(void)
+{
+	size_t pstart = (size_t)&__start_wr_after_init;
+	size_t pend = (size_t)&__end_wr_after_init;
+
+	if (WARN((pstart & ~PAGE_MASK) || (pend & ~PAGE_MASK) ||
+		 (pstart >= pend), "Boundaries test failed."))
+		return false;
+	pr_success("Boundaries");
+	return true;
+}
+
+/* Alter a scalar value */
+static bool test_simple_write(void)
+{
+	int new_val = 0x5A5A;
+
+	if (WARN(!__is_wr_after_init(&scalar, sizeof(scalar)),
+		 "The __wr_after_init modifier did NOT work."))
+		return false;
+
+	if (WARN(!wr(&scalar, &new_val) || scalar != new_val,
+		 "Scalar write rare test failed"))
+		return false;
+
+	pr_success("Scalar write rare");
+	return true;
+}
+
+#define LARGE_SIZE (PAGE_SIZE * 5)
+#define CHANGE_SIZE (PAGE_SIZE * 2)
+#define CHANGE_OFFSET (PAGE_SIZE / 2)
+
+static char large[LARGE_SIZE] __wr_after_init;
+
+/* Alter data across multiple pages */
+static bool test_cross_page_write(void)
+{
+	unsigned int i;
+	char *src;
+	bool check;
+
+	src = vmalloc(CHANGE_SIZE);
+	if (WARN(!src, "could not allocate memory"))
+		return false;
+
+	for (i = 0; i < LARGE_SIZE; i++)
+		large[i] = 0xA5;
+
+	for (i = 0; i < CHANGE_SIZE; i++)
+		src[i] = 0x5A;
+
+	check = wr_memcpy(large + CHANGE_OFFSET, src, CHANGE_SIZE);
+	vfree(src);
+	if (WARN(!check, "The wr_memcpy() failed"))
+		return false;
+
+	for (i = CHANGE_OFFSET; i < CHANGE_OFFSET + CHANGE_SIZE; i++)
+		if (WARN(large[i] != 0x5A,
+			 "Cross-page write rare test failed"))
+			return false;
+
+	pr_success("Cross-page write rare");
+	return true;
+}
+
+static bool test_memsetting(void)
+{
+	unsigned int i;
+
+	wr_memset(large, 0, LARGE_SIZE);
+	for (i = 0; i < LARGE_SIZE; i++)
+		if (WARN(large[i], "Failed to reset memory"))
+			return false;
+	wr_memset(large + CHANGE_OFFSET, 1, CHANGE_SIZE);
+	for (i = 0; i < CHANGE_OFFSET; i++)
+		if (WARN(large[i], "Failed to set memory"))
+			return false;
+	for (i = CHANGE_OFFSET; i < CHANGE_OFFSET + CHANGE_SIZE; i++)
+		if (WARN(!large[i], "Failed to set memory"))
+			return false;
+	for (i = CHANGE_OFFSET + CHANGE_SIZE; i < LARGE_SIZE; i++)
+		if (WARN(large[i], "Failed to set memory"))
+			return false;
+	pr_success("Memsetting");
+	return true;
+}
+
+#define INIT_VAL 1
+#define END_VAL 4
+
+/* Various tests for the shorthands provided for standard types. */
+static char char_var __wr_after_init = INIT_VAL;
+static bool test_char(void)
+{
+	return wr_char(&char_var, END_VAL) && char_var == END_VAL;
+}
+
+static short short_var __wr_after_init = INIT_VAL;
+static bool test_short(void)
+{
+	return wr_short(&short_var, END_VAL) &&
+	       short_var == END_VAL;
+}
+
+static unsigned short ushort_var __wr_after_init = INIT_VAL;
+static bool test_ushort(void)
+{
+	return wr_ushort(&ushort_var, END_VAL) &&
+	       ushort_var == END_VAL;
+}
+
+static int int_var __wr_after_init = INIT_VAL;
+static bool test_int(void)
+{
+	return wr_int(&int_var, END_VAL) &&
+	       int_var == END_VAL;
+}
+
+static unsigned int uint_var __wr_after_init = INIT_VAL;
+static bool test_uint(void)
+{
+	return wr_uint(&uint_var, END_VAL) &&
+	       uint_var == END_VAL;
+}
+
+static long long_var __wr_after_init = INIT_VAL;
+static bool test_long(void)
+{
+	return wr_long(&long_var, END_VAL) &&
+	       long_var == END_VAL;
+}
+
+static unsigned long ulong_var __wr_after_init = INIT_VAL;
+static bool test_ulong(void)
+{
+	return wr_ulong(&ulong_var, END_VAL) &&
+	       ulong_var == END_VAL;
+}
+
+static long long longlong_var __wr_after_init = INIT_VAL;
+static bool test_longlong(void)
+{
+	return wr_longlong(&longlong_var, END_VAL) &&
+	       longlong_var == END_VAL;
+}
+
+static unsigned long long ulonglong_var __wr_after_init = INIT_VAL;
+static bool test_ulonglong(void)
+{
+	return wr_ulonglong(&ulonglong_var, END_VAL) &&
+	       ulonglong_var == END_VAL;
+}
+
+static int referred_value = INIT_VAL;
+static int *reference __wr_after_init;
+static bool test_ptr(void)
+{
+	return wr_ptr(&reference, &referred_value) &&
+	       reference == &referred_value;
+}
+
+static int *rcu_ptr __wr_after_init __aligned(sizeof(void *));
+static bool test_rcu_ptr(void)
+{
+	uintptr_t addr = wr_rcu_assign_pointer(rcu_ptr, &referred_value);
+
+	return (addr == (uintptr_t)&referred_value) &&
+	       referred_value == *(int *)addr;
+}
+
+static bool test_specialized_write_rare(void)
+{
+	if (WARN(!(test_char() && test_short() &&
+		   test_ushort() && test_int() &&
+		   test_uint() && test_long() &&
+		   test_ulong() &&
+		   test_longlong() && test_ulonglong() &&
+		   test_ptr() && test_rcu_ptr()),
+		 "Specialized write rare test failed"))
+		return false;
+	pr_success("Specialized write rare");
+	return true;
+}
+
+static int __init test_static_wr_init_module(void)
+{
+	if (WARN(!(test_alignment() &&
+		   test_simple_write() &&
+		   test_cross_page_write() &&
+		   test_memsetting() &&
+		   test_specialized_write_rare()),
+		 "static rare-write test failed"))
+		return -EFAULT;
+	pr_success("static write_rare");
+	return 0;
+}
+
+module_init(test_static_wr_init_module);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Igor Stoppa <igor.stoppa@huawei.com>");
+MODULE_DESCRIPTION("Test module for static write rare.");

From patchwork Tue Oct 23 21:34:55 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653707
From: Igor Stoppa <igor.stoppa@huawei.com>
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Vlastimil Babka, "Kirill A. Shutemov", Andrew Morton, Pavel Tatashin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/17] prmem: struct page: track vmap_area
Date: Wed, 24 Oct 2018 00:34:55 +0300
Message-Id: <20181023213504.28905-9-igor.stoppa@huawei.com>
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

When a page is used for virtual memory, it is often necessary to obtain
a handle to the corresponding vmap_area, which refers to the virtually
contiguous area generated when invoking vmalloc.

The struct page has a "private" field, which can be re-used to store a
pointer to the parent area.

Note: in practice a virtual memory area is characterized both by a
struct vmap_area and a struct vm_struct.
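As an illustration (not part of the patch): with such a back-reference in
place, walking from a vmalloc'd address to the information held in its
vm_struct becomes a couple of dereferences. The helper below is invented
for this sketch; it relies only on is_vmalloc_addr(), vmalloc_to_page()
and the page->area field introduced here.

    /* Hypothetical helper, sketched against the page->area field. */
    static struct vm_struct *vm_struct_of(const void *ptr)
    {
        struct page *page;

        if (!is_vmalloc_addr(ptr))     /* only vmap'd memory has an area */
            return NULL;
        page = vmalloc_to_page(ptr);   /* page backing the virtual address */
        if (!page || !page->area)      /* set at allocation, cleared in __vunmap() */
            return NULL;
        return page->area->vm;         /* one-way vmap_area -> vm_struct link */
    }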
The reason for referring from a page to its vmap_area, rather than to
the vm_struct, is that the vmap_area contains a struct vm_struct *vm
field, through which the information stored in the corresponding
vm_struct can also be reached. This link, however, is unidirectional:
there is no easy way to identify the vmap_area corresponding to a given
vm_struct.

Furthermore, the struct vmap_area contains a list head node which is
normally used only while the area is queued for freeing, so it can be
put to other use during normal operations.

The direct connection between each page and its vmap_area avoids more
expensive searches through the rbtree of vmap_areas. As a consequence,
find_vmap_area can become a static function again, while the rest of
the code relies on the direct reference from struct page.

Signed-off-by: Igor Stoppa
CC: Michal Hocko
CC: Vlastimil Babka
CC: "Kirill A. Shutemov"
CC: Andrew Morton
CC: Pavel Tatashin
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 include/linux/mm_types.h | 25 ++++++++++++++++++-------
 include/linux/prmem.h    | 13 ++++++++-----
 include/linux/vmalloc.h  |  1 -
 mm/prmem.c               |  2 +-
 mm/test_pmalloc.c        | 12 ++++--------
 mm/vmalloc.c             |  9 ++++++++-
 6 files changed, 39 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5ed8f6292a53..8403bdd12d1f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -87,13 +87,24 @@ struct page {
 			/* See page-flags.h for PAGE_MAPPING_FLAGS */
 			struct address_space *mapping;
 			pgoff_t index;		/* Our offset within mapping. */
-			/**
-			 * @private: Mapping-private opaque data.
-			 * Usually used for buffer_heads if PagePrivate.
-			 * Used for swp_entry_t if PageSwapCache.
-			 * Indicates order in the buddy system if PageBuddy.
-			 */
-			unsigned long private;
+			union {
+				/**
+				 * @private: Mapping-private opaque data.
+				 * Usually used for buffer_heads if
+				 * PagePrivate.
+				 * Used for swp_entry_t if PageSwapCache.
+				 * Indicates order in the buddy system if
+				 * PageBuddy.
+				 */
+				unsigned long private;
+				/**
+				 * @area: reference to the containing area
+				 * For pages that are mapped into a virtually
+				 * contiguous area, avoids performing a more
+				 * expensive lookup.
+				 */
+				struct vmap_area *area;
+			};
 		};
 		struct {	/* slab, slob and slub */
 			union {
diff --git a/include/linux/prmem.h b/include/linux/prmem.h
index 26fd48410d97..cf713fc1c8bb 100644
--- a/include/linux/prmem.h
+++ b/include/linux/prmem.h
@@ -54,14 +54,17 @@ static __always_inline bool __is_wr_after_init(const void *ptr, size_t size)
 static __always_inline bool __is_wr_pool(const void *ptr, size_t size)
 {
-	struct vmap_area *area;
+	struct vm_struct *vm;
+	struct page *page;
 
 	if (!is_vmalloc_addr(ptr))
 		return false;
-	area = find_vmap_area((unsigned long)ptr);
-	return area && area->vm && (area->vm->size >= size) &&
-	       ((area->vm->flags & (VM_PMALLOC | VM_PMALLOC_WR)) ==
-		(VM_PMALLOC | VM_PMALLOC_WR));
+	page = vmalloc_to_page(ptr);
+	if (!(page && page->area && page->area->vm))
+		return false;
+	vm = page->area->vm;
+	return ((vm->size >= size) &&
+		((vm->flags & VM_PMALLOC_WR_MASK) == VM_PMALLOC_WR_MASK));
 }
 
 /**
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4d14a3b8089e..43a444f8b1e9 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -143,7 +143,6 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					const void *caller);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
-extern struct vmap_area *find_vmap_area(unsigned long addr);
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);
diff --git a/mm/prmem.c b/mm/prmem.c
index 7dd13ea43304..96abf04909e7 100644
--- a/mm/prmem.c
+++ b/mm/prmem.c
@@ -150,7 +150,7 @@ static int grow(struct pmalloc_pool *pool, size_t min_size)
 	if (WARN(!addr, "Failed to allocate %zd bytes", PAGE_ALIGN(size)))
 		return -ENOMEM;
 
-	new_area = find_vmap_area((uintptr_t)addr);
+	new_area = vmalloc_to_page(addr)->area;
 	tag_mask = VM_PMALLOC;
 	if (pool->mode & PMALLOC_WR)
 		tag_mask |= VM_PMALLOC_WR;
diff --git a/mm/test_pmalloc.c b/mm/test_pmalloc.c
index f9ee8fb29eea..c64872ff05ea 100644
--- a/mm/test_pmalloc.c
+++ b/mm/test_pmalloc.c
@@ -38,15 +38,11 @@ static bool is_address_protected(void *p)
 	if (unlikely(!is_vmalloc_addr(p)))
 		return false;
 	page = vmalloc_to_page(p);
-	if (unlikely(!page))
+	if (unlikely(!(page && page->area && page->area->vm)))
 		return false;
-	wmb(); /* Flush changes to the page table - is it needed? */
-	area = find_vmap_area((uintptr_t)p);
-	if (unlikely((!area) || (!area->vm) ||
-		     ((area->vm->flags & VM_PMALLOC_PROTECTED_MASK) !=
-		      VM_PMALLOC_PROTECTED_MASK)))
-		return false;
-	return true;
+	area = page->area;
+	return (area->vm->flags & VM_PMALLOC_PROTECTED_MASK) ==
+	       VM_PMALLOC_PROTECTED_MASK;
 }
 
 static bool create_and_destroy_pool(void)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 15850005fea5..ffef705f0523 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -742,7 +742,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	free_vmap_area_noflush(va);
 }
 
-struct vmap_area *find_vmap_area(unsigned long addr)
+static struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
@@ -1523,6 +1523,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			struct page *page = area->pages[i];
 
 			BUG_ON(!page);
+			page->area = NULL;
 			__free_pages(page, 0);
 		}
 
@@ -1731,8 +1732,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller)
 {
 	struct vm_struct *area;
+	struct vmap_area *va;
 	void *addr;
 	unsigned long real_size = size;
+	unsigned int i;
 
 	size = PAGE_ALIGN(size);
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
@@ -1747,6 +1750,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!addr)
 		return NULL;
 
+	va = __find_vmap_area((unsigned long)addr);
+	for (i = 0; i < va->vm->nr_pages; i++)
+		va->vm->pages[i]->area = va;
+
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.

From patchwork Tue Oct 23 21:34:56 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653709
From: Igor Stoppa <igor.stoppa@huawei.com>
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Chris von Recklinghausen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/17] prmem: hardened usercopy
Date: Wed, 24 Oct 2018 00:34:56 +0300
Message-Id: <20181023213504.28905-10-igor.stoppa@huawei.com>
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

Prevent leaks of protected memory to userspace. Protection against
overwrites from userspace is already available, once the memory is
write protected.
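To make the effect concrete, here is a hypothetical call path (not part
of the patch; leaky_read() and its arguments are invented for
illustration, and <linux/uaccess.h> is assumed) that the new check would
intercept, given CONFIG_HARDENED_USERCOPY=y and a copy size that is not
a compile-time constant:

    /* Hypothetical example of a leak that the check below would stop. */
    static long leaky_read(void __user *ubuf, const char *wr_obj, size_t len)
    {
        /*
         * copy_to_user() -> check_object_size() -> __check_object_size()
         * -> check_pmalloc_object(): aborts if wr_obj lies in
         * __wr_after_init data or in a write-rare pool.
         */
        return copy_to_user(ubuf, wr_obj, len) ? -EFAULT : 0;
    }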
Signed-off-by: Igor Stoppa
CC: Kees Cook
CC: Chris von Recklinghausen
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 include/linux/prmem.h | 24 ++++++++++++++++++++++++
 mm/usercopy.c         |  5 +++++
 2 files changed, 29 insertions(+)

diff --git a/include/linux/prmem.h b/include/linux/prmem.h
index cf713fc1c8bb..919d853ddc15 100644
--- a/include/linux/prmem.h
+++ b/include/linux/prmem.h
@@ -273,6 +273,30 @@ struct pmalloc_pool {
 	uint8_t mode;
 };
 
+void __noreturn usercopy_abort(const char *name, const char *detail,
+			       bool to_user, unsigned long offset,
+			       unsigned long len);
+
+/**
+ * check_pmalloc_object() - helper for hardened usercopy
+ * @ptr: the beginning of the memory to check
+ * @n: the size of the memory to check
+ * @to_user: copy to userspace or from userspace
+ *
+ * If the check is ok, it will fall-through, otherwise it will abort.
+ * The function is inlined, to minimize the performance impact of the
+ * extra check that can end up on a hot path.
+ * Non-exhaustive micro benchmarking with QEMU x86_64 shows a reduction of
+ * the time spent in this fragment by 60%, when inlined.
+ */
+static inline
+void check_pmalloc_object(const void *ptr, unsigned long n, bool to_user)
+{
+	if (unlikely(__is_wr_after_init(ptr, n) || __is_wr_pool(ptr, n)))
+		usercopy_abort("pmalloc", "accessing pmalloc obj", to_user,
+			       (const unsigned long)ptr, n);
+}
+
 /*
  * The write rare functionality is fully implemented as __always_inline,
  * to prevent having an internal function call that is capable of modifying
diff --git a/mm/usercopy.c b/mm/usercopy.c
index 852eb4e53f06..a080dd37b684 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -22,8 +22,10 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
+#include <linux/prmem.h>
 #include <asm/sections.h>
 
+
 /*
  * Checks if a given pointer and length is contained by the current
  * stack frame (if possible).
@@ -284,6 +286,9 @@ void __check_object_size(const void *ptr, unsigned long n, bool to_user)
 
 	/* Check for object in kernel to avoid text exposure. */
 	check_kernel_text_object((const unsigned long)ptr, n, to_user);
+
+	/* Check if object is from a pmalloc chunk. */
+	check_pmalloc_object(ptr, n, to_user);
 }
 EXPORT_SYMBOL(__check_object_size);
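For reference, a minimal usage sketch of the pool API that this check
guards (the cfg structure and the two function names are invented; the
pmalloc/wr calls are the ones exercised by the test modules above):

    static struct pmalloc_pool *pool;
    static struct cfg { int mode; } *cfg;

    static int __init cfg_init(void)
    {
        pool = pmalloc_create_pool(PMALLOC_MODE_WR);
        if (!pool)
            return -ENOMEM;
        cfg = pzalloc(pool, sizeof(*cfg));
        if (!cfg) {
            pmalloc_destroy_pool(pool);
            return -ENOMEM;
        }
        cfg->mode = 1;                  /* plain write: pool not protected yet */
        pmalloc_protect_pool(pool);     /* from here on, direct writes fault */
        return 0;
    }

    /* Rare updates must go through the write-rare helpers. */
    static bool cfg_set_mode(int mode)
    {
        return wr_int(&cfg->mode, mode);
    }

Once the pool is protected, a runtime-sized copy of cfg to or from
userspace would additionally be caught by check_pmalloc_object().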