From patchwork Tue Oct 23 21:34:55 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653707
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Shutemov" , Andrew Morton , Pavel Tatashin , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 08/17] prmem: struct page: track vmap_area Date: Wed, 24 Oct 2018 00:34:55 +0300 Message-Id: <20181023213504.28905-9-igor.stoppa@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com> References: <20181023213504.28905-1-igor.stoppa@huawei.com> Reply-To: Igor Stoppa X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP When a page is used for virtual memory, it is often necessary to obtain a handler to the corresponding vmap_area, which refers to the virtually continuous area generated, when invoking vmalloc. The struct page has a "private" field, which can be re-used, to store a pointer to the parent area. Note: in practice a virtual memory area is characterized both by a struct vmap_area and a struct vm_struct. The reason for referring from a page to its vmap_area, rather than to the vm_struct, is that the vmap_area contains a struct vm_struct *vm field, which can be used to reach also the information stored in the corresponding vm_struct. This link, however, is unidirectional, and it's not possible to easily identify the corresponding vmap_area, given a reference to a vm_struct. Furthermore, the struct vmap_area contains a list head node which is normally used only when it's queued for free and can be put to some other use during normal operations. The connection between each page and its vmap_area avoids more expensive searches through the btree of vmap_areas. Therefore, it is possible for find_vamp_area to be again a static function, while the rest of the code will rely on the direct reference from struct page. Signed-off-by: Igor Stoppa CC: Michal Hocko CC: Vlastimil Babka CC: "Kirill A. Shutemov" CC: Andrew Morton CC: Pavel Tatashin CC: linux-mm@kvack.org CC: linux-kernel@vger.kernel.org --- include/linux/mm_types.h | 25 ++++++++++++++++++------- include/linux/prmem.h | 13 ++++++++----- include/linux/vmalloc.h | 1 - mm/prmem.c | 2 +- mm/test_pmalloc.c | 12 ++++-------- mm/vmalloc.c | 9 ++++++++- 6 files changed, 39 insertions(+), 23 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 5ed8f6292a53..8403bdd12d1f 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -87,13 +87,24 @@ struct page { /* See page-flags.h for PAGE_MAPPING_FLAGS */ struct address_space *mapping; pgoff_t index; /* Our offset within mapping. */ - /** - * @private: Mapping-private opaque data. - * Usually used for buffer_heads if PagePrivate. - * Used for swp_entry_t if PageSwapCache. - * Indicates order in the buddy system if PageBuddy. - */ - unsigned long private; + union { + /** + * @private: Mapping-private opaque data. + * Usually used for buffer_heads if + * PagePrivate. + * Used for swp_entry_t if PageSwapCache. + * Indicates order in the buddy system if + * PageBuddy. + */ + unsigned long private; + /** + * @area: reference to the containing area + * For pages that are mapped into a virtually + * contiguous area, avoids performing a more + * expensive lookup. 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5ed8f6292a53..8403bdd12d1f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -87,13 +87,24 @@ struct page {
 			/* See page-flags.h for PAGE_MAPPING_FLAGS */
 			struct address_space *mapping;
 			pgoff_t index;		/* Our offset within mapping. */
-			/**
-			 * @private: Mapping-private opaque data.
-			 * Usually used for buffer_heads if PagePrivate.
-			 * Used for swp_entry_t if PageSwapCache.
-			 * Indicates order in the buddy system if PageBuddy.
-			 */
-			unsigned long private;
+			union {
+				/**
+				 * @private: Mapping-private opaque data.
+				 * Usually used for buffer_heads if
+				 * PagePrivate.
+				 * Used for swp_entry_t if PageSwapCache.
+				 * Indicates order in the buddy system if
+				 * PageBuddy.
+				 */
+				unsigned long private;
+				/**
+				 * @area: reference to the containing area
+				 * For pages that are mapped into a virtually
+				 * contiguous area, avoids performing a more
+				 * expensive lookup.
+				 */
+				struct vmap_area *area;
+			};
 		};
 		struct {	/* slab, slob and slub */
 			union {

diff --git a/include/linux/prmem.h b/include/linux/prmem.h
index 26fd48410d97..cf713fc1c8bb 100644
--- a/include/linux/prmem.h
+++ b/include/linux/prmem.h
@@ -54,14 +54,17 @@ static __always_inline bool __is_wr_after_init(const void *ptr, size_t size)
 
 static __always_inline bool __is_wr_pool(const void *ptr, size_t size)
 {
-	struct vmap_area *area;
+	struct vm_struct *vm;
+	struct page *page;
 
 	if (!is_vmalloc_addr(ptr))
 		return false;
-	area = find_vmap_area((unsigned long)ptr);
-	return area && area->vm && (area->vm->size >= size) &&
-		((area->vm->flags & (VM_PMALLOC | VM_PMALLOC_WR)) ==
-		 (VM_PMALLOC | VM_PMALLOC_WR));
+	page = vmalloc_to_page(ptr);
+	if (!(page && page->area && page->area->vm))
+		return false;
+	vm = page->area->vm;
+	return ((vm->size >= size) &&
+		((vm->flags & VM_PMALLOC_WR_MASK) == VM_PMALLOC_WR_MASK));
 }
 
 /**

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4d14a3b8089e..43a444f8b1e9 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -143,7 +143,6 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					const void *caller);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
-extern struct vmap_area *find_vmap_area(unsigned long addr);
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);
 

diff --git a/mm/prmem.c b/mm/prmem.c
index 7dd13ea43304..96abf04909e7 100644
--- a/mm/prmem.c
+++ b/mm/prmem.c
@@ -150,7 +150,7 @@ static int grow(struct pmalloc_pool *pool, size_t min_size)
 	if (WARN(!addr, "Failed to allocate %zd bytes", PAGE_ALIGN(size)))
 		return -ENOMEM;
 
-	new_area = find_vmap_area((uintptr_t)addr);
+	new_area = vmalloc_to_page(addr)->area;
 	tag_mask = VM_PMALLOC;
 	if (pool->mode & PMALLOC_WR)
 		tag_mask |= VM_PMALLOC_WR;

diff --git a/mm/test_pmalloc.c b/mm/test_pmalloc.c
index f9ee8fb29eea..c64872ff05ea 100644
--- a/mm/test_pmalloc.c
+++ b/mm/test_pmalloc.c
@@ -38,15 +38,11 @@ static bool is_address_protected(void *p)
 	if (unlikely(!is_vmalloc_addr(p)))
 		return false;
 	page = vmalloc_to_page(p);
-	if (unlikely(!page))
+	if (unlikely(!(page && page->area && page->area->vm)))
 		return false;
-	wmb(); /* Flush changes to the page table - is it needed? */
-	area = find_vmap_area((uintptr_t)p);
-	if (unlikely((!area) || (!area->vm) ||
-		     ((area->vm->flags & VM_PMALLOC_PROTECTED_MASK) !=
-		      VM_PMALLOC_PROTECTED_MASK)))
-		return false;
-	return true;
+	area = page->area;
+	return (area->vm->flags & VM_PMALLOC_PROTECTED_MASK) ==
+		VM_PMALLOC_PROTECTED_MASK;
 }
 
 static bool create_and_destroy_pool(void)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 15850005fea5..ffef705f0523 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -742,7 +742,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	free_vmap_area_noflush(va);
 }
 
-struct vmap_area *find_vmap_area(unsigned long addr)
+static struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
@@ -1523,6 +1523,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			struct page *page = area->pages[i];
 
 			BUG_ON(!page);
+			page->area = NULL;
 			__free_pages(page, 0);
 		}
 
@@ -1731,8 +1732,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller)
 {
 	struct vm_struct *area;
+	struct vmap_area *va;
 	void *addr;
 	unsigned long real_size = size;
+	unsigned int i;
 
 	size = PAGE_ALIGN(size);
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
@@ -1747,6 +1750,10 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!addr)
 		return NULL;
 
+	va = __find_vmap_area((unsigned long)addr);
+	for (i = 0; i < va->vm->nr_pages; i++)
+		va->vm->pages[i]->area = va;
+
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.