From patchwork Fri Feb 23 14:48:03 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10237943
Delivered-To: mailing list kernel-hardening@lists.openwall.com
From: Igor Stoppa <igor.stoppa@huawei.com>
Subject: [PATCH 3/7] struct page: add field for vm_struct
Date: Fri, 23 Feb 2018 16:48:03 +0200
Message-ID: <20180223144807.1180-4-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180223144807.1180-1-igor.stoppa@huawei.com>
References: <20180223144807.1180-1-igor.stoppa@huawei.com>

When a page is used for virtual memory, it is often necessary to obtain
a handle to the corresponding vm_struct, which refers to the virtually
contiguous area generated by invoking vmalloc.

struct page has a "mapping" field whose storage can be reused to hold a
pointer to the parent area. This avoids more expensive searches later
on.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
---
 include/linux/mm_types.h | 1 +
 mm/vmalloc.c             | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index fd1af6b9591d..c3a4825e10c0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -84,6 +84,7 @@ struct page {
 		void *s_mem;			/* slab first object */
 		atomic_t compound_mapcount;	/* first tail page */
 		/* page_deferred_list().next -- second tail page */
+		struct vm_struct *area;
 	};
 
 	/* Second double word */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 673942094328..14d99ed22397 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1536,6 +1536,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			struct page *page = area->pages[i];
 
 			BUG_ON(!page);
+			page->area = NULL;
 			__free_pages(page, 0);
 		}
 
@@ -1744,6 +1745,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller)
 {
 	struct vm_struct *area;
+	unsigned int i;
 	void *addr;
 	unsigned long real_size = size;
 
@@ -1769,6 +1771,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 
 	kmemleak_vmalloc(area, size, gfp_mask);
 
+	for (i = 0; i < area->nr_pages; i++)
+		area->pages[i]->area = area;
+
 	return addr;
 
 fail:
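
For illustration only, not part of the patch: with the backpointer in
place, code that holds a struct page backing a vmalloc'ed range can
reach the owning vm_struct with a single dereference, instead of
searching the vmap data structures. A minimal sketch, assuming the
field is used exactly as set up above; the helper name page_vm_area()
is hypothetical:

#include <linux/mm_types.h>	/* struct page */
#include <linux/vmalloc.h>	/* struct vm_struct */

/*
 * Hypothetical helper, not introduced by this patch: return the
 * vm_struct that owns a vmalloc'ed page.  The backpointer is set in
 * __vmalloc_node_range() and cleared again in __vunmap(), so it is
 * valid only while the mapping is live.
 */
static inline struct vm_struct *page_vm_area(struct page *page)
{
	return page->area;
}

Note that "area" shares union storage with "mapping", so such a helper
is only meaningful for pages known to have been allocated through
vmalloc; placing the field in the existing union keeps struct page from
growing.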