From patchwork Tue Oct 23 21:34:50 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653731
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Andrew Morton, Chintan Pandya, Joe Perches, "Luis R.
Rodriguez", Thomas Gleixner, Kate Stewart, Greg Kroah-Hartman,
 Philippe Ombredanne, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/17] prmem: vmalloc support for dynamic allocation
Date: Wed, 24 Oct 2018 00:34:50 +0300
Message-Id: <20181023213504.28905-4-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

Prepare vmalloc for:
- tagging areas used for dynamic allocation of protected memory
- supporting various tags, related to the properties that an area
  might have
- looking up the pool containing a given area
- chaining the areas in each pool
- looking up the area containing a given memory address

NOTE:
The purge_list field is used only when disposing of the allocation, so
its storage is free for other use until the time comes to free the
allocation. To avoid increasing the size of the vmap_area structure,
the chain of vmap_areas is therefore tracked not with a standard
doubly linked list, but by spending one pointer on a singly linked
list, while the other provides a direct link to the parent pool.

Signed-off-by: Igor Stoppa
CC: Michal Hocko
CC: Andrew Morton
CC: Chintan Pandya
CC: Joe Perches
CC: "Luis R.
Rodriguez"
CC: Thomas Gleixner
CC: Kate Stewart
CC: Greg Kroah-Hartman
CC: Philippe Ombredanne
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 include/linux/vmalloc.h | 12 +++++++++++-
 mm/vmalloc.c            |  2 +-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..4d14a3b8089e 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -21,6 +21,9 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_PMALLOC		0x00000100	/* pmalloc area - see docs */
+#define VM_PMALLOC_WR		0x00000200	/* pmalloc write rare area */
+#define VM_PMALLOC_PROTECTED	0x00000400	/* pmalloc protected area */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -48,7 +51,13 @@ struct vmap_area {
 	unsigned long flags;
 	struct rb_node rb_node;		/* address sorted rbtree */
 	struct list_head list;		/* address sorted list */
-	struct llist_node purge_list;	/* "lazy purge" list */
+	union {
+		struct llist_node purge_list;	/* "lazy purge" list */
+		struct {
+			struct vmap_area *next;
+			struct pmalloc_pool *pool;
+		};
+	};
 	struct vm_struct *vm;
 	struct rcu_head rcu_head;
 };
@@ -134,6 +143,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					const void *caller);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
+extern struct vmap_area *find_vmap_area(unsigned long addr);
 
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a728fc492557..15850005fea5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -742,7 +742,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	free_vmap_area_noflush(va);
 }
 
-static struct vmap_area *find_vmap_area(unsigned long addr)
+struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;