From patchwork Fri Nov 2 19:25:17 2018
X-Patchwork-Submitter: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
X-Patchwork-Id: 10666055
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: jeyu@kernel.org, akpm@linux-foundation.org, willy@infradead.org,
	tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kernel-hardening@lists.openwall.com, daniel@iogearbox.net,
	jannh@google.com, keescook@chromium.org
Cc: kristen@linux.intel.com, dave.hansen@intel.com, arjan@linux.intel.com,
	Rick Edgecombe <rick.p.edgecombe@intel.com>
Subject: [PATCH v8 1/4] vmalloc: Add __vmalloc_node_try_addr function
Date: Fri, 2 Nov 2018 12:25:17 -0700
Message-Id: <20181102192520.4522-2-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181102192520.4522-1-rick.p.edgecombe@intel.com>
References: <20181102192520.4522-1-rick.p.edgecombe@intel.com>

Create a __vmalloc_node_try_addr() function that tries to allocate at a
specific address without triggering any lazy purging. To support this
behavior, a try_purge argument is plumbed through several of the static
helpers: passing VMAP_NO_PURGE makes alloc_vmap_area() fail immediately
when the requested range is unavailable, instead of purging lazily freed
vmap areas and retrying.

This also reorders the logic in __get_vm_area_node() so that failure is
cheaper when there is no space at the requested address, which is far
more common when trying specific addresses: the vm_struct is now only
allocated after the vmap area has been successfully reserved.
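As an illustration of the intended calling pattern, a caller could probe
a candidate address and fall back to an ordinary allocation on failure.
The sketch below is not part of the patch: alloc_at_or_fallback() and
its parameters are hypothetical, while the two allocator signatures
match the ones added/kept by this patch:

/*
 * Hypothetical caller sketch (illustration only, not in this patch):
 * try to place an allocation at a chosen address, fall back to any
 * free spot in the vmalloc range otherwise.
 */
static void *alloc_at_or_fallback(unsigned long try_addr, unsigned long size)
{
	/* Probe the candidate address; fails fast, no lazy purging. */
	void *p = __vmalloc_node_try_addr(try_addr, size, GFP_KERNEL,
				PAGE_KERNEL, 0, NUMA_NO_NODE,
				__builtin_return_address(0));
	if (p)
		return p;	/* got exactly the requested address */

	/* Candidate range occupied (or invalid): take any address. */
	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
				GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
				__builtin_return_address(0));
}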
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/vmalloc.h |   3 +
 mm/vmalloc.c            | 128 +++++++++++++++++++++++++++++-----------
 2 files changed, 95 insertions(+), 36 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..6eaa89612372 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -82,6 +82,9 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
 			const void *caller);
+extern void *__vmalloc_node_try_addr(unsigned long addr, unsigned long size,
+			gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags,
+			int node, const void *caller);
 #ifndef CONFIG_MMU
 extern void *__vmalloc_node_flags(unsigned long size, int node, gfp_t flags);
 static inline void *__vmalloc_node_flags_caller(unsigned long size, int node,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a728fc492557..8d01f503e20d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -326,6 +326,9 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 #define VM_LAZY_FREE	0x02
 #define VM_VM_AREA	0x04
 
+#define VMAP_MAY_PURGE	0x2
+#define VMAP_NO_PURGE	0x1
+
 static DEFINE_SPINLOCK(vmap_area_lock);
 /* Export for kexec only */
 LIST_HEAD(vmap_area_list);
@@ -402,12 +405,12 @@ static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static struct vmap_area *alloc_vmap_area(unsigned long size,
 				unsigned long align,
 				unsigned long vstart, unsigned long vend,
-				int node, gfp_t gfp_mask)
+				int node, gfp_t gfp_mask, int try_purge)
 {
 	struct vmap_area *va;
 	struct rb_node *n;
 	unsigned long addr;
-	int purged = 0;
+	int purged = try_purge & VMAP_NO_PURGE;
 	struct vmap_area *first;
 
 	BUG_ON(!size);
@@ -860,7 +863,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 
 	va = alloc_vmap_area(VMAP_BLOCK_SIZE, VMAP_BLOCK_SIZE,
 					VMALLOC_START, VMALLOC_END,
-					node, gfp_mask);
+					node, gfp_mask, VMAP_MAY_PURGE);
 	if (IS_ERR(va)) {
 		kfree(vb);
 		return ERR_CAST(va);
@@ -1170,8 +1173,9 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t pro
 		addr = (unsigned long)mem;
 	} else {
 		struct vmap_area *va;
-		va = alloc_vmap_area(size, PAGE_SIZE,
-				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
+		va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START,
+					VMALLOC_END, node, GFP_KERNEL,
+					VMAP_MAY_PURGE);
 		if (IS_ERR(va))
 			return NULL;
 
@@ -1372,7 +1376,8 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
 
 static struct vm_struct *__get_vm_area_node(unsigned long size,
 		unsigned long align, unsigned long flags, unsigned long start,
-		unsigned long end, int node, gfp_t gfp_mask, const void *caller)
+		unsigned long end, int node, gfp_t gfp_mask, int try_purge,
+		const void *caller)
 {
 	struct vmap_area *va;
 	struct vm_struct *area;
@@ -1386,16 +1391,17 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 		align = 1ul << clamp_t(int, get_count_order_long(size),
 				PAGE_SHIFT, IOREMAP_MAX_ORDER);
 
-	area = kzalloc_node(sizeof(*area), gfp_mask & GFP_RECLAIM_MASK, node);
-	if (unlikely(!area))
-		return NULL;
-
 	if (!(flags & VM_NO_GUARD))
 		size += PAGE_SIZE;
 
-	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
-	if (IS_ERR(va)) {
-		kfree(area);
+	va = alloc_vmap_area(size, align, start, end, node, gfp_mask,
+				try_purge);
+	if (IS_ERR(va))
+		return NULL;
+
+	area = kzalloc_node(sizeof(*area), gfp_mask & GFP_RECLAIM_MASK, node);
+	if (unlikely(!area)) {
+		free_vmap_area(va);
 		return NULL;
 	}
@@ -1408,7 +1414,8 @@ struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,
 				unsigned long start, unsigned long end)
 {
 	return __get_vm_area_node(size, 1, flags, start, end, NUMA_NO_NODE,
-				  GFP_KERNEL, __builtin_return_address(0));
+				  GFP_KERNEL, VMAP_MAY_PURGE,
+				  __builtin_return_address(0));
 }
 EXPORT_SYMBOL_GPL(__get_vm_area);
 
@@ -1417,7 +1424,7 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
 				const void *caller)
 {
 	return __get_vm_area_node(size, 1, flags, start, end, NUMA_NO_NODE,
-				  GFP_KERNEL, caller);
+				  GFP_KERNEL, VMAP_MAY_PURGE, caller);
 }
 
 /**
@@ -1432,7 +1439,7 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
 struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
 {
 	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
-				  NUMA_NO_NODE, GFP_KERNEL,
+				  NUMA_NO_NODE, GFP_KERNEL, VMAP_MAY_PURGE,
 				  __builtin_return_address(0));
 }
 
@@ -1440,7 +1447,8 @@ struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
 				const void *caller)
 {
 	return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
-				  NUMA_NO_NODE, GFP_KERNEL, caller);
+				  NUMA_NO_NODE, GFP_KERNEL, VMAP_MAY_PURGE,
+				  caller);
 }
 
 /**
@@ -1709,26 +1717,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	return NULL;
 }
 
-/**
- * __vmalloc_node_range - allocate virtually contiguous memory
- * @size: allocation size
- * @align: desired alignment
- * @start: vm area range start
- * @end: vm area range end
- * @gfp_mask: flags for the page level allocator
- * @prot: protection mask for the allocated pages
- * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
- * @node: node to use for allocation or NUMA_NO_NODE
- * @caller: caller's return address
- *
- * Allocate enough pages to cover @size from the page level
- * allocator with @gfp_mask flags. Map them into contiguous
- * kernel virtual space, using a pagetable protection of @prot.
- */
-void *__vmalloc_node_range(unsigned long size, unsigned long align,
+static void *__vmalloc_node_range_opts(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
-			const void *caller)
+			int try_purge, const void *caller)
 {
 	struct vm_struct *area;
 	void *addr;
@@ -1739,7 +1731,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		goto fail;
 
 	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
-				vm_flags, start, end, node, gfp_mask, caller);
+				vm_flags, start, end, node, gfp_mask,
+				try_purge, caller);
 	if (!area)
 		goto fail;
 
@@ -1764,6 +1757,69 @@
 	return NULL;
 }
 
+/**
+ * __vmalloc_node_range - allocate virtually contiguous memory
+ * @size: allocation size
+ * @align: desired alignment
+ * @start: vm area range start
+ * @end: vm area range end
+ * @gfp_mask: flags for the page level allocator
+ * @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
+ * @node: node to use for allocation or NUMA_NO_NODE
+ * @caller: caller's return address
+ *
+ * Allocate enough pages to cover @size from the page level
+ * allocator with @gfp_mask flags. Map them into contiguous
+ * kernel virtual space, using a pagetable protection of @prot.
+ */
+void *__vmalloc_node_range(unsigned long size, unsigned long align,
+			unsigned long start, unsigned long end, gfp_t gfp_mask,
+			pgprot_t prot, unsigned long vm_flags, int node,
+			const void *caller)
+{
+	return __vmalloc_node_range_opts(size, align, start, end, gfp_mask,
+					prot, vm_flags, node, VMAP_MAY_PURGE,
+					caller);
+}
+
+/**
+ * __vmalloc_node_try_addr - try to alloc at a specific address
+ * @addr: address to try
+ * @size: size to try
+ * @gfp_mask: flags for the page level allocator
+ * @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
+ * @node: node to use for allocation or NUMA_NO_NODE
+ * @caller: caller's return address
+ *
+ * Try to allocate at the specific address. If it succeeds the address is
+ * returned. If it fails NULL is returned. It will not try to purge lazy
+ * free vmap areas in order to fit.
+ */
+void *__vmalloc_node_try_addr(unsigned long addr, unsigned long size,
+			gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags,
+			int node, const void *caller)
+{
+	unsigned long addr_end;
+	unsigned long vsize = PAGE_ALIGN(size);
+
+	if (!vsize || (vsize >> PAGE_SHIFT) > totalram_pages)
+		return NULL;
+
+	if (!(vm_flags & VM_NO_GUARD))
+		vsize += PAGE_SIZE;
+
+	addr_end = addr + vsize;
+
+	if (addr > addr_end)
+		return NULL;
+
+	return __vmalloc_node_range_opts(size, 1, addr, addr_end,
+			gfp_mask | __GFP_NOWARN, prot, vm_flags, node,
+			VMAP_NO_PURGE, caller);
+}
+
 /**
  * __vmalloc_node - allocate virtually contiguous memory
  * @size: allocation size
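
For reviewers unfamiliar with alloc_vmap_area()'s retry loop: the
VMAP_NO_PURGE handling above works by pre-setting the function's
"purged" flag, so a failed search returns at once instead of purging
lazily freed areas and retrying. A standalone sketch of that control
flow (not kernel code; the two helpers are stand-ins):

#include <stdio.h>

#define VMAP_NO_PURGE	0x1
#define VMAP_MAY_PURGE	0x2

/* Hypothetical stand-ins for the real search and purge steps. */
static int search_free_range(void) { return -1; /* pretend: no room */ }
static void purge_lazy_areas(void) { puts("purging lazy vmap areas"); }

static int alloc_vmap_area_sketch(int try_purge)
{
	/* VMAP_NO_PURGE pre-sets "purged", so a failed search below
	 * returns immediately instead of purging and retrying. */
	int purged = try_purge & VMAP_NO_PURGE;

	for (;;) {
		if (search_free_range() == 0)
			return 0;	/* found an address */
		if (purged)
			return -1;	/* already purged, or not allowed to */
		purge_lazy_areas();	/* reclaim lazily freed ranges */
		purged = 1;		/* retry exactly once */
	}
}

int main(void)
{
	/* MAY_PURGE purges once and retries; NO_PURGE fails immediately. */
	printf("MAY_PURGE -> %d\n", alloc_vmap_area_sketch(VMAP_MAY_PURGE));
	printf("NO_PURGE  -> %d\n", alloc_vmap_area_sketch(VMAP_NO_PURGE));
	return 0;
}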