From patchwork Wed Aug 29 22:59:37 2018
X-Patchwork-Submitter: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
X-Patchwork-Id: 10581037
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kernel-hardening@lists.openwall.com, daniel@iogearbox.net,
	jannh@google.com, keescook@chromium.org
Cc: kristen@linux.intel.com, dave.hansen@intel.com, arjan@linux.intel.com,
	Rick Edgecombe <rick.p.edgecombe@intel.com>
Subject: [PATCH v4 1/3] vmalloc: Add __vmalloc_node_try_addr function
Date: Wed, 29 Aug 2018 15:59:37 -0700
Message-Id: <1535583579-6138-2-git-send-email-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1535583579-6138-1-git-send-email-rick.p.edgecombe@intel.com>
References: <1535583579-6138-1-git-send-email-rick.p.edgecombe@intel.com>

Create the __vmalloc_node_try_addr function, which tries to allocate at a
specific address and supports caller-specified behavior for whether any
lazy purging happens if there is a collision. This new function draws from
the __vmalloc_node_range implementation. Attempts to merge the two into a
single allocator resulted in logic that was difficult to follow, so they
are left separate.
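Not part of the patch, for illustration only: a minimal sketch of the
intended calling pattern, based on the semantics documented in the
kernel-doc below. The helper name and the target address are hypothetical;
the sketch probes the address without purging first and retries with
purging only when -EUCLEAN indicates that lazily freed areas are the only
obstacle.

#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/vmalloc.h>

/*
 * Illustration only (not part of this patch): probe a specific address
 * without purging first, and only retry with purging when -EUCLEAN says
 * lazily freed areas are the sole obstacle.  "target" is a made-up,
 * caller-chosen address, e.g. from some randomization scheme.
 */
static void *example_alloc_at(unsigned long target, unsigned long size)
{
	void *p;

	/* try_purge == 0: fail with -EUCLEAN rather than flush TLBs */
	p = __vmalloc_node_try_addr(target, size, GFP_KERNEL, PAGE_KERNEL, 0,
				    NUMA_NO_NODE, 0,
				    __builtin_return_address(0));
	if (!IS_ERR_OR_NULL(p))
		return p;

	/* only lazily freed areas were in the way: purge and retry once */
	if (p && PTR_ERR(p) == -EUCLEAN)
		p = __vmalloc_node_try_addr(target, size, GFP_KERNEL,
					    PAGE_KERNEL, 0, NUMA_NO_NODE, 1,
					    __builtin_return_address(0));

	return IS_ERR_OR_NULL(p) ? NULL : p;
}

Whether to make the second, TLB-flushing attempt is exactly the policy
choice the try_purge argument leaves to the caller.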
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/vmalloc.h |   3 +
 mm/vmalloc.c            | 177 +++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 179 insertions(+), 1 deletion(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c9..c7712c8 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -82,6 +82,9 @@ extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
 			const void *caller);
+extern void *__vmalloc_node_try_addr(unsigned long addr, unsigned long size,
+			gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags,
+			int node, int try_purge, const void *caller);
 #ifndef CONFIG_MMU
 extern void *__vmalloc_node_flags(unsigned long size, int node, gfp_t flags);
 static inline void *__vmalloc_node_flags_caller(unsigned long size, int node,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a728fc4..1954458 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1709,6 +1709,181 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	return NULL;
 }
 
+static bool pvm_find_next_prev(unsigned long end,
+			       struct vmap_area **pnext,
+			       struct vmap_area **pprev);
+
+/* Try to allocate a region of KVA of the specified address and size. */
+static struct vmap_area *try_alloc_vmap_area(unsigned long addr,
+			unsigned long size, int node, gfp_t gfp_mask,
+			int try_purge)
+{
+	struct vmap_area *va;
+	struct vmap_area *cur_va = NULL;
+	struct vmap_area *first_before = NULL;
+	int need_purge = 0;
+	int blocked = 0;
+	int purged = 0;
+	unsigned long addr_end;
+
+	WARN_ON(!size);
+	WARN_ON(offset_in_page(size));
+
+	addr_end = addr + size;
+	if (addr > addr_end)
+		return ERR_PTR(-EOVERFLOW);
+
+	might_sleep();
+
+	va = kmalloc_node(sizeof(struct vmap_area),
+			gfp_mask & GFP_RECLAIM_MASK, node);
+	if (unlikely(!va))
+		return ERR_PTR(-ENOMEM);
+
+	/*
+	 * Only scan the relevant parts containing pointers to other objects
+	 * to avoid false negatives.
+	 */
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+
+retry:
+	spin_lock(&vmap_area_lock);
+
+	pvm_find_next_prev(addr, &cur_va, &first_before);
+
+	if (!cur_va)
+		goto found;
+
+	/*
+	 * If there is no VA that starts before the target address, start the
+	 * check from the closest VA in order to cover the case where the
+	 * allocation overlaps at the end.
+	 */
+	if (first_before && addr < first_before->va_end)
+		cur_va = first_before;
+
+	/* Linearly search through to make sure there is a hole */
+	while (cur_va->va_start < addr_end) {
+		if (cur_va->va_end > addr) {
+			if (cur_va->flags & VM_LAZY_FREE) {
+				need_purge = 1;
+			} else {
+				blocked = 1;
+				break;
+			}
+		}
+
+		if (list_is_last(&cur_va->list, &vmap_area_list))
+			break;
+
+		cur_va = list_next_entry(cur_va, list);
+	}
+
+	/*
+	 * If a non-lazy free va blocks the allocation, or
+	 * we are not supposed to purge, but we need to, the
+	 * allocation fails.
+	 */
+	if (blocked || (need_purge && !try_purge))
+		goto fail;
+
+	if (try_purge && need_purge) {
+		/* if purged once before, give up */
+		if (purged)
+			goto fail;
+
+		/*
+		 * If the va blocking the allocation is set to
+		 * be purged then purge all vmap_areas that are
+		 * set to purged since this will flush the TLBs
+		 * anyway.
+		 */
+		spin_unlock(&vmap_area_lock);
+		purge_vmap_area_lazy();
+		need_purge = 0;
+		purged = 1;
+		goto retry;
+	}
+
+found:
+	va->va_start = addr;
+	va->va_end = addr_end;
+	va->flags = 0;
+	__insert_vmap_area(va);
+	spin_unlock(&vmap_area_lock);
+
+	return va;
+fail:
+	spin_unlock(&vmap_area_lock);
+	kfree(va);
+	if (need_purge && !blocked)
+		return ERR_PTR(-EUCLEAN);
+	return ERR_PTR(-EBUSY);
+}
+
+/**
+ * __vmalloc_node_try_addr - try to alloc at a specific address
+ * @addr: address to try
+ * @size: size to try
+ * @gfp_mask: flags for the page level allocator
+ * @prot: protection mask for the allocated pages
+ * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD)
+ * @node: node to use for allocation or NUMA_NO_NODE
+ * @try_purge: try to purge if needed to fulfill an allocation
+ * @caller: caller's return address
+ *
+ * Try to allocate at the specific address. If it succeeds the address is
+ * returned. If it fails an EBUSY ERR_PTR is returned. If try_purge is
+ * zero, it will return an EUCLEAN ERR_PTR if it could have succeeded had
+ * it been allowed to purge. It may trigger TLB flushes if a purge is
+ * needed, and try_purge is set.
+ */
+void *__vmalloc_node_try_addr(unsigned long addr, unsigned long size,
+			gfp_t gfp_mask, pgprot_t prot, unsigned long vm_flags,
+			int node, int try_purge, const void *caller)
+{
+	struct vmap_area *va;
+	struct vm_struct *area;
+	void *alloc_addr;
+	unsigned long real_size = size;
+
+	size = PAGE_ALIGN(size);
+	if (!size || (size >> PAGE_SHIFT) > totalram_pages)
+		return NULL;
+
+	WARN_ON(in_interrupt());
+
+	if (!(vm_flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
+
+	va = try_alloc_vmap_area(addr, size, node, gfp_mask, try_purge);
+	if (IS_ERR(va))
+		goto fail;
+
+	area = kzalloc_node(sizeof(*area), gfp_mask & GFP_RECLAIM_MASK, node);
+	if (unlikely(!area)) {
+		warn_alloc(gfp_mask, NULL, "kmalloc: allocation failure");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	setup_vmalloc_vm(area, va, vm_flags, caller);
+
+	alloc_addr = __vmalloc_area_node(area, gfp_mask, prot, node);
+	if (!alloc_addr) {
+		warn_alloc(gfp_mask, NULL,
+			"vmalloc: allocation failure: %lu bytes", real_size);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	clear_vm_uninitialized_flag(area);
+
+	kmemleak_vmalloc(area, real_size, gfp_mask);
+
+	return alloc_addr;
+fail:
+	return va;
+}
+
 /**
  * __vmalloc_node_range - allocate virtually contiguous memory
  * @size: allocation size
@@ -2355,7 +2530,6 @@ void free_vm_area(struct vm_struct *area)
 }
 EXPORT_SYMBOL_GPL(free_vm_area);
 
-#ifdef CONFIG_SMP
 static struct vmap_area *node_to_va(struct rb_node *n)
 {
 	return rb_entry_safe(n, struct vmap_area, rb_node);
@@ -2403,6 +2577,7 @@ static bool pvm_find_next_prev(unsigned long end,
 	return true;
 }
 
+#ifdef CONFIG_SMP
 /**
  * pvm_determine_end - find the highest aligned address between two vmap_areas
  * @pnext: in/out arg for the next vmap_area
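Not part of the patch: as a reading aid, here is a simplified,
self-contained model of the collision scan inside try_alloc_vmap_area().
The struct area and classify() names are invented for the sketch; it only
models the decision between a free slot, a slot that needs a purge, and a
blocked slot, assuming the existing areas are sorted by start address.

#include <stdbool.h>

/* Reduced stand-in for struct vmap_area: only what the scan consults. */
struct area {
	unsigned long va_start, va_end;	/* [va_start, va_end) */
	bool lazy_free;			/* models VM_LAZY_FREE */
};

enum fit { FIT_FREE, FIT_NEED_PURGE, FIT_BLOCKED };

/*
 * Model of the linear scan in try_alloc_vmap_area(): walk the existing
 * areas in address order.  Any overlap with a live mapping blocks the
 * request outright; if every overlapping area is only lazily freed, the
 * request can still succeed after a purge (at the cost of a TLB flush).
 */
static enum fit classify(const struct area *areas, int n,
			 unsigned long addr, unsigned long size)
{
	unsigned long addr_end = addr + size;
	enum fit result = FIT_FREE;
	int i;

	for (i = 0; i < n && areas[i].va_start < addr_end; i++) {
		if (areas[i].va_end <= addr)
			continue;		/* ends below us, no overlap */
		if (!areas[i].lazy_free)
			return FIT_BLOCKED;	/* live mapping in the way */
		result = FIT_NEED_PURGE;	/* lazily freed overlap only */
	}
	return result;
}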