From patchwork Thu Jul 26 18:54:56 2018
X-Patchwork-Submitter: Tony Battersby
X-Patchwork-Id: 10546325
From: Tony Battersby <tonyb@cybernetics.com>
Subject: [PATCH 2/3] dmapool: improve scalability of dma_pool_free
To: Christoph Hellwig, Marek Szyprowski, Matthew Wilcox, Sathya Prakash,
 Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org,
 linux-mm@kvack.org, linux-scsi, MPT-FusionLinux.pdl@broadcom.com
Message-ID: <1288e597-a67a-25b3-b7c6-db883ca67a25@cybernetics.com>
Date: Thu, 26 Jul 2018 14:54:56 -0400

dma_pool_free() scales poorly when the pool contains many pages because
pool_find_page() does a linear scan of all allocated pages.  Improve its
scalability by replacing the linear scan with a red-black tree lookup.
In big O notation, each individual free improves from O(n) to O(log n)
in the number of pages, so freeing every block in a pool of n pages
improves from O(n^2) to O(n * log n).

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
---

I moved some code from dma_pool_destroy() into pool_free_page() to avoid
code repetition.
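To make the scalability claim concrete, here is a minimal sketch (illustrative
only, not part of the patch) of the kind of workload that hits the linear
scan: a driver that keeps many small blocks outstanding at once, so the pool
grows to many pages and every dma_pool_free() has to map a DMA address back
to its owning page.  The function name demo_exercise_pool(), the "demo_pool"
name, and the NUM_BLOCKS/BLOCK_SIZE/align values are invented for
illustration; the dma_pool_*() calls themselves are the standard API that
this patch touches.

/*
 * Illustrative only: a hypothetical driver routine that keeps NUM_BLOCKS
 * small DMA blocks outstanding at once, so the pool grows to many pages.
 * Before this patch, every dma_pool_free() below did a linear scan over
 * all of those pages to find the one owning the block; with the red-black
 * tree the lookup is logarithmic in the page count.
 */
#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

#define NUM_BLOCKS	16384	/* enough outstanding blocks to span many pages */
#define BLOCK_SIZE	64

static int demo_exercise_pool(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t *handles;
	void **vaddrs;
	int i, ret = 0;

	pool = dma_pool_create("demo_pool", dev, BLOCK_SIZE, 8, 0);
	vaddrs = kcalloc(NUM_BLOCKS, sizeof(*vaddrs), GFP_KERNEL);
	handles = kcalloc(NUM_BLOCKS, sizeof(*handles), GFP_KERNEL);
	if (!pool || !vaddrs || !handles) {
		ret = -ENOMEM;
		goto out;
	}

	/* Grow the pool: each allocation may add another page to it. */
	for (i = 0; i < NUM_BLOCKS; i++) {
		vaddrs[i] = dma_pool_alloc(pool, GFP_KERNEL, &handles[i]);
		if (!vaddrs[i])
			break;
	}

	/* Each free must map its DMA address back to the owning page. */
	while (--i >= 0)
		dma_pool_free(pool, vaddrs[i], handles[i]);

out:
	kfree(handles);
	kfree(vaddrs);
	dma_pool_destroy(pool);	/* NULL-safe: dma_pool_destroy() bails out on !pool */
	return ret;
}

With these made-up numbers (64-byte blocks, PAGE_SIZE pages) the pool grows
to roughly 256 pages, so before the change the free loop performs 16384
linear scans over that page list; with the red-black tree each lookup costs
O(log pages) instead.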
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -15,11 +15,12 @@
  * Many older drivers still have their own code to do this.
  *
  * The current design of this allocator is fairly simple.  The pool is
- * represented by the 'struct dma_pool' which keeps a doubly-linked list of
- * allocated pages.  Each page in the page_list is split into blocks of at
- * least 'size' bytes.  Free blocks are tracked in an unsorted singly-linked
- * list of free blocks within the page.  Used blocks aren't tracked, but we
- * keep a count of how many are currently allocated from each page.
+ * represented by the 'struct dma_pool' which keeps a red-black tree of all
+ * allocated pages, keyed by DMA address for fast lookup when freeing.
+ * Each page in the page_tree is split into blocks of at least 'size' bytes.
+ * Free blocks are tracked in an unsorted singly-linked list of free blocks
+ * within the page.  Used blocks aren't tracked, but we keep a count of how
+ * many are currently allocated from each page.
  *
  * The avail_page_list keeps track of pages that have one or more free blocks
  * available to (re)allocate.  Pages are moved in and out of avail_page_list
@@ -41,13 +42,14 @@
 #include
 #include
 #include
+#include <linux/rbtree.h>
 
 #if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB_DEBUG_ON)
 #define DMAPOOL_DEBUG 1
 #endif
 
 struct dma_pool {		/* the pool */
-	struct list_head page_list;
+	struct rb_root page_tree;
 	struct list_head avail_page_list;
 	spinlock_t lock;
 	size_t size;
@@ -59,7 +61,7 @@ struct dma_pool {		/* the pool */
 };
 
 struct dma_page {		/* cacheable header for 'allocation' bytes */
-	struct list_head page_list;
+	struct rb_node page_node;
 	struct list_head avail_page_link;
 	void *vaddr;
 	dma_addr_t dma;
@@ -78,6 +80,7 @@ show_pools(struct device *dev, struct de
 	char *next;
 	struct dma_page *page;
 	struct dma_pool *pool;
+	struct rb_node *node;
 
 	next = buf;
 	size = PAGE_SIZE;
@@ -92,7 +95,10 @@ show_pools(struct device *dev, struct de
 		unsigned blocks = 0;
 
 		spin_lock_irq(&pool->lock);
-		list_for_each_entry(page, &pool->page_list, page_list) {
+		for (node = rb_first(&pool->page_tree);
+		     node;
+		     node = rb_next(node)) {
+			page = rb_entry(node, struct dma_page, page_node);
 			pages++;
 			blocks += page->in_use;
 		}
@@ -169,7 +175,7 @@ struct dma_pool *dma_pool_create(const c
 
 	retval->dev = dev;
 
-	INIT_LIST_HEAD(&retval->page_list);
+	retval->page_tree = RB_ROOT;
 	INIT_LIST_HEAD(&retval->avail_page_list);
 	spin_lock_init(&retval->lock);
 	retval->size = size;
@@ -210,6 +216,65 @@ struct dma_pool *dma_pool_create(const c
 }
 EXPORT_SYMBOL(dma_pool_create);
 
+/*
+ * Find the dma_page that manages the given DMA address.
+ */
+static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
+{
+	struct rb_node *node = pool->page_tree.rb_node;
+
+	while (node) {
+		struct dma_page *page =
+			container_of(node, struct dma_page, page_node);
+
+		if (dma < page->dma)
+			node = node->rb_left;
+		else if ((dma - page->dma) >= pool->allocation)
+			node = node->rb_right;
+		else
+			return page;
+	}
+	return NULL;
+}
+
+/*
+ * Insert a dma_page into the page_tree.
+ */
+static int pool_insert_page(struct dma_pool *pool, struct dma_page *new_page)
+{
+	dma_addr_t dma = new_page->dma;
+	struct rb_node **node = &(pool->page_tree.rb_node), *parent = NULL;
+
+	while (*node) {
+		struct dma_page *this_page =
+			container_of(*node, struct dma_page, page_node);
+
+		parent = *node;
+		if (dma < this_page->dma)
+			node = &((*node)->rb_left);
+		else if (likely((dma - this_page->dma) >= pool->allocation))
+			node = &((*node)->rb_right);
+		else {
+			/*
+			 * A page that overlaps the new DMA range is already
+			 * present in the tree.  This should not happen.
+			 */
+			WARN(1,
+			     "%s: %s: DMA address overlap: old 0x%llx new 0x%llx len %zu\n",
+			     pool->dev ? dev_name(pool->dev) : "(nodev)",
+			     pool->name, (u64) this_page->dma, (u64) dma,
+			     pool->allocation);
+			return -1;
+		}
+	}
+
+	/* Add new node and rebalance tree. */
+	rb_link_node(&new_page->page_node, parent, node);
+	rb_insert_color(&new_page->page_node, &pool->page_tree);
+
+	return 0;
+}
+
 static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int offset = 0;
@@ -254,15 +319,36 @@ static inline bool is_page_busy(struct d
 	return page->in_use != 0;
 }
 
-static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
+static void pool_free_page(struct dma_pool *pool,
+			   struct dma_page *page,
+			   bool destroying_pool)
 {
-	dma_addr_t dma = page->dma;
-
+	if (destroying_pool && is_page_busy(page)) {
+		if (pool->dev)
+			dev_err(pool->dev,
+				"dma_pool_destroy %s, %p busy\n",
+				pool->name, page->vaddr);
+		else
+			pr_err("dma_pool_destroy %s, %p busy\n",
+			       pool->name, page->vaddr);
+		/* leak the still-in-use consistent memory */
+	} else {
 #ifdef	DMAPOOL_DEBUG
-	memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
+		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
 #endif
-	dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma);
-	list_del(&page->page_list);
+		dma_free_coherent(pool->dev,
+				  pool->allocation,
+				  page->vaddr,
+				  page->dma);
+	}
+
+	/*
+	 * If the pool is being destroyed, it is not safe to modify the
+	 * page_tree while iterating over it, and it is also unnecessary since
+	 * the whole tree will be discarded anyway.
+	 */
+	if (!destroying_pool)
+		rb_erase(&page->page_node, &pool->page_tree);
 	list_del(&page->avail_page_link);
 	kfree(page);
 }
@@ -277,6 +363,7 @@ static void pool_free_page(struct dma_po
  */
 void dma_pool_destroy(struct dma_pool *pool)
 {
+	struct dma_page *page, *tmp_page;
 	bool empty = false;
 
 	if (unlikely(!pool))
@@ -292,24 +379,11 @@ void dma_pool_destroy(struct dma_pool *p
 		device_remove_file(pool->dev, &dev_attr_pools);
 	mutex_unlock(&pools_reg_lock);
 
-	while (!list_empty(&pool->page_list)) {
-		struct dma_page *page;
-		page = list_entry(pool->page_list.next,
-				  struct dma_page, page_list);
-		if (is_page_busy(page)) {
-			if (pool->dev)
-				dev_err(pool->dev,
-					"dma_pool_destroy %s, %p busy\n",
-					pool->name, page->vaddr);
-			else
-				pr_err("dma_pool_destroy %s, %p busy\n",
-				       pool->name, page->vaddr);
-			/* leak the still-in-use consistent memory */
-			list_del(&page->page_list);
-			list_del(&page->avail_page_link);
-			kfree(page);
-		} else
-			pool_free_page(pool, page);
+	rbtree_postorder_for_each_entry_safe(page,
+					     tmp_page,
+					     &pool->page_tree,
+					     page_node) {
+		pool_free_page(pool, page, true);
 	}
 
 	kfree(pool);
@@ -353,7 +427,15 @@ void *dma_pool_alloc(struct dma_pool *po
 
 	spin_lock_irqsave(&pool->lock, flags);
 
-	list_add(&page->page_list, &pool->page_list);
+	if (unlikely(pool_insert_page(pool, page))) {
+		/*
+		 * This should not happen, so something must have gone horribly
+		 * wrong.  Instead of crashing, intentionally leak the memory
+		 * and make for the exit.
+		 */
+		spin_unlock_irqrestore(&pool->lock, flags);
+		return NULL;
+	}
 	list_add(&page->avail_page_link, &pool->avail_page_list);
  ready:
 	page->in_use++;
@@ -400,19 +482,6 @@ void *dma_pool_alloc(struct dma_pool *po
 }
 EXPORT_SYMBOL(dma_pool_alloc);
 
-static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
-{
-	struct dma_page *page;
-
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (dma < page->dma)
-			continue;
-		if ((dma - page->dma) < pool->allocation)
-			return page;
-	}
-	return NULL;
-}
-
 /**
  * dma_pool_free - put block back into dma pool
  * @pool: the dma pool holding the block
@@ -484,7 +553,7 @@ void dma_pool_free(struct dma_pool *pool
 	page->offset = offset;
 	/*
	 * Resist a temptation to do
-	 *    if (!is_page_busy(page)) pool_free_page(pool, page);
+	 *    if (!is_page_busy(page)) pool_free_page(pool, page, false);
 	 * Better have a few empty pages hang around.
 	 */
 	spin_unlock_irqrestore(&pool->lock, flags);
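For anyone who wants to experiment with the lookup rule outside the kernel,
the following standalone userspace sketch (illustrative only, not part of the
patch) mirrors the comparison logic of pool_insert_page() and
pool_find_page(): every node owns the non-overlapping range
[dma, dma + allocation), so an address below the node's base descends left,
an address at or beyond base + allocation descends right, and anything else
is a hit.  A plain unbalanced binary search tree is used purely to keep the
example short; the patch itself uses the kernel rbtree so the depth stays
O(log n).

/*
 * Standalone userspace illustration of the range-keyed lookup used above.
 * Not kernel code; compile with any C compiler.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define ALLOCATION 4096u	/* stand-in for pool->allocation */

struct toy_page {
	uint64_t dma;			/* base of [dma, dma + ALLOCATION) */
	struct toy_page *left, *right;
};

/* Same ordering rule as pool_insert_page(): ranges never overlap. */
static struct toy_page *toy_insert(struct toy_page *root, uint64_t dma)
{
	if (!root) {
		struct toy_page *p = calloc(1, sizeof(*p));
		if (p)
			p->dma = dma;
		return p;
	}
	if (dma < root->dma)
		root->left = toy_insert(root->left, dma);
	else if (dma - root->dma >= ALLOCATION)
		root->right = toy_insert(root->right, dma);
	/* else: overlapping range; the real code WARNs and refuses to insert */
	return root;
}

/* Same descent as pool_find_page(): stop when dma falls inside a range. */
static struct toy_page *toy_find(struct toy_page *root, uint64_t dma)
{
	while (root) {
		if (dma < root->dma)
			root = root->left;
		else if (dma - root->dma >= ALLOCATION)
			root = root->right;
		else
			return root;
	}
	return NULL;
}

int main(void)
{
	struct toy_page *root = NULL;
	struct toy_page *hit;
	uint64_t base;

	/* Insert 16 consecutive "pages". */
	for (base = 0x100000; base < 0x100000 + 16 * ALLOCATION; base += ALLOCATION)
		root = toy_insert(root, base);

	/* Look up an address in the middle of the third page. */
	hit = toy_find(root, 0x100000 + 2 * ALLOCATION + 123);
	printf("found page base 0x%llx\n",
	       hit ? (unsigned long long)hit->dma : 0ULL);

	/* Tree intentionally not freed; the process exits here. */
	return 0;
}

Built and run as-is, it prints the base of the page that owns the probed
address (0x102000 with the values above), which is exactly the mapping
dma_pool_free() needs to perform on every call.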