From patchwork Tue May 31 18:12:27 2022
From: Tony Battersby
Subject: [PATCH 01/10] dmapool: remove checks for dev == NULL
Date: Tue, 31 May 2022 14:12:27 -0400
Message-ID: <7f6f9ff5-cdb9-e386-988d-fa013538dee7@cybernetics.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: iommu@lists.linux-foundation.org, kernel-team@fb.com, Matthew Wilcox, Keith Busch, Andy Shevchenko, Robin Murphy, Tony Lindgren
In-Reply-To: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com>

dmapool originally tried to support pools without a device because
dma_alloc_coherent() supports allocations without a device.  But nobody
ended up using dma pools without a device, so the current checks in
dmapool.c for pool->dev == NULL are both insufficient and causing bloat.
Remove them.

Signed-off-by: Tony Battersby
Reviewed-by: Robin Murphy
---
 mm/dmapool.c | 42 +++++++++++-------------------------------
 1 file changed, 11 insertions(+), 31 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index a7eb5d0eb2da..0f89de408cbe 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -275,7 +275,7 @@ void dma_pool_destroy(struct dma_pool *pool)
     mutex_lock(&pools_reg_lock);
     mutex_lock(&pools_lock);
     list_del(&pool->pools);
-    if (pool->dev && list_empty(&pool->dev->dma_pools))
+    if (list_empty(&pool->dev->dma_pools))
         empty = true;
     mutex_unlock(&pools_lock);
     if (empty)
@@ -284,12 +284,8 @@ void dma_pool_destroy(struct dma_pool *pool)
     list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
         if (is_page_busy(page)) {
-            if (pool->dev)
-                dev_err(pool->dev, "%s %s, %p busy\n", __func__,
-                    pool->name, page->vaddr);
-            else
-                pr_err("%s %s, %p busy\n", __func__,
-                       pool->name, page->vaddr);
+            dev_err(pool->dev, "%s %s, %p busy\n", __func__,
+                pool->name, page->vaddr);
             /* leak the still-in-use consistent memory */
             list_del(&page->page_list);
             kfree(page);
@@ -351,12 +347,8 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
         for (i = sizeof(page->offset); i < pool->size; i++) {
             if (data[i] == POOL_POISON_FREED)
                 continue;
-            if (pool->dev)
-                dev_err(pool->dev, "%s %s, %p (corrupted)\n",
-                    __func__, pool->name, retval);
-            else
-                pr_err("%s %s, %p (corrupted)\n",
-                       __func__, pool->name, retval);
+            dev_err(pool->dev, "%s %s, %p (corrupted)\n",
+                __func__, pool->name, retval);

             /*
              * Dump the first 4 bytes even if they are not
@@ -411,12 +403,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
     page = pool_find_page(pool, dma);
     if (!page) {
         spin_unlock_irqrestore(&pool->lock, flags);
-        if (pool->dev)
-            dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
-                __func__, pool->name, vaddr, &dma);
-        else
-            pr_err("%s %s, %p/%pad (bad dma)\n",
-                   __func__, pool->name, vaddr, &dma);
+        dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
+            __func__, pool->name, vaddr, &dma);
         return;
     }
@@ -426,12 +414,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 #ifdef DMAPOOL_DEBUG
     if ((dma - page->dma) != offset) {
         spin_unlock_irqrestore(&pool->lock, flags);
-        if (pool->dev)
-            dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
-                __func__, pool->name, vaddr, &dma);
-        else
-            pr_err("%s %s, %p (bad vaddr)/%pad\n",
-                   __func__, pool->name, vaddr, &dma);
+        dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+            __func__, pool->name, vaddr, &dma);
         return;
     }
     {
@@ -442,12 +426,8 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
                 continue;
             }
             spin_unlock_irqrestore(&pool->lock, flags);
-            if (pool->dev)
-                dev_err(pool->dev, "%s %s, dma %pad already free\n",
-                    __func__, pool->name, &dma);
-            else
-                pr_err("%s %s, dma %pad already free\n",
-                       __func__, pool->name, &dma);
dev_err(pool->dev, "%s %s, dma %pad already free\n", + __func__, pool->name, &dma); return; } } From patchwork Tue May 31 18:13:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Battersby X-Patchwork-Id: 12865989 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB186C433FE for ; Tue, 31 May 2022 18:13:35 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 706A06B0073; Tue, 31 May 2022 14:13:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6B9806B0074; Tue, 31 May 2022 14:13:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 57FF26B0075; Tue, 31 May 2022 14:13:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 4ADC56B0073 for ; Tue, 31 May 2022 14:13:35 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 17F91355B2 for ; Tue, 31 May 2022 18:13:35 +0000 (UTC) X-FDA: 79526835990.02.48A8BE7 Received: from mail.cybernetics.com (mail.cybernetics.com [173.71.130.66]) by imf10.hostedemail.com (Postfix) with ESMTP id 6D119C0052 for ; Tue, 31 May 2022 18:12:51 +0000 (UTC) X-ASG-Debug-ID: 1654020812-1cf43917f334afe0001-v9ZeMO Received: from cybernetics.com ([10.10.4.126]) by mail.cybernetics.com with ESMTP id nHifSkbHIxrhwGDp; Tue, 31 May 2022 14:13:32 -0400 (EDT) X-Barracuda-Envelope-From: tonyb@cybernetics.com X-ASG-Whitelist: Client DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=cybernetics.com; s=mail; bh=FrnayQTLRg3bImjySCMn4AK6vTiCgnyfnm+pKLSvhx0=; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:References:Cc:To:From: Content-Language:Subject:MIME-Version:Date:Message-ID; b=G9I1y0Cm5SI5a20B9p45 Ch/vtXr3T2oFr+6ys40VeK3ubfORXd8KFUN5jCE/6JUWAIy1cBsSzzxHDqoXYj3XE0YBtRWEEJxSV VLobdcVMmmCaDWU+U0KnUIuyfrTSUdqRMkNdomm+YVy9APQXHE5/7HR7dEPHZYkExGOv45asZE= Received: from [10.157.2.224] (HELO [192.168.200.1]) by cybernetics.com (CommuniGate Pro SMTP 7.1.1) with ESMTPS id 11829190; Tue, 31 May 2022 14:13:32 -0400 Message-ID: <81004e69-d91f-9fbb-2b94-217b48a064c3@cybernetics.com> Date: Tue, 31 May 2022 14:13:32 -0400 MIME-Version: 1.0 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.9.1 Subject: [PATCH 02/10] dmapool: cleanup integer types Content-Language: en-US X-ASG-Orig-Subj: [PATCH 02/10] dmapool: cleanup integer types From: Tony Battersby To: linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: iommu@lists.linux-foundation.org, kernel-team@fb.com, Matthew Wilcox , Keith Busch , Andy Shevchenko , Robin Murphy , Tony Lindgren References: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com> In-Reply-To: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com> X-Barracuda-Connect: UNKNOWN[10.10.4.126] X-Barracuda-Start-Time: 1654020812 X-Barracuda-URL: https://10.10.4.122:443/cgi-mod/mark.cgi X-Barracuda-BRTS-Status: 1 X-Virus-Scanned: by bsmtpd at cybernetics.com X-Barracuda-Scan-Msg-Size: 3356 X-Stat-Signature: 3bt33he51in3d36tnuqm386w6winw9yj X-Rspam-User: Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=cybernetics.com header.s=mail header.b=G9I1y0Cm; spf=pass 
To represent the size of a single allocation, dmapool currently uses
'unsigned int' in some places and 'size_t' in other places.  Standardize
on 'unsigned int' to reduce overhead, but use 'size_t' when counting all
the blocks in the entire pool.

Signed-off-by: Tony Battersby
---
This puts an upper bound of INT_MAX on 'size' to avoid overflowing the
following comparison in pool_initialise_page():

    unsigned int offset = 0;
    unsigned int next = offset + pool->size;
    if (unlikely((next + pool->size) > ...

'boundary' is passed in as a size_t but gets stored as an unsigned int.
'boundary' values >= 'allocation' do not have any effect, so clipping
'boundary' to 'allocation' keeps it within the range of unsigned int
without affecting anything else.  A few lines above (not in the diff),
'boundary' is already set to 'allocation' when it is passed in as 0, so
clipping it to 'allocation' is nothing new.

For reference, here is the relevant code after being patched:

    if (!boundary)
        boundary = allocation;
    else if ((boundary < size) || (boundary & (boundary - 1)))
        return NULL;

    boundary = min(boundary, allocation);

 mm/dmapool.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 0f89de408cbe..d7b372248111 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -43,10 +43,10 @@
 struct dma_pool {       /* the pool */
     struct list_head page_list;
     spinlock_t lock;
-    size_t size;
+    unsigned int size;
     struct device *dev;
-    size_t allocation;
-    size_t boundary;
+    unsigned int allocation;
+    unsigned int boundary;
     char name[32];
     struct list_head pools;
 };
@@ -80,7 +80,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
     mutex_lock(&pools_lock);
     list_for_each_entry(pool, &dev->dma_pools, pools) {
         unsigned pages = 0;
-        unsigned blocks = 0;
+        size_t blocks = 0;

         spin_lock_irq(&pool->lock);
         list_for_each_entry(page, &pool->page_list, page_list) {
@@ -90,9 +90,10 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
         spin_unlock_irq(&pool->lock);

         /* per-pool info, no real statistics yet */
-        temp = scnprintf(next, size, "%-16s %4u %4zu %4zu %2u\n",
+        temp = scnprintf(next, size, "%-16s %4zu %4zu %4u %2u\n",
                  pool->name, blocks,
-                 pages * (pool->allocation / pool->size),
+                 (size_t) pages *
+                 (pool->allocation / pool->size),
                  pool->size, pages);
         size -= temp;
         next += temp;
@@ -139,7 +140,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
     else if (align & (align - 1))
         return NULL;

-    if (size == 0)
+    if (size == 0 || size > INT_MAX)
         return NULL;
     else if (size < 4)
         size = 4;
@@ -152,6 +153,8 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
     else if ((boundary < size) || (boundary & (boundary - 1)))
         return NULL;

+    boundary = min(boundary, allocation);
+
     retval = kmalloc(sizeof(*retval), GFP_KERNEL);
     if (!retval)
         return retval;
@@ -312,7 +315,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 {
     unsigned long flags;
     struct dma_page *page;
-    size_t offset;
+    unsigned int offset;
     void *retval;

     might_alloc(mem_flags);
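To make the overflow concern above concrete, here is a small standalone C
program (illustration only, not part of the patch; the size values are
arbitrary) showing how 'next + size' wraps around in 32-bit unsigned
arithmetic once 'size' exceeds INT_MAX, which is exactly what the new cap
prevents:

    /* Standalone sketch, not kernel code. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int offset = 0;
        unsigned int size = 0x90000000u;        /* > INT_MAX (made up) */
        unsigned int next = offset + size;

        /* 0x90000000 + 0x90000000 wraps to 0x20000000 */
        printf("size > INT_MAX:  next + size = 0x%x (wrapped)\n",
               next + size);

        size = 0x7fffffffu;                     /* == INT_MAX */
        next = offset + size;

        /* 0x7fffffff + 0x7fffffff = 0xfffffffe, still representable */
        printf("size <= INT_MAX: next + size = 0x%x (no wrap)\n",
               next + size);
        return 0;
    }

With the cap in place, the sum compared against next_boundary stays
representable in an unsigned int, so the boundary check cannot be fooled
by wraparound.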
From patchwork Tue May 31 18:14:23 2022
From: Tony Battersby
Subject: [PATCH 03/10] dmapool: fix boundary comparison
Date: Tue, 31 May 2022 14:14:23 -0400
Fix the boundary comparison when constructing the list of free blocks for
the case that 'size' is a power of two.  Since 'boundary' is also a power
of two, that makes 'boundary' a multiple of 'size', in which case a single
block can never cross the boundary.  This bug caused some of the allocated
memory to be wasted (but not leaked).

Example:

    size = 512
    boundary = 2048
    allocation = 4096

    Address range
       0 -  511
     512 - 1023
    1024 - 1535
    1536 - 2047 *
    2048 - 2559
    2560 - 3071
    3072 - 3583
    3584 - 4095 *

Prior to this fix, the address ranges marked with "*" would not have been
used even though they didn't cross the given boundary.

Fixes: e34f44b3517f ("pool: Improve memory usage for devices which can't cross boundaries")
Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index d7b372248111..782143144a32 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -210,7 +210,7 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
     do {
         unsigned int next = offset + pool->size;

-        if (unlikely((next + pool->size) >= next_boundary)) {
+        if (unlikely((next + pool->size) > next_boundary)) {
             next = next_boundary;
             next_boundary += pool->boundary;
         }
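For reference, a small standalone simulation of the free-block
construction loop (illustration only; it mirrors the loop shown in the
diff but is not kernel code) reproduces the example above: the old ">="
comparison yields 6 usable blocks, the fixed ">" comparison yields all 8:

    #include <stdio.h>

    /* Count the free blocks pool_initialise_page() would create. */
    static unsigned int count_blocks(unsigned int size, unsigned int boundary,
                                     unsigned int allocation, int old_cmp)
    {
        unsigned int offset = 0, next_boundary = boundary, blocks = 0;

        do {
            unsigned int next = offset + size;
            int crosses = old_cmp ? (next + size) >= next_boundary
                                  : (next + size) >  next_boundary;

            if (crosses) {
                next = next_boundary;
                next_boundary += boundary;
            }
            blocks++;               /* a free block starts at 'offset' */
            offset = next;
        } while (offset < allocation);

        return blocks;
    }

    int main(void)
    {
        printf("old >= : %u blocks\n", count_blocks(512, 2048, 4096, 1));
        printf("new >  : %u blocks\n", count_blocks(512, 2048, 4096, 0));
        return 0;
    }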
From patchwork Tue May 31 18:17:44 2022
From: Tony Battersby
Subject: [PATCH 04/10] dmapool: improve accuracy of debug statistics
Date: Tue, 31 May 2022 14:17:44 -0400

The "total number of blocks in pool" debug statistic currently does not
take the boundary value into account, so it diverges from the "total
number of blocks in use" statistic when a boundary is in effect.  Add a
calculation for the number of blocks per allocation that takes the
boundary into account, and use it to replace the inaccurate calculation.

This depends on the patch "dmapool: fix boundary comparison" for the
calculated blks_per_alloc value to be correct.
Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 782143144a32..9e30f4425dea 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -47,6 +47,7 @@ struct dma_pool {       /* the pool */
     struct device *dev;
     unsigned int allocation;
     unsigned int boundary;
+    unsigned int blks_per_alloc;
     char name[32];
     struct list_head pools;
 };
@@ -92,8 +93,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
         /* per-pool info, no real statistics yet */
         temp = scnprintf(next, size, "%-16s %4zu %4zu %4u %2u\n",
                  pool->name, blocks,
-                 (size_t) pages *
-                 (pool->allocation / pool->size),
+                 (size_t) pages * pool->blks_per_alloc,
                  pool->size, pages);
         size -= temp;
         next += temp;
@@ -168,6 +168,9 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
     retval->size = size;
     retval->boundary = boundary;
     retval->allocation = allocation;
+    retval->blks_per_alloc =
+        (allocation / boundary) * (boundary / size) +
+        (allocation % boundary) / size;
     INIT_LIST_HEAD(&retval->pools);
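As a worked example of the new statistic (illustration only; the geometry
is made up), take size = 768, boundary = 2048, allocation = 4096.  Only
two 768-byte blocks fit in each 2048-byte boundary segment, so the pool
really holds 4 blocks per allocation, while the old 'allocation / size'
estimate reports 5:

    /* Standalone sketch, not kernel code. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int size = 768, boundary = 2048, allocation = 4096;

        unsigned int old_blks = allocation / size;                  /* 5 */
        unsigned int new_blks =
            (allocation / boundary) * (boundary / size) +
            (allocation % boundary) / size;                         /* 4 */

        printf("old estimate: %u, boundary-aware: %u\n", old_blks, new_blks);
        return 0;
    }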
From patchwork Tue May 31 18:18:41 2022
From: Tony Battersby
Subject: [PATCH 05/10] dmapool: debug: prevent endless loop in case of corruption
Date: Tue, 31 May 2022 14:18:41 -0400
Message-ID: <0c6c1548-6e3a-0d8d-4bb7-471fdfb403ca@cybernetics.com>

Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.

Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 9e30f4425dea..7a9161d4f7a6 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -426,16 +426,39 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
     }
     {
         unsigned int chain = page->offset;
+        unsigned int free_blks = 0;
+
         while (chain < pool->allocation) {
-            if (chain != offset) {
-                chain = *(int *)(page->vaddr + chain);
-                continue;
+            if (unlikely(chain == offset)) {
+                spin_unlock_irqrestore(&pool->lock, flags);
+                dev_err(pool->dev,
+                    "%s %s, dma %pad already free\n",
+                    __func__, pool->name, &dma);
+                return;
             }
-            spin_unlock_irqrestore(&pool->lock, flags);
-            dev_err(pool->dev, "%s %s, dma %pad already free\n",
-                __func__, pool->name, &dma);
-            return;
+
+            /*
+             * A buggy driver could corrupt the freelist by
+             * use-after-free, buffer overflow, etc.  Besides
+             * checking for corruption, this also prevents an
+             * endless loop in case corruption causes a circular
+             * loop in the freelist.
+             */
+            if (unlikely(++free_blks + page->in_use >
+                     pool->blks_per_alloc)) {
+freelist_corrupt:
+                spin_unlock_irqrestore(&pool->lock, flags);
+                dev_err(pool->dev,
+                    "%s %s, freelist corrupted\n",
+                    __func__, pool->name);
+                return;
+            }
+
+            chain = *(int *)(page->vaddr + chain);
         }
+        if (unlikely(free_blks + page->in_use !=
+                 pool->blks_per_alloc))
+            goto freelist_corrupt;
     }
     memset(vaddr, POOL_POISON_FREED, pool->size);
 #endif
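The core idea of the added check can be seen in a standalone sketch
(illustration only; the array-based chain is a made-up stand-in for
dmapool's in-page offset chain): bound the walk by the number of blocks
that can possibly be free, so a freelist corrupted into a cycle is
reported instead of looping forever:

    #include <stdio.h>

    #define NBLOCKS 4
    #define END NBLOCKS             /* chain value meaning "end of list" */

    static int walk_freelist(const unsigned int *next, unsigned int head,
                             unsigned int in_use)
    {
        unsigned int chain = head, free_blks = 0;

        while (chain < NBLOCKS) {
            /* More visited free blocks than can exist: corruption. */
            if (++free_blks + in_use > NBLOCKS)
                return -1;
            chain = next[chain];
        }
        return free_blks;
    }

    int main(void)
    {
        unsigned int good[NBLOCKS] = { 2, END, 3, END };  /* 0 -> 2 -> 3 */
        unsigned int bad[NBLOCKS]  = { 2, END, 3, 0 };    /* 0 -> 2 -> 3 -> 0 */

        printf("good list: %d free blocks\n", walk_freelist(good, 0, 1));
        printf("bad list:  %d (corruption detected)\n", walk_freelist(bad, 0, 1));
        return 0;
    }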
From patchwork Tue May 31 18:20:51 2022
From: Tony Battersby
Subject: [PATCH 06/10] dmapool: ignore init_on_free when DMAPOOL_DEBUG enabled
Date: Tue, 31 May 2022 14:20:51 -0400

There are two cases:

1) In the normal case, where the memory is being freed correctly,
   DMAPOOL_DEBUG will memset the memory anyway, so speed things up by
   avoiding a double-memset of the same memory.

2) In the abnormal case, where DMAPOOL_DEBUG detects that a driver passes
   incorrect parameters to dma_pool_free() (e.g. double-free, invalid
   free, mismatched vaddr/dma), that is a kernel bug, and we don't want
   to clear the passed-in possibly-invalid memory pointer because we
   can't be sure that the memory is really free.  So don't clear it just
   because init_on_free=1.

Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 7a9161d4f7a6..49019ef6dd83 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -415,8 +415,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
     }

     offset = vaddr - page->vaddr;
-    if (want_init_on_free())
-        memset(vaddr, 0, pool->size);
 #ifdef DMAPOOL_DEBUG
     if ((dma - page->dma) != offset) {
         spin_unlock_irqrestore(&pool->lock, flags);
@@ -461,6 +459,9 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
             goto freelist_corrupt;
     }
     memset(vaddr, POOL_POISON_FREED, pool->size);
+#else
+    if (want_init_on_free())
+        memset(vaddr, 0, pool->size);
 #endif

     page->in_use--;
From patchwork Tue May 31 18:21:35 2022
From: Tony Battersby
Subject: [PATCH 07/10] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc
Date: Tue, 31 May 2022 14:21:35 -0400
Message-ID: <35eeaddc-b27c-aee7-8c0f-96afcb2858d5@cybernetics.com>

Avoid double-memset of the same allocated memory in dma_pool_alloc()
when both DMAPOOL_DEBUG is enabled and init_on_alloc=1.
Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 49019ef6dd83..8749a9d7927e 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -365,7 +365,7 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
             break;
         }
     }
-    if (!(mem_flags & __GFP_ZERO))
+    if (!want_init_on_alloc(mem_flags))
         memset(retval, POOL_POISON_ALLOCATED, pool->size);
 #endif
     spin_unlock_irqrestore(&pool->lock, flags);
From patchwork Tue May 31 18:22:21 2022
From: Tony Battersby
Subject: [PATCH 08/10] dmapool: cleanup dma_pool_destroy
Date: Tue, 31 May 2022 14:22:21 -0400
Message-ID: <30fd23ae-7035-5ce3-5643-89a5956f1e79@cybernetics.com>

Remove a small amount of code duplication between dma_pool_destroy() and
pool_free_page() in preparation for adding more code without having to
duplicate it.  No functional changes.

Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 8749a9d7927e..58c11dcaa4e4 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -250,14 +250,25 @@ static inline bool is_page_busy(struct dma_page *page)
     return page->in_use != 0;
 }

-static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
+static void pool_free_page(struct dma_pool *pool,
+               struct dma_page *page,
+               bool destroying_pool)
 {
+    void *vaddr = page->vaddr;
     dma_addr_t dma = page->dma;

+    if (destroying_pool && is_page_busy(page)) {
+        dev_err(pool->dev,
+            "dma_pool_destroy %s, %p busy\n",
+            pool->name, vaddr);
+        /* leak the still-in-use consistent memory */
+    } else {
 #ifdef DMAPOOL_DEBUG
-    memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
+        memset(vaddr, POOL_POISON_FREED, pool->allocation);
 #endif
-    dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma);
+        dma_free_coherent(pool->dev, pool->allocation, vaddr, dma);
+    }
+
     list_del(&page->page_list);
     kfree(page);
 }
@@ -272,7 +283,7 @@ static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
  */
 void dma_pool_destroy(struct dma_pool *pool)
 {
-    struct dma_page *page, *tmp;
+    struct dma_page *page;
     bool empty = false;

     if (unlikely(!pool))
@@ -288,15 +299,10 @@ void dma_pool_destroy(struct dma_pool *pool)
     device_remove_file(pool->dev, &dev_attr_pools);
     mutex_unlock(&pools_reg_lock);

-    list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
-        if (is_page_busy(page)) {
-            dev_err(pool->dev, "%s %s, %p busy\n", __func__,
-                pool->name, page->vaddr);
-            /* leak the still-in-use consistent memory */
-            list_del(&page->page_list);
-            kfree(page);
-        } else
-            pool_free_page(pool, page);
+    while ((page = list_first_entry_or_null(&pool->page_list,
+                        struct dma_page,
+                        page_list))) {
+        pool_free_page(pool, page, true);
     }

     kfree(pool);
@@ -469,7 +475,7 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
     page->offset = offset;
     /*
      * Resist a temptation to do
-     *    if (!is_page_busy(page)) pool_free_page(pool, page);
+     *    if (!is_page_busy(page)) pool_free_page(pool, page, false);
      * Better have a few empty pages hang around.
      */
     spin_unlock_irqrestore(&pool->lock, flags);
From patchwork Tue May 31 18:23:02 2022
From: Tony Battersby
Subject: [PATCH 09/10] dmapool: improve scalability of dma_pool_alloc
Date: Tue, 31 May 2022 14:23:02 -0400
dma_pool_alloc() scales poorly when allocating a large number of pages
because it does a linear scan of all previously-allocated pages before
allocating a new one.  Improve its scalability by maintaining a separate
list of pages that have free blocks ready to (re)allocate.  In big O
notation, this improves the algorithm from O(n^2) to O(n).

Signed-off-by: Tony Battersby
---
 mm/dmapool.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 58c11dcaa4e4..b3dd2ace0d2a 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -17,6 +17,10 @@
  * least 'size' bytes.  Free blocks are tracked in an unsorted singly-linked
  * list of free blocks within the page.  Used blocks aren't tracked, but we
  * keep a count of how many are currently allocated from each page.
+ *
+ * The avail_page_list keeps track of pages that have one or more free blocks
+ * available to (re)allocate.  Pages are moved in and out of avail_page_list
+ * as their blocks are allocated and freed.
  */

 #include
@@ -42,6 +46,7 @@
 struct dma_pool {       /* the pool */
     struct list_head page_list;
+    struct list_head avail_page_list;
     spinlock_t lock;
     unsigned int size;
     struct device *dev;
@@ -54,6 +59,7 @@ struct dma_pool {       /* the pool */

 struct dma_page {       /* cacheable header for 'allocation' bytes */
     struct list_head page_list;
+    struct list_head avail_page_link;
     void *vaddr;
     dma_addr_t dma;
     unsigned int in_use;
@@ -164,6 +170,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
     retval->dev = dev;

     INIT_LIST_HEAD(&retval->page_list);
+    INIT_LIST_HEAD(&retval->avail_page_list);
     spin_lock_init(&retval->lock);
     retval->size = size;
     retval->boundary = boundary;
@@ -270,6 +277,7 @@ static void pool_free_page(struct dma_pool *pool,
     }

     list_del(&page->page_list);
+    list_del(&page->avail_page_link);
     kfree(page);
 }
@@ -330,10 +338,11 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
     might_alloc(mem_flags);

     spin_lock_irqsave(&pool->lock, flags);
-    list_for_each_entry(page, &pool->page_list, page_list) {
-        if (page->offset < pool->allocation)
-            goto ready;
-    }
+    page = list_first_entry_or_null(&pool->avail_page_list,
+                    struct dma_page,
+                    avail_page_link);
+    if (page)
+        goto ready;

     /* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
     spin_unlock_irqrestore(&pool->lock, flags);
@@ -345,10 +354,13 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
     spin_lock_irqsave(&pool->lock, flags);

     list_add(&page->page_list, &pool->page_list);
+    list_add(&page->avail_page_link, &pool->avail_page_list);
  ready:
     page->in_use++;
     offset = page->offset;
     page->offset = *(int *)(page->vaddr + offset);
+    if (page->offset >= pool->allocation)
+        list_del_init(&page->avail_page_link);
     retval = offset + page->vaddr;
     *handle = offset + page->dma;
 #ifdef DMAPOOL_DEBUG
@@ -470,6 +482,13 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
     memset(vaddr, 0, pool->size);
 #endif

+    /*
+     * list_empty() on the page tests if the page is already linked into
+     * avail_page_list to avoid adding it more than once.
+     */
+    if (list_empty(&page->avail_page_link))
+        list_add(&page->avail_page_link, &pool->avail_page_list);
+
     page->in_use--;
     *(int *)vaddr = page->offset;
     page->offset = offset;
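A minimal sketch of the allocation-path change (illustration only; the
fixed-size arrays stand in for the kernel's linked lists and all names
are made up): keep only pages that still have free blocks on a separate
"available" list, so picking a page is O(1) instead of scanning every
page in the pool:

    #include <stdio.h>

    #define NPAGES 8

    struct pool_page {
        unsigned int free_blocks;
        int on_avail;               /* already linked on the avail list? */
    };

    static struct pool_page pages[NPAGES];
    static int avail[NPAGES];       /* indices of pages with free blocks */
    static int navail;

    static int alloc_block(void)
    {
        if (navail == 0)
            return -1;              /* caller would allocate a new page */

        int idx = avail[navail - 1];    /* O(1): take any available page */

        if (--pages[idx].free_blocks == 0) {
            pages[idx].on_avail = 0;
            navail--;               /* page is now full: drop it */
        }
        return idx;
    }

    static void free_block(int idx)
    {
        pages[idx].free_blocks++;
        if (!pages[idx].on_avail) { /* add back at most once */
            pages[idx].on_avail = 1;
            avail[navail++] = idx;
        }
    }

    int main(void)
    {
        pages[0] = (struct pool_page){ .free_blocks = 2, .on_avail = 1 };
        avail[navail++] = 0;

        printf("alloc from page %d\n", alloc_block());
        printf("alloc from page %d\n", alloc_block());
        printf("alloc returns %d (no pages with free blocks)\n", alloc_block());
        free_block(0);
        printf("alloc from page %d\n", alloc_block());
        return 0;
    }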
From patchwork Tue May 31 18:23:44 2022
From: Tony Battersby
Subject: [PATCH 10/10] dmapool: improve scalability of dma_pool_free
Date: Tue, 31 May 2022 14:23:44 -0400
Message-ID: <801335ba-00f3-12ae-59e0-119d7d8fd8cd@cybernetics.com>
dma_pool_free() scales poorly when the pool contains many pages because
pool_find_page() does a linear scan of all allocated pages. Improve its
scalability by replacing the linear scan with a red-black tree lookup.
In big O notation, this improves the algorithm from O(n^2) to O(n * log n).

Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
---
 mm/dmapool.c | 128 ++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 100 insertions(+), 28 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index b3dd2ace0d2a..24535483f781 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -12,11 +12,12 @@
  * Many older drivers still have their own code to do this.
  *
  * The current design of this allocator is fairly simple. The pool is
- * represented by the 'struct dma_pool' which keeps a doubly-linked list of
- * allocated pages. Each page in the page_list is split into blocks of at
- * least 'size' bytes. Free blocks are tracked in an unsorted singly-linked
- * list of free blocks within the page. Used blocks aren't tracked, but we
- * keep a count of how many are currently allocated from each page.
+ * represented by the 'struct dma_pool' which keeps a red-black tree of all
+ * allocated pages, keyed by DMA address for fast lookup when freeing.
+ * Each page in the page_tree is split into blocks of at least 'size' bytes.
+ * Free blocks are tracked in an unsorted singly-linked list of free blocks
+ * within the page. Used blocks aren't tracked, but we keep a count of how
+ * many are currently allocated from each page.
  *
  * The avail_page_list keeps track of pages that have one or more free blocks
  * available to (re)allocate. Pages are moved in and out of avail_page_list
@@ -36,6 +37,7 @@
 #include
 #include
 #include
+#include <linux/rbtree.h>
 #include
 #include
 #include
@@ -45,7 +47,7 @@
 #endif

 struct dma_pool {		/* the pool */
-	struct list_head page_list;
+	struct rb_root page_tree;
 	struct list_head avail_page_list;
 	spinlock_t lock;
 	unsigned int size;
@@ -58,7 +60,7 @@ struct dma_pool {		/* the pool */
 };

 struct dma_page {		/* cacheable header for 'allocation' bytes */
-	struct list_head page_list;
+	struct rb_node page_node;
 	struct list_head avail_page_link;
 	void *vaddr;
 	dma_addr_t dma;
@@ -69,6 +71,11 @@ struct dma_page {		/* cacheable header for 'allocation' bytes */
 static DEFINE_MUTEX(pools_lock);
 static DEFINE_MUTEX(pools_reg_lock);

+static inline struct dma_page *rb_to_dma_page(struct rb_node *node)
+{
+	return rb_entry(node, struct dma_page, page_node);
+}
+
 static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	unsigned temp;
@@ -76,6 +83,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
 	char *next;
 	struct dma_page *page;
 	struct dma_pool *pool;
+	struct rb_node *node;

 	next = buf;
 	size = PAGE_SIZE;
@@ -90,7 +98,10 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
 		size_t blocks = 0;

 		spin_lock_irq(&pool->lock);
-		list_for_each_entry(page, &pool->page_list, page_list) {
+		for (node = rb_first(&pool->page_tree);
+		     node;
+		     node = rb_next(node)) {
+			page = rb_to_dma_page(node);
 			pages++;
 			blocks += page->in_use;
 		}
@@ -169,7 +180,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,

 	retval->dev = dev;

-	INIT_LIST_HEAD(&retval->page_list);
+	retval->page_tree = RB_ROOT;
 	INIT_LIST_HEAD(&retval->avail_page_list);
 	spin_lock_init(&retval->lock);
 	retval->size = size;
@@ -213,6 +224,63 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 }
 EXPORT_SYMBOL(dma_pool_create);

+/*
+ * Find the dma_page that manages the given DMA address.
+ */
+static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
+{
+	struct rb_node *node = pool->page_tree.rb_node;
+
+	while (node) {
+		struct dma_page *page = rb_to_dma_page(node);
+
+		if (dma < page->dma)
+			node = node->rb_left;
+		else if ((dma - page->dma) >= pool->allocation)
+			node = node->rb_right;
+		else
+			return page;
+	}
+	return NULL;
+}
+
+/*
+ * Insert a dma_page into the page_tree.
+ */
+static int pool_insert_page(struct dma_pool *pool, struct dma_page *new_page)
+{
+	dma_addr_t dma = new_page->dma;
+	struct rb_node **node = &(pool->page_tree.rb_node), *parent = NULL;
+
+	while (*node) {
+		struct dma_page *this_page = rb_to_dma_page(*node);
+
+		parent = *node;
+		if (dma < this_page->dma)
+			node = &((*node)->rb_left);
+		else if (likely((dma - this_page->dma) >= pool->allocation))
+			node = &((*node)->rb_right);
+		else {
+			/*
+			 * A page that overlaps the new DMA range is already
+			 * present in the tree. This should not happen.
+			 */
+			WARN(1,
+			     "%s: %s: DMA address overlap: old %pad new %pad len %u\n",
+			     dev_name(pool->dev),
+			     pool->name, &this_page->dma, &dma,
+			     pool->allocation);
+			return -1;
+		}
+	}
+
+	/* Add new node and rebalance tree. */
+	rb_link_node(&new_page->page_node, parent, node);
+	rb_insert_color(&new_page->page_node, &pool->page_tree);
+
+	return 0;
+}
+
 static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int offset = 0;
@@ -276,8 +344,16 @@ static void pool_free_page(struct dma_pool *pool,
 		dma_free_coherent(pool->dev, pool->allocation, vaddr, dma);
 	}

-	list_del(&page->page_list);
 	list_del(&page->avail_page_link);
+
+	/*
+	 * If the pool is being destroyed, it is not safe to modify the
+	 * page_tree while iterating over it, and it is also unnecessary since
+	 * the whole tree will be discarded anyway.
+	 */
+	if (!destroying_pool)
+		rb_erase(&page->page_node, &pool->page_tree);
+
 	kfree(page);
 }

@@ -291,7 +367,7 @@ static void pool_free_page(struct dma_pool *pool,
  */
 void dma_pool_destroy(struct dma_pool *pool)
 {
-	struct dma_page *page;
+	struct dma_page *page, *tmp;
 	bool empty = false;

 	if (unlikely(!pool))
@@ -307,9 +383,10 @@ void dma_pool_destroy(struct dma_pool *pool)
 		device_remove_file(pool->dev, &dev_attr_pools);
 	mutex_unlock(&pools_reg_lock);

-	while ((page = list_first_entry_or_null(&pool->page_list,
-						struct dma_page,
-						page_list))) {
+	rbtree_postorder_for_each_entry_safe(page,
+					     tmp,
+					     &pool->page_tree,
+					     page_node) {
 		pool_free_page(pool, page, true);
 	}

@@ -353,7 +430,15 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,

 	spin_lock_irqsave(&pool->lock, flags);

-	list_add(&page->page_list, &pool->page_list);
+	if (unlikely(pool_insert_page(pool, page))) {
+		/*
+		 * This should not happen, so something must have gone horribly
+		 * wrong. Instead of crashing, intentionally leak the memory
+		 * and make for the exit.
+		 */
+		spin_unlock_irqrestore(&pool->lock, flags);
+		return NULL;
+	}
 	list_add(&page->avail_page_link, &pool->avail_page_list);
  ready:
 	page->in_use++;
@@ -395,19 +480,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 }
 EXPORT_SYMBOL(dma_pool_alloc);

-static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
-{
-	struct dma_page *page;
-
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (dma < page->dma)
-			continue;
-		if ((dma - page->dma) < pool->allocation)
-			return page;
-	}
-	return NULL;
-}
-
 /**
  * dma_pool_free - put block back into dma pool
  * @pool: the dma pool holding the block
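For readers unfamiliar with the rbtree idiom used above, the following
userspace C sketch shows the address-range lookup that pool_find_page() now
performs: each page owns the range [dma, dma + allocation), the tree is
ordered by the range's base address, and the search goes left when the
address is below a node's range and right when it is at or past its end. To
stay short and dependency-free, the sketch uses a plain unbalanced binary
search tree with invented names (node, insert, find, and a fixed 4096-byte
allocation); the actual patch relies on the kernel rbtree API, where
rb_link_node() plus rb_insert_color() keep the tree balanced so the per-free
lookup stays O(log n).

/*
 * Userspace sketch of the range-keyed lookup (illustration only, not the
 * kernel implementation; an unbalanced BST stands in for the rbtree).
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct node {
        struct node *left, *right;
        uintptr_t base;                 /* start of the page's DMA range */
};

static const size_t allocation = 4096;  /* illustrative page size */

static struct node *insert(struct node *root, struct node *new_node)
{
        struct node **link = &root;

        while (*link) {
                if (new_node->base < (*link)->base)
                        link = &(*link)->left;
                else
                        link = &(*link)->right;
        }
        *link = new_node;
        return root;
}

/* Mirrors pool_find_page(): left if below the range, right if past its end. */
static struct node *find(struct node *root, uintptr_t dma)
{
        while (root) {
                if (dma < root->base)
                        root = root->left;
                else if (dma - root->base >= allocation)
                        root = root->right;
                else
                        return root;    /* dma falls inside this page */
        }
        return NULL;
}

int main(void)
{
        struct node pages[3] = {
                { .base = 0x10000 }, { .base = 0x30000 }, { .base = 0x20000 },
        };
        struct node *root = NULL;

        for (int i = 0; i < 3; i++)
                root = insert(root, &pages[i]);

        printf("0x20010 -> page at %#lx\n",
               (unsigned long)find(root, 0x20010)->base);
        printf("0x2f000 -> %s\n", find(root, 0x2f000) ? "found" : "not found");
        return 0;
}

The same descend-by-range comparison, run at insertion time, is how
pool_insert_page() detects an overlapping page and WARNs instead of linking a
duplicate into the tree.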