From patchwork Thu Aug 2 19:57:28 2018
X-Patchwork-Submitter: Tony Battersby
X-Patchwork-Id: 10554091
From: Tony Battersby
Subject: [PATCH v2 2/9] dmapool: cleanup error messages
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash, Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org, linux-mm, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Date: Thu, 2 Aug 2018 15:57:28 -0400

Remove code duplication in error messages.  It is now safe to pass a NULL
dev to dev_err(), so the checks to avoid doing so are no longer necessary.
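For reference, each call site previously open-coded the same fallback, and
the change collapses it to a single dev_err() call.  Condensed from the
diff below, using the dma_pool_destroy() message as the example:

Before:
        if (pool->dev)
                dev_err(pool->dev, "dma_pool_destroy %s, %p busy\n",
                        pool->name, page->vaddr);
        else
                pr_err("dma_pool_destroy %s, %p busy\n",
                       pool->name, page->vaddr);

After:
        dev_err(pool->dev, "dma_pool_destroy %s, %p busy\n",
                pool->name, page->vaddr);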
Example: Error message with dev != NULL: mpt3sas 0000:02:00.0: dma_pool_destroy chain pool, (____ptrval____) busy Same error message with dev == NULL before patch: dma_pool_destroy chain pool, (____ptrval____) busy Same error message with dev == NULL after patch: (NULL device *): dma_pool_destroy chain pool, (____ptrval____) busy Signed-off-by: Tony Battersby --- linux/mm/dmapool.c.orig 2018-08-02 09:54:25.000000000 -0400 +++ linux/mm/dmapool.c 2018-08-02 09:57:58.000000000 -0400 @@ -289,13 +289,9 @@ void dma_pool_destroy(struct dma_pool *p page = list_entry(pool->page_list.next, struct dma_page, page_list); if (is_page_busy(page)) { - if (pool->dev) - dev_err(pool->dev, - "dma_pool_destroy %s, %p busy\n", - pool->name, page->vaddr); - else - pr_err("dma_pool_destroy %s, %p busy\n", - pool->name, page->vaddr); + dev_err(pool->dev, + "dma_pool_destroy %s, %p busy\n", + pool->name, page->vaddr); /* leak the still-in-use consistent memory */ list_del(&page->page_list); kfree(page); @@ -357,13 +353,9 @@ void *dma_pool_alloc(struct dma_pool *po for (i = sizeof(page->offset); i < pool->size; i++) { if (data[i] == POOL_POISON_FREED) continue; - if (pool->dev) - dev_err(pool->dev, - "dma_pool_alloc %s, %p (corrupted)\n", - pool->name, retval); - else - pr_err("dma_pool_alloc %s, %p (corrupted)\n", - pool->name, retval); + dev_err(pool->dev, + "dma_pool_alloc %s, %p (corrupted)\n", + pool->name, retval); /* * Dump the first 4 bytes even if they are not @@ -418,13 +410,9 @@ void dma_pool_free(struct dma_pool *pool page = pool_find_page(pool, dma); if (!page) { spin_unlock_irqrestore(&pool->lock, flags); - if (pool->dev) - dev_err(pool->dev, - "dma_pool_free %s, %p/%lx (bad dma)\n", - pool->name, vaddr, (unsigned long)dma); - else - pr_err("dma_pool_free %s, %p/%lx (bad dma)\n", - pool->name, vaddr, (unsigned long)dma); + dev_err(pool->dev, + "dma_pool_free %s, %p/%lx (bad dma)\n", + pool->name, vaddr, (unsigned long)dma); return; } @@ -432,13 +420,9 @@ void dma_pool_free(struct dma_pool *pool #ifdef DMAPOOL_DEBUG if ((dma - page->dma) != offset) { spin_unlock_irqrestore(&pool->lock, flags); - if (pool->dev) - dev_err(pool->dev, - "dma_pool_free %s, %p (bad vaddr)/%pad\n", - pool->name, vaddr, &dma); - else - pr_err("dma_pool_free %s, %p (bad vaddr)/%pad\n", - pool->name, vaddr, &dma); + dev_err(pool->dev, + "dma_pool_free %s, %p (bad vaddr)/%pad\n", + pool->name, vaddr, &dma); return; } { @@ -449,12 +433,9 @@ void dma_pool_free(struct dma_pool *pool continue; } spin_unlock_irqrestore(&pool->lock, flags); - if (pool->dev) - dev_err(pool->dev, "dma_pool_free %s, dma %pad already free\n", - pool->name, &dma); - else - pr_err("dma_pool_free %s, dma %pad already free\n", - pool->name, &dma); + dev_err(pool->dev, + "dma_pool_free %s, dma %pad already free\n", + pool->name, &dma); return; } } From patchwork Thu Aug 2 19:58:04 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Battersby X-Patchwork-Id: 10554097 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B9811174A for ; Thu, 2 Aug 2018 19:58:08 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B5E6B2AAA1 for ; Thu, 2 Aug 2018 19:58:08 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A964E2B2B4; Thu, 2 Aug 2018 19:58:08 +0000 (UTC) 
From: Tony Battersby
Subject: [PATCH v2 3/9] dmapool: cleanup dma_pool_destroy
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash, Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org, linux-mm, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Message-ID: <924a1e83-285a-a258-ed45-ad035411bd83@cybernetics.com>
Date: Thu, 2 Aug 2018 15:58:04 -0400

Remove a small amount of code duplication between dma_pool_destroy() and
pool_free_page() in preparation for adding more code without having to
duplicate it.  No functional changes.
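For reference, with this change pool_free_page() handles both the busy and
the idle case itself, so dma_pool_destroy() can simply call it for every
page.  The resulting helper, reconstructed from the diff below, is:

static void pool_free_page(struct dma_pool *pool, struct dma_page *page)
{
        void *vaddr = page->vaddr;
        dma_addr_t dma = page->dma;

        list_del(&page->page_list);

        if (is_page_busy(page)) {
                dev_err(pool->dev,
                        "dma_pool_destroy %s, %p busy\n",
                        pool->name, vaddr);
                /* leak the still-in-use consistent memory */
        } else {
#ifdef DMAPOOL_DEBUG
                memset(vaddr, POOL_POISON_FREED, pool->allocation);
#endif
                dma_free_coherent(pool->dev, pool->allocation, vaddr, dma);
        }
        kfree(page);
}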
Signed-off-by: Tony Battersby --- linux/mm/dmapool.c.orig 2018-08-02 09:59:15.000000000 -0400 +++ linux/mm/dmapool.c 2018-08-02 10:01:26.000000000 -0400 @@ -249,13 +249,22 @@ static inline bool is_page_busy(struct d static void pool_free_page(struct dma_pool *pool, struct dma_page *page) { + void *vaddr = page->vaddr; dma_addr_t dma = page->dma; + list_del(&page->page_list); + + if (is_page_busy(page)) { + dev_err(pool->dev, + "dma_pool_destroy %s, %p busy\n", + pool->name, vaddr); + /* leak the still-in-use consistent memory */ + } else { #ifdef DMAPOOL_DEBUG - memset(page->vaddr, POOL_POISON_FREED, pool->allocation); + memset(vaddr, POOL_POISON_FREED, pool->allocation); #endif - dma_free_coherent(pool->dev, pool->allocation, page->vaddr, dma); - list_del(&page->page_list); + dma_free_coherent(pool->dev, pool->allocation, vaddr, dma); + } kfree(page); } @@ -269,6 +278,7 @@ static void pool_free_page(struct dma_po */ void dma_pool_destroy(struct dma_pool *pool) { + struct dma_page *page; bool empty = false; if (unlikely(!pool)) @@ -284,19 +294,10 @@ void dma_pool_destroy(struct dma_pool *p device_remove_file(pool->dev, &dev_attr_pools); mutex_unlock(&pools_reg_lock); - while (!list_empty(&pool->page_list)) { - struct dma_page *page; - page = list_entry(pool->page_list.next, - struct dma_page, page_list); - if (is_page_busy(page)) { - dev_err(pool->dev, - "dma_pool_destroy %s, %p busy\n", - pool->name, page->vaddr); - /* leak the still-in-use consistent memory */ - list_del(&page->page_list); - kfree(page); - } else - pool_free_page(pool, page); + while ((page = list_first_entry_or_null(&pool->page_list, + struct dma_page, + page_list))) { + pool_free_page(pool, page); } kfree(pool); From patchwork Thu Aug 2 19:58:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Battersby X-Patchwork-Id: 10554101 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 29E6C1708 for ; Thu, 2 Aug 2018 19:58:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 25B222C07C for ; Thu, 2 Aug 2018 19:58:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1754A2C085; Thu, 2 Aug 2018 19:58:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_NONE autolearn=unavailable version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 76AE32C07C for ; Thu, 2 Aug 2018 19:58:45 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9C8596B000D; Thu, 2 Aug 2018 15:58:44 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 977BE6B000E; Thu, 2 Aug 2018 15:58:44 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 86AA96B0010; Thu, 2 Aug 2018 15:58:44 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk0-f200.google.com (mail-qk0-f200.google.com [209.85.220.200]) by kanga.kvack.org (Postfix) with ESMTP id 5F6DD6B000D for ; Thu, 2 Aug 2018 15:58:44 -0400 (EDT) Received: by 
From: Tony Battersby
Subject: [PATCH v2 4/9] dmapool: improve scalability of dma_pool_alloc
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash, Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org, linux-mm, linux-scsi, MPT-FusionLinux.pdl@broadcom.com
Message-ID: <1dbe6204-17fc-efd9-2381-48186cae2b94@cybernetics.com>
Date: Thu, 2 Aug 2018 15:58:40 -0400

dma_pool_alloc() scales poorly when allocating a large number of pages
because it does a linear scan of all previously-allocated pages before
allocating a new one.  Improve its scalability by maintaining a separate
list of pages that have free blocks ready to (re)allocate.  In big O
notation, this improves the algorithm from O(n^2) to O(n).

Signed-off-by: Tony Battersby
---

Changes since v1:

*) In v1, there was one (original) list for all pages and one (new) list
for pages with free blocks.  In v2, there is one list for pages with free
blocks and one list for pages without free blocks, and pages are moved
back and forth between the two lists.  This is to avoid bloating struct
dma_page with extra list pointers, which is important so that a later
patch can move its fields into struct page.

*) Use list_first_entry_or_null instead of !list_empty/list_first_entry.

Note that pool_find_page() will be removed entirely by a later patch, so
the extra code there won't stay for long.

--- linux/mm/dmapool.c.orig 2018-08-02 10:01:26.000000000 -0400 +++ linux/mm/dmapool.c 2018-08-02 10:03:46.000000000 -0400 @@ -15,11 +15,16 @@ * Many older drivers still have their own code to do this. * * The current design of this allocator is fairly simple. The pool is - * represented by the 'struct dma_pool' which keeps a doubly-linked list of - * allocated pages.
Each page in the page_list is split into blocks of at - * least 'size' bytes. Free blocks are tracked in an unsorted singly-linked - * list of free blocks within the page. Used blocks aren't tracked, but we - * keep a count of how many are currently allocated from each page. + * represented by the 'struct dma_pool'. Each allocated page is split into + * blocks of at least 'size' bytes. Free blocks are tracked in an unsorted + * singly-linked list of free blocks within the page. Used blocks aren't + * tracked, but we keep a count of how many are currently allocated from each + * page. + * + * The pool keeps two doubly-linked list of allocated pages. The 'available' + * list tracks pages that have one or more free blocks, and the 'full' list + * tracks pages that have no free blocks. Pages are moved from one list to + * the other as their blocks are allocated and freed. */ #include @@ -43,7 +48,10 @@ #endif struct dma_pool { /* the pool */ - struct list_head page_list; +#define POOL_FULL_IDX 0 +#define POOL_AVAIL_IDX 1 +#define POOL_N_LISTS 2 + struct list_head page_list[POOL_N_LISTS]; spinlock_t lock; size_t size; struct device *dev; @@ -54,7 +62,7 @@ struct dma_pool { /* the pool */ }; struct dma_page { /* cacheable header for 'allocation' bytes */ - struct list_head page_list; + struct list_head dma_list; void *vaddr; dma_addr_t dma; unsigned int in_use; @@ -70,7 +78,6 @@ show_pools(struct device *dev, struct de unsigned temp; unsigned size; char *next; - struct dma_page *page; struct dma_pool *pool; next = buf; @@ -84,11 +91,18 @@ show_pools(struct device *dev, struct de list_for_each_entry(pool, &dev->dma_pools, pools) { unsigned pages = 0; unsigned blocks = 0; + int list_idx; spin_lock_irq(&pool->lock); - list_for_each_entry(page, &pool->page_list, page_list) { - pages++; - blocks += page->in_use; + for (list_idx = 0; list_idx < POOL_N_LISTS; list_idx++) { + struct dma_page *page; + + list_for_each_entry(page, + &pool->page_list[list_idx], + dma_list) { + pages++; + blocks += page->in_use; + } } spin_unlock_irq(&pool->lock); @@ -163,7 +177,8 @@ struct dma_pool *dma_pool_create(const c retval->dev = dev; - INIT_LIST_HEAD(&retval->page_list); + INIT_LIST_HEAD(&retval->page_list[0]); + INIT_LIST_HEAD(&retval->page_list[1]); spin_lock_init(&retval->lock); retval->size = size; retval->boundary = boundary; @@ -252,7 +267,7 @@ static void pool_free_page(struct dma_po void *vaddr = page->vaddr; dma_addr_t dma = page->dma; - list_del(&page->page_list); + list_del(&page->dma_list); if (is_page_busy(page)) { dev_err(pool->dev, @@ -278,8 +293,8 @@ static void pool_free_page(struct dma_po */ void dma_pool_destroy(struct dma_pool *pool) { - struct dma_page *page; bool empty = false; + int list_idx; if (unlikely(!pool)) return; @@ -294,10 +309,15 @@ void dma_pool_destroy(struct dma_pool *p device_remove_file(pool->dev, &dev_attr_pools); mutex_unlock(&pools_reg_lock); - while ((page = list_first_entry_or_null(&pool->page_list, - struct dma_page, - page_list))) { - pool_free_page(pool, page); + for (list_idx = 0; list_idx < POOL_N_LISTS; list_idx++) { + struct dma_page *page; + + while ((page = list_first_entry_or_null( + &pool->page_list[list_idx], + struct dma_page, + dma_list))) { + pool_free_page(pool, page); + } } kfree(pool); @@ -325,10 +345,11 @@ void *dma_pool_alloc(struct dma_pool *po might_sleep_if(gfpflags_allow_blocking(mem_flags)); spin_lock_irqsave(&pool->lock, flags); - list_for_each_entry(page, &pool->page_list, page_list) { - if (page->offset < pool->allocation) - goto ready; - } + 
page = list_first_entry_or_null(&pool->page_list[POOL_AVAIL_IDX], + struct dma_page, + dma_list); + if (page) + goto ready; /* pool_alloc_page() might sleep, so temporarily drop &pool->lock */ spin_unlock_irqrestore(&pool->lock, flags); @@ -339,11 +360,16 @@ void *dma_pool_alloc(struct dma_pool *po spin_lock_irqsave(&pool->lock, flags); - list_add(&page->page_list, &pool->page_list); + list_add(&page->dma_list, &pool->page_list[POOL_AVAIL_IDX]); ready: page->in_use++; offset = page->offset; page->offset = *(int *)(page->vaddr + offset); + if (page->offset >= pool->allocation) { + /* Move page from the "available" list to the "full" list. */ + list_del(&page->dma_list); + list_add(&page->dma_list, &pool->page_list[POOL_FULL_IDX]); + } retval = offset + page->vaddr; *handle = offset + page->dma; #ifdef DMAPOOL_DEBUG @@ -381,13 +407,19 @@ EXPORT_SYMBOL(dma_pool_alloc); static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma) { - struct dma_page *page; + int list_idx; + + for (list_idx = 0; list_idx < POOL_N_LISTS; list_idx++) { + struct dma_page *page; - list_for_each_entry(page, &pool->page_list, page_list) { - if (dma < page->dma) - continue; - if ((dma - page->dma) < pool->allocation) - return page; + list_for_each_entry(page, + &pool->page_list[list_idx], + dma_list) { + if (dma < page->dma) + continue; + if ((dma - page->dma) < pool->allocation) + return page; + } } return NULL; } @@ -444,6 +476,11 @@ void dma_pool_free(struct dma_pool *pool #endif page->in_use--; + if (page->offset >= pool->allocation) { + /* Move page from the "full" list to the "available" list. */ + list_del(&page->dma_list); + list_add(&page->dma_list, &pool->page_list[POOL_AVAIL_IDX]); + } *(int *)vaddr = page->offset; page->offset = offset; /* From patchwork Thu Aug 2 19:59:15 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Battersby X-Patchwork-Id: 10554105 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D01A61708 for ; Thu, 2 Aug 2018 19:59:19 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CB6672C07C for ; Thu, 2 Aug 2018 19:59:19 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BF4832C085; Thu, 2 Aug 2018 19:59:19 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_NONE autolearn=unavailable version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5DFA42C07C for ; Thu, 2 Aug 2018 19:59:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7A5B76B0010; Thu, 2 Aug 2018 15:59:18 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 755326B0266; Thu, 2 Aug 2018 15:59:18 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 66E176B0269; Thu, 2 Aug 2018 15:59:18 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk0-f199.google.com (mail-qk0-f199.google.com [209.85.220.199]) by kanga.kvack.org (Postfix) with ESMTP id 3E7DA6B0010 for ; 
From: Tony Battersby
Subject: [PATCH v2 5/9] dmapool: rename fields in dma_page
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash, Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org, linux-mm@kvack.org, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Date: Thu, 2 Aug 2018 15:59:15 -0400

Rename fields in 'struct dma_page' in preparation for moving them into
'struct page'.  No functional changes.
in_use -> dma_in_use offset -> dma_free_o Signed-off-by: Tony Battersby --- linux/mm/dmapool.c.orig 2018-08-02 10:03:46.000000000 -0400 +++ linux/mm/dmapool.c 2018-08-02 10:06:32.000000000 -0400 @@ -65,8 +65,8 @@ struct dma_page { /* cacheable header f struct list_head dma_list; void *vaddr; dma_addr_t dma; - unsigned int in_use; - unsigned int offset; + unsigned int dma_in_use; + unsigned int dma_free_o; }; static DEFINE_MUTEX(pools_lock); @@ -101,7 +101,7 @@ show_pools(struct device *dev, struct de &pool->page_list[list_idx], dma_list) { pages++; - blocks += page->in_use; + blocks += page->dma_in_use; } } spin_unlock_irq(&pool->lock); @@ -248,8 +248,8 @@ static struct dma_page *pool_alloc_page( memset(page->vaddr, POOL_POISON_FREED, pool->allocation); #endif pool_initialise_page(pool, page); - page->in_use = 0; - page->offset = 0; + page->dma_in_use = 0; + page->dma_free_o = 0; } else { kfree(page); page = NULL; @@ -259,7 +259,7 @@ static struct dma_page *pool_alloc_page( static inline bool is_page_busy(struct dma_page *page) { - return page->in_use != 0; + return page->dma_in_use != 0; } static void pool_free_page(struct dma_pool *pool, struct dma_page *page) @@ -362,10 +362,10 @@ void *dma_pool_alloc(struct dma_pool *po list_add(&page->dma_list, &pool->page_list[POOL_AVAIL_IDX]); ready: - page->in_use++; - offset = page->offset; - page->offset = *(int *)(page->vaddr + offset); - if (page->offset >= pool->allocation) { + page->dma_in_use++; + offset = page->dma_free_o; + page->dma_free_o = *(int *)(page->vaddr + offset); + if (page->dma_free_o >= pool->allocation) { /* Move page from the "available" list to the "full" list. */ list_del(&page->dma_list); list_add(&page->dma_list, &pool->page_list[POOL_FULL_IDX]); @@ -376,8 +376,8 @@ void *dma_pool_alloc(struct dma_pool *po { int i; u8 *data = retval; - /* page->offset is stored in first 4 bytes */ - for (i = sizeof(page->offset); i < pool->size; i++) { + /* page->dma_free_o is stored in first 4 bytes */ + for (i = sizeof(page->dma_free_o); i < pool->size; i++) { if (data[i] == POOL_POISON_FREED) continue; dev_err(pool->dev, @@ -459,7 +459,7 @@ void dma_pool_free(struct dma_pool *pool return; } { - unsigned int chain = page->offset; + unsigned int chain = page->dma_free_o; while (chain < pool->allocation) { if (chain != offset) { chain = *(int *)(page->vaddr + chain); @@ -475,14 +475,14 @@ void dma_pool_free(struct dma_pool *pool memset(vaddr, POOL_POISON_FREED, pool->size); #endif - page->in_use--; - if (page->offset >= pool->allocation) { + page->dma_in_use--; + if (page->dma_free_o >= pool->allocation) { /* Move page from the "full" list to the "available" list. 
*/ list_del(&page->dma_list); list_add(&page->dma_list, &pool->page_list[POOL_AVAIL_IDX]); } - *(int *)vaddr = page->offset; - page->offset = offset; + *(int *)vaddr = page->dma_free_o; + page->dma_free_o = offset; /* * Resist a temptation to do * if (!is_page_busy(page)) pool_free_page(pool, page); From patchwork Thu Aug 2 19:59:53 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Battersby X-Patchwork-Id: 10554109 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B7B914E2 for ; Thu, 2 Aug 2018 20:00:01 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1599B2C089 for ; Thu, 2 Aug 2018 20:00:01 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 09AA22C08F; Thu, 2 Aug 2018 20:00:01 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_NONE autolearn=unavailable version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 51E702C094 for ; Thu, 2 Aug 2018 20:00:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 72F556B0269; Thu, 2 Aug 2018 15:59:59 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 6DFAF6B026A; Thu, 2 Aug 2018 15:59:59 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5A99D6B026B; Thu, 2 Aug 2018 15:59:59 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qt0-f200.google.com (mail-qt0-f200.google.com [209.85.216.200]) by kanga.kvack.org (Postfix) with ESMTP id 3154D6B0269 for ; Thu, 2 Aug 2018 15:59:59 -0400 (EDT) Received: by mail-qt0-f200.google.com with SMTP id b8-v6so2481190qto.16 for ; Thu, 02 Aug 2018 12:59:59 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-original-authentication-results:x-gm-message-state:from:subject :to:message-id:date:user-agent:mime-version :content-transfer-encoding:content-language; bh=BgugMbxSPQvMae6/uaSaO8CveqAD4MBPuQg9v4ZYxBQ=; b=r6S0gimF//eOpF3NG4cFHhAfIJAzB5qFEbu85t52iKYWdPFQvIdHzy4RcFCZoCw6dX oVgji8meLAcayDAyBNK31AOSVgTUWcv075kjcDuuFLa7AECyGVQLuAfbLc1r72nJoK3w 3kFox/MHfcCanP2FPZ1FmmAKKenyqM3dLyMFGwC5KR7bQUxJPwtszpVCGI776KHTTpYC r39fvPgnsjBA1N9fnY3kjG0T9elwdR6jPsnRin364lHUtFVqyE3h2FCfdSuKS6uDVjK8 NOTU3Beq4GlB9bNawHNgQ2K5f3Chlg5tgKq6MuQZR90rzNvPPXPRriiUBRgByfCVCX0q 3c8g== X-Original-Authentication-Results: mx.google.com; spf=pass (google.com: domain of btv1==75224751413==tonyb@cybernetics.com designates 173.71.130.66 as permitted sender) smtp.mailfrom="btv1==75224751413==tonyb@cybernetics.com" X-Gm-Message-State: AOUpUlH3pF+uo5CRJuHAOVGFz2WgmpKqy5kPPnzpA5tVEbctKqYpLfei b9E7/5jQytyZ8VVAP74DjOoqC9232SJnemq8lRBhTff+2/O+psJOeq/ZTqBet05eoQ6IKpn0qR1 p3mKWsF/Lz6zge3I1WyrgSMUIu5DfeLRaWWz8z15lgxre0VkmqwIdmxQFPoQj7tvfdw== X-Received: by 2002:ac8:34d3:: with SMTP id x19-v6mr910582qtb.81.1533239998932; Thu, 02 Aug 2018 12:59:58 -0700 (PDT) X-Google-Smtp-Source: AAOMgpenJh3b8ie2Ka+qFFybS/45FnyFCSJrY9dRw3M/1wXpP4mZWJw7Q8Fs5C+uPAwfXGxwlj/5 
From: Tony Battersby
Subject: [PATCH v2 6/9] dmapool: improve scalability of dma_pool_free
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash, Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org, linux-mm@kvack.org, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Date: Thu, 2 Aug 2018 15:59:53 -0400
X-Virus-Scanned: ClamAV using ClamSMTP dma_pool_free() scales poorly when the pool contains many pages because pool_find_page() does a linear scan of all allocated pages. Improve its scalability by replacing the linear scan with virt_to_page() and storing dmapool private data directly in 'struct page', thereby eliminating 'struct dma_page'. In big O notation, this improves the algorithm from O(n^2) to O(n) while also reducing memory usage. Thanks to Matthew Wilcox for the suggestion to use struct page. Signed-off-by: Tony Battersby --- Completely rewritten since v1. Prior to this patch, if you passed dma_pool_free() a bad dma address, then pool_find_page() wouldn't be able to find it in the pool, so it would print an error and return. But this patch removes pool_find_page(), so I moved one of the faster sanity checks from DMAPOOL_DEBUG to always-enabled. It should be cheap enough, especially given the speed improvement this patch set gives overall. The check will at least verify that the page was probably allocated by a dma pool (by checking that page->dma is consistent with the passed-in dma address), although it can't verify that it was the same pool that is being passed to dma_pool_free(). I would have liked to add a pointer from the 'struct page' back to the 'struct dma_pool', but there isn't enough space in 'struct page' without going through painful measures that aren't worth it for a debug check. --- linux/include/linux/mm_types.h.orig 2018-08-01 17:59:46.000000000 -0400 +++ linux/include/linux/mm_types.h 2018-08-01 17:59:56.000000000 -0400 @@ -153,6 +153,12 @@ struct page { unsigned long _zd_pad_1; /* uses mapping */ }; + struct { /* dma_pool pages */ + struct list_head dma_list; + dma_addr_t dma; + unsigned int dma_free_o; + }; + /** @rcu_head: You can use this to free a page by RCU. */ struct rcu_head rcu_head; }; @@ -174,6 +180,8 @@ struct page { unsigned int active; /* SLAB */ int units; /* SLOB */ + + unsigned int dma_in_use; /* dma_pool pages */ }; /* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */ --- linux/mm/dmapool.c.orig 2018-08-02 10:07:47.000000000 -0400 +++ linux/mm/dmapool.c 2018-08-02 10:10:38.000000000 -0400 @@ -25,6 +25,10 @@ * list tracks pages that have one or more free blocks, and the 'full' list * tracks pages that have no free blocks. Pages are moved from one list to * the other as their blocks are allocated and freed. + * + * When allocating DMA pages, we use some available space in 'struct page' to + * store data private to dmapool; search 'dma_pool' in the definition of + * 'struct page' for details. 
*/ #include @@ -61,14 +65,6 @@ struct dma_pool { /* the pool */ struct list_head pools; }; -struct dma_page { /* cacheable header for 'allocation' bytes */ - struct list_head dma_list; - void *vaddr; - dma_addr_t dma; - unsigned int dma_in_use; - unsigned int dma_free_o; -}; - static DEFINE_MUTEX(pools_lock); static DEFINE_MUTEX(pools_reg_lock); @@ -95,7 +91,7 @@ show_pools(struct device *dev, struct de spin_lock_irq(&pool->lock); for (list_idx = 0; list_idx < POOL_N_LISTS; list_idx++) { - struct dma_page *page; + struct page *page; list_for_each_entry(page, &pool->page_list[list_idx], @@ -218,7 +214,7 @@ struct dma_pool *dma_pool_create(const c } EXPORT_SYMBOL(dma_pool_create); -static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page) +static void pool_initialize_free_block_list(struct dma_pool *pool, void *vaddr) { unsigned int offset = 0; unsigned int next_boundary = pool->boundary; @@ -229,47 +225,57 @@ static void pool_initialise_page(struct next = next_boundary; next_boundary += pool->boundary; } - *(int *)(page->vaddr + offset) = next; + *(int *)(vaddr + offset) = next; offset = next; } while (offset < pool->allocation); } -static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags) +static struct page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags) { - struct dma_page *page; + struct page *page; + dma_addr_t dma; + void *vaddr; - page = kmalloc(sizeof(*page), mem_flags); - if (!page) + vaddr = dma_alloc_coherent(pool->dev, pool->allocation, &dma, + mem_flags); + if (!vaddr) return NULL; - page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation, - &page->dma, mem_flags); - if (page->vaddr) { + #ifdef DMAPOOL_DEBUG - memset(page->vaddr, POOL_POISON_FREED, pool->allocation); + memset(vaddr, POOL_POISON_FREED, pool->allocation); #endif - pool_initialise_page(pool, page); - page->dma_in_use = 0; - page->dma_free_o = 0; - } else { - kfree(page); - page = NULL; - } + pool_initialize_free_block_list(pool, vaddr); + + page = virt_to_page(vaddr); + page->dma = dma; + page->dma_free_o = 0; + page->dma_in_use = 0; + return page; } -static inline bool is_page_busy(struct dma_page *page) +static inline bool is_page_busy(struct page *page) { return page->dma_in_use != 0; } -static void pool_free_page(struct dma_pool *pool, struct dma_page *page) +static void pool_free_page(struct dma_pool *pool, struct page *page) { - void *vaddr = page->vaddr; + /* Save local copies of some page fields. */ + void *vaddr = page_to_virt(page); + bool busy = is_page_busy(page); dma_addr_t dma = page->dma; list_del(&page->dma_list); - if (is_page_busy(page)) { + /* Clear all the page fields we use. 
*/ + page->dma_list.next = NULL; + page->dma_list.prev = NULL; + page->dma = 0; + page->dma_free_o = 0; + page_mapcount_reset(page); /* clear dma_in_use */ + + if (busy) { dev_err(pool->dev, "dma_pool_destroy %s, %p busy\n", pool->name, vaddr); @@ -280,7 +286,6 @@ static void pool_free_page(struct dma_po #endif dma_free_coherent(pool->dev, pool->allocation, vaddr, dma); } - kfree(page); } /** @@ -310,11 +315,11 @@ void dma_pool_destroy(struct dma_pool *p mutex_unlock(&pools_reg_lock); for (list_idx = 0; list_idx < POOL_N_LISTS; list_idx++) { - struct dma_page *page; + struct page *page; while ((page = list_first_entry_or_null( &pool->page_list[list_idx], - struct dma_page, + struct page, dma_list))) { pool_free_page(pool, page); } @@ -338,15 +343,16 @@ void *dma_pool_alloc(struct dma_pool *po dma_addr_t *handle) { unsigned long flags; - struct dma_page *page; + struct page *page; size_t offset; void *retval; + void *vaddr; might_sleep_if(gfpflags_allow_blocking(mem_flags)); spin_lock_irqsave(&pool->lock, flags); page = list_first_entry_or_null(&pool->page_list[POOL_AVAIL_IDX], - struct dma_page, + struct page, dma_list); if (page) goto ready; @@ -362,15 +368,16 @@ void *dma_pool_alloc(struct dma_pool *po list_add(&page->dma_list, &pool->page_list[POOL_AVAIL_IDX]); ready: + vaddr = page_to_virt(page); page->dma_in_use++; offset = page->dma_free_o; - page->dma_free_o = *(int *)(page->vaddr + offset); + page->dma_free_o = *(int *)(vaddr + offset); if (page->dma_free_o >= pool->allocation) { /* Move page from the "available" list to the "full" list. */ list_del(&page->dma_list); list_add(&page->dma_list, &pool->page_list[POOL_FULL_IDX]); } - retval = offset + page->vaddr; + retval = offset + vaddr; *handle = offset + page->dma; #ifdef DMAPOOL_DEBUG { @@ -405,25 +412,6 @@ void *dma_pool_alloc(struct dma_pool *po } EXPORT_SYMBOL(dma_pool_alloc); -static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma) -{ - int list_idx; - - for (list_idx = 0; list_idx < POOL_N_LISTS; list_idx++) { - struct dma_page *page; - - list_for_each_entry(page, - &pool->page_list[list_idx], - dma_list) { - if (dma < page->dma) - continue; - if ((dma - page->dma) < pool->allocation) - return page; - } - } - return NULL; -} - /** * dma_pool_free - put block back into dma pool * @pool: the dma pool holding the block @@ -435,34 +423,35 @@ static struct dma_page *pool_find_page(s */ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma) { - struct dma_page *page; + struct page *page; unsigned long flags; unsigned int offset; - spin_lock_irqsave(&pool->lock, flags); - page = pool_find_page(pool, dma); - if (!page) { - spin_unlock_irqrestore(&pool->lock, flags); + if (unlikely(!virt_addr_valid(vaddr))) { dev_err(pool->dev, - "dma_pool_free %s, %p/%lx (bad dma)\n", - pool->name, vaddr, (unsigned long)dma); + "dma_pool_free %s, %p (bad vaddr)/%pad\n", + pool->name, vaddr, &dma); return; } - offset = vaddr - page->vaddr; -#ifdef DMAPOOL_DEBUG - if ((dma - page->dma) != offset) { - spin_unlock_irqrestore(&pool->lock, flags); + page = virt_to_page(vaddr); + offset = offset_in_page(vaddr); + + if (unlikely((dma - page->dma) != offset)) { dev_err(pool->dev, - "dma_pool_free %s, %p (bad vaddr)/%pad\n", + "dma_pool_free %s, %p (bad vaddr)/%pad (or bad dma)\n", pool->name, vaddr, &dma); return; } + + spin_lock_irqsave(&pool->lock, flags); +#ifdef DMAPOOL_DEBUG { + void *page_vaddr = vaddr - offset; unsigned int chain = page->dma_free_o; while (chain < pool->allocation) { if (chain != offset) { - chain 
= *(int *)(page->vaddr + chain); + chain = *(int *)(page_vaddr + chain); continue; } spin_unlock_irqrestore(&pool->lock, flags);
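The patch above keeps the long-standing dmapool trick of threading the freelist
through the free blocks themselves: the first int of every free block stores the
offset of the next free block, and page->dma_free_o stores the offset of the chain
head. As a rough illustration only, the following stand-alone user-space sketch
mimics that scheme; the arena, ALLOCATION/BLOCK_SIZE values, and function names
are invented for the example and are not the kernel code, which operates on pool
pages and dma_addr_t handles instead.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define ALLOCATION 4096          /* one "page" carved into blocks */
#define BLOCK_SIZE 64            /* stands in for pool->size      */

static unsigned char arena[ALLOCATION];
static int free_head;            /* stands in for page->dma_free_o */

/* Chain every block through its first int, terminated by ALLOCATION. */
static void init_freelist(void)
{
        int offset;

        for (offset = 0; offset < ALLOCATION; offset += BLOCK_SIZE)
                *(int *)(arena + offset) = offset + BLOCK_SIZE;
        free_head = 0;
}

/* Pop the head of the chain, as dma_pool_alloc() does. */
static void *blk_alloc(void)
{
        int offset = free_head;

        if (offset >= ALLOCATION)
                return NULL;                    /* "page" is full */
        free_head = *(int *)(arena + offset);   /* next free block */
        return arena + offset;
}

/* Push the block back onto the head of the chain, as dma_pool_free() does. */
static void blk_free(void *blk)
{
        int offset = (unsigned char *)blk - arena;

        *(int *)blk = free_head;
        free_head = offset;
}

int main(void)
{
        void *a, *b;

        init_freelist();
        a = blk_alloc();
        b = blk_alloc();
        assert(a == arena && b == arena + BLOCK_SIZE);
        blk_free(a);
        assert(blk_alloc() == a);    /* LIFO reuse of the freed block */
        printf("freelist sketch OK\n");
        return 0;
}

Because both allocation and free only touch the head of the chain, they stay O(1)
no matter how many blocks the page holds.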

From patchwork Thu Aug 2 20:00:37 2018
From: Tony Battersby
Subject: [PATCH v2 7/9] dmapool: debug: prevent endless loop in case of corruption
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash,
 Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org,
 linux-mm@kvack.org, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Message-ID: <36e483e9-d779-497a-551e-32f96e184b49@cybernetics.com>
Date: Thu, 2 Aug 2018 16:00:37 -0400

Prevent a possible endless loop with DMAPOOL_DEBUG enabled if a buggy
driver corrupts DMA pool memory.
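The guard works because a healthy chain can never describe more free bytes than
one allocation contains, so counting pool->size per visited link bounds the walk
even when a use-after-free has made the chain circular. A minimal user-space
sketch of that idea follows; the ALLOCATION/BLOCK_SIZE values are made up, and
the bound is slightly looser than in the patch because this toy walks the whole
chain instead of searching for one offset.

#include <stdio.h>

#define ALLOCATION 4096
#define BLOCK_SIZE 64

static unsigned char arena[ALLOCATION];

/* Walk the free chain starting at 'head'; return 1 if it looks sane. */
static int chain_is_sane(int head)
{
        size_t total_free = 0;
        int chain = head;

        while (chain < ALLOCATION) {
                /* Without this bound a circular chain would loop forever. */
                total_free += BLOCK_SIZE;
                if (total_free >= ALLOCATION + BLOCK_SIZE)
                        return 0;               /* freelist corrupted */
                chain = *(int *)(arena + chain);
        }
        return 1;
}

int main(void)
{
        int offset;

        /* Build a good chain: 0 -> 64 -> 128 -> ... -> ALLOCATION. */
        for (offset = 0; offset < ALLOCATION; offset += BLOCK_SIZE)
                *(int *)(arena + offset) = offset + BLOCK_SIZE;
        printf("good chain sane: %d\n", chain_is_sane(0));

        /* Simulate driver corruption: make block 128 point back to 0. */
        *(int *)(arena + 128) = 0;
        printf("circular chain sane: %d\n", chain_is_sane(0));
        return 0;
}

In the real dma_pool_free() the block being freed is never on the chain, so the
in-kernel comparison can use pool->allocation directly.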

Signed-off-by: Tony Battersby
---
--- linux/mm/dmapool.c.orig	2018-08-02 10:14:25.000000000 -0400
+++ linux/mm/dmapool.c	2018-08-02 10:16:17.000000000 -0400
@@ -449,16 +449,35 @@ void dma_pool_free(struct dma_pool *pool
 	{
 		void *page_vaddr = vaddr - offset;
 		unsigned int chain = page->dma_free_o;
+		size_t total_free = 0;
+
 		while (chain < pool->allocation) {
-			if (chain != offset) {
-				chain = *(int *)(page_vaddr + chain);
-				continue;
+			if (unlikely(chain == offset)) {
+				spin_unlock_irqrestore(&pool->lock, flags);
+				dev_err(pool->dev,
+					"dma_pool_free %s, dma %pad already free\n",
+					pool->name, &dma);
+				return;
+			}
+
+			/*
+			 * The calculation of the number of blocks per
+			 * allocation is actually more complicated than this
+			 * because of the boundary value.  But this comparison
+			 * does not need to be exact; it just needs to prevent
+			 * an endless loop in case a buggy driver causes a
+			 * circular loop in the freelist.
+			 */
+			total_free += pool->size;
+			if (unlikely(total_free >= pool->allocation)) {
+				spin_unlock_irqrestore(&pool->lock, flags);
+				dev_err(pool->dev,
+					"dma_pool_free %s, freelist corrupted\n",
+					pool->name);
+				return;
 			}
-			spin_unlock_irqrestore(&pool->lock, flags);
-			dev_err(pool->dev,
-				"dma_pool_free %s, dma %pad already free\n",
-				pool->name, &dma);
-			return;
+
+			chain = *(int *)(page_vaddr + chain);
 		}
 	}
 	memset(vaddr, POOL_POISON_FREED, pool->size);

From patchwork Thu Aug 2 20:01:12 2018
From: Tony Battersby
Subject: [PATCH v2 8/9] dmapool: reduce footprint in struct page
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash,
 Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org,
 linux-mm@kvack.org, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Message-ID: <0ccfd31b-0a3f-9ae8-85c8-e176cd5453a9@cybernetics.com>
Date: Thu, 2 Aug 2018 16:01:12 -0400

This is my attempt to shrink 'dma_free_o' and 'dma_in_use' in 'struct
page' (originally 'offset' and 'in_use' in 'struct dma_page') to 16-bit
so that it is unnecessary to use the '_mapcount' field of 'struct
page'.  However, it adds complexity and makes allocating and freeing up
to 20% slower for little gain, so I am NOT recommending that it be
merged at this time.  I am posting it just for reference in case
someone finds it useful in the future.

The main difficulty is supporting archs that have PAGE_SIZE > 64 KiB,
for which a 16-bit byte offset is insufficient to cover the entire
page.  So I took the approach of converting everything from a "byte
offset" into a "block index".  That way the code can split any
PAGE_SIZE into as many as 65535 blocks (one 16-bit index value is
reserved for the list terminator).  For example, with PAGE_SIZE of
1 MiB, you get 65535 blocks for 'size' <= 16.  But that introduces a
lot of ugly math due to the 'boundary' checking, which makes the code
slower and more complex.

I wrote a standalone program that iterates over all the combinations of
PAGE_SIZE, 'size', and 'boundary', and performs a series of consistency
checks on pool_blk_idx_to_offset(), pool_offset_to_blk_idx(), and
pool_initialize_free_block_list().  The math may be ugly but I am
pretty sure it is correct.
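The same round-trip properties can be checked in a few lines of user-space C.
The sketch below uses one made-up geometry (4 KiB allocation, 48-byte blocks,
128-byte boundary) rather than sweeping all combinations as the standalone
program described above does, and the helper names only mirror, not reproduce,
the kernel functions.

#include <assert.h>
#include <stdio.h>

/* Example geometry (made up): 4 KiB allocation, 48-byte blocks,
 * and a 128-byte boundary that no block may cross. */
#define ALLOCATION     4096u
#define BLK_SIZE       48u
#define BOUNDARY_SHIFT 7u                      /* boundary = 128 */
#define BOUNDARY       (1u << BOUNDARY_SHIFT)

static const unsigned int blks_per_boundary = BOUNDARY / BLK_SIZE;
static const unsigned int blks_per_alloc =
        (ALLOCATION / BOUNDARY) * (BOUNDARY / BLK_SIZE) +
        (ALLOCATION % BOUNDARY) / BLK_SIZE;

static unsigned int blk_idx_to_offset(unsigned int idx)
{
        return ((idx / blks_per_boundary) << BOUNDARY_SHIFT) +
               (idx % blks_per_boundary) * BLK_SIZE;
}

/* Returns >= blks_per_alloc if the offset is not the start of a block. */
static unsigned int offset_to_blk_idx(unsigned int offset)
{
        unsigned int in_boundary = offset & (BOUNDARY - 1);
        unsigned int idx_in_boundary = in_boundary / BLK_SIZE;

        if (in_boundary % BLK_SIZE != 0 ||
            idx_in_boundary >= blks_per_boundary)
                return blks_per_alloc;
        return (offset >> BOUNDARY_SHIFT) * blks_per_boundary +
               idx_in_boundary;
}

int main(void)
{
        unsigned int idx;

        for (idx = 0; idx < blks_per_alloc; idx++) {
                unsigned int off = blk_idx_to_offset(idx);

                /* Round trip, stay inside the allocation, never cross
                 * a boundary. */
                assert(offset_to_blk_idx(off) == idx);
                assert(off + BLK_SIZE <= ALLOCATION);
                assert(off / BOUNDARY == (off + BLK_SIZE - 1) / BOUNDARY);
        }
        /* A misaligned offset must be rejected. */
        assert(offset_to_blk_idx(BLK_SIZE / 2) >= blks_per_alloc);
        printf("%u blocks per allocation, all checks passed\n",
               blks_per_alloc);
        return 0;
}

Sweeping PAGE_SIZE, 'size', and 'boundary' over ranges instead of fixed
constants turns this into essentially the consistency check described above.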
One of the nice things about this is that dma_pool_free() can do some additional sanity checks: *) Check that the offset of the passed-in address corresponds to a valid block offset. *) With DMAPOOL_DEBUG enabled, check that the number of blocks in the freelist exactly matches the number that should be there. This improves the debug check I added in a previous patch by adding the calculation for pool->blks_per_alloc. NOT for merging. --- linux/include/linux/mm_types.h.orig 2018-08-01 12:25:25.000000000 -0400 +++ linux/include/linux/mm_types.h 2018-08-01 12:25:52.000000000 -0400 @@ -156,7 +156,8 @@ struct page { struct { /* dma_pool pages */ struct list_head dma_list; dma_addr_t dma; - unsigned int dma_free_o; + unsigned short dma_free_idx; + unsigned short dma_in_use; }; /** @rcu_head: You can use this to free a page by RCU. */ @@ -180,8 +181,6 @@ struct page { unsigned int active; /* SLAB */ int units; /* SLOB */ - - unsigned int dma_in_use; /* dma_pool pages */ }; /* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */ --- linux/mm/dmapool.c.orig 2018-08-02 14:02:42.000000000 -0400 +++ linux/mm/dmapool.c 2018-08-02 14:03:31.000000000 -0400 @@ -51,16 +51,25 @@ #define DMAPOOL_DEBUG 1 #endif +/* + * This matches the type of struct page::dma_free_idx, which is 16-bit to + * conserve space in struct page. + */ +typedef unsigned short pool_idx_t; +#define POOL_IDX_MAX USHRT_MAX + struct dma_pool { /* the pool */ #define POOL_FULL_IDX 0 #define POOL_AVAIL_IDX 1 #define POOL_N_LISTS 2 struct list_head page_list[POOL_N_LISTS]; spinlock_t lock; - size_t size; struct device *dev; - size_t allocation; - size_t boundary; + unsigned int size; + unsigned int allocation; + unsigned int boundary_shift; + unsigned int blks_per_boundary; + unsigned int blks_per_alloc; char name[32]; struct list_head pools; }; @@ -103,9 +112,9 @@ show_pools(struct device *dev, struct de spin_unlock_irq(&pool->lock); /* per-pool info, no real statistics yet */ - temp = scnprintf(next, size, "%-16s %4u %4zu %4zu %2u\n", + temp = scnprintf(next, size, "%-16s %4u %4u %4u %2u\n", pool->name, blocks, - pages * (pool->allocation / pool->size), + pages * pool->blks_per_alloc, pool->size, pages); size -= temp; next += temp; @@ -141,6 +150,7 @@ static DEVICE_ATTR(pools, 0444, show_pool struct dma_pool *dma_pool_create(const char *name, struct device *dev, size_t size, size_t align, size_t boundary) { + unsigned int boundary_shift; struct dma_pool *retval; size_t allocation; bool empty = false; @@ -150,10 +160,10 @@ struct dma_pool *dma_pool_create(const c else if (align & (align - 1)) return NULL; - if (size == 0) + if (size == 0 || size > SZ_2G) return NULL; - else if (size < 4) - size = 4; + else if (size < sizeof(pool_idx_t)) + size = sizeof(pool_idx_t); if ((size % align) != 0) size = ALIGN(size, align); @@ -165,6 +175,9 @@ struct dma_pool *dma_pool_create(const c else if ((boundary < size) || (boundary & (boundary - 1))) return NULL; + boundary_shift = get_count_order_long(min(boundary, allocation)); + boundary = 1U << boundary_shift; + retval = kmalloc_node(sizeof(*retval), GFP_KERNEL, dev_to_node(dev)); if (!retval) return retval; @@ -177,8 +190,29 @@ struct dma_pool *dma_pool_create(const c INIT_LIST_HEAD(&retval->page_list[1]); spin_lock_init(&retval->lock); retval->size = size; - retval->boundary = boundary; retval->allocation = allocation; + retval->boundary_shift = boundary_shift; + retval->blks_per_boundary = boundary / size; + retval->blks_per_alloc = + (allocation / boundary) * retval->blks_per_boundary + + 
(allocation % boundary) / size; + if (boundary >= allocation || boundary % size == 0) { + /* + * When the blocks are packed together, an individual block + * will never cross the boundary, so the boundary doesn't + * matter in this case. Enable some faster codepaths that skip + * boundary calculations for a small speedup. + */ + retval->blks_per_boundary = 0; + } + if (retval->blks_per_alloc > POOL_IDX_MAX) { + /* + * This would only affect archs with large PAGE_SIZE. Limit + * the total number of blocks per allocation to avoid + * overflowing dma_in_use and dma_free_idx. + */ + retval->blks_per_alloc = POOL_IDX_MAX; + } INIT_LIST_HEAD(&retval->pools); @@ -214,20 +248,73 @@ struct dma_pool *dma_pool_create(const c } EXPORT_SYMBOL(dma_pool_create); +/* + * Convert the index of a block of size pool->size to its offset within an + * allocated chunk of memory of size pool->allocation. + */ +static unsigned int pool_blk_idx_to_offset(struct dma_pool *pool, + unsigned int blk_idx) +{ + unsigned int offset; + + if (pool->blks_per_boundary == 0) { + offset = blk_idx * pool->size; + } else { + offset = ((blk_idx / pool->blks_per_boundary) << + pool->boundary_shift) + + (blk_idx % pool->blks_per_boundary) * pool->size; + } + return offset; +} + +/* + * Convert an offset within an allocated chunk of memory of size + * pool->allocation to the index of the possibly-smaller block of size + * pool->size. If the given offset is not located at the beginning of a valid + * block, then the return value will be >= pool->blks_per_alloc. + */ +static unsigned int pool_offset_to_blk_idx(struct dma_pool *pool, + unsigned int offset) +{ + unsigned int blk_idx; + + if (pool->blks_per_boundary == 0) { + blk_idx = (likely(offset % pool->size == 0)) + ? (offset / pool->size) + : pool->blks_per_alloc; + } else { + unsigned int offset_within_boundary = + offset & ((1U << pool->boundary_shift) - 1); + unsigned int idx_within_boundary = + offset_within_boundary / pool->size; + + if (likely(offset_within_boundary % pool->size == 0 && + idx_within_boundary < pool->blks_per_boundary)) { + blk_idx = (offset >> pool->boundary_shift) * + pool->blks_per_boundary + + idx_within_boundary; + } else { + blk_idx = pool->blks_per_alloc; + } + } + return blk_idx; +} + static void pool_initialize_free_block_list(struct dma_pool *pool, void *vaddr) { + unsigned int next_boundary = 1U << pool->boundary_shift; unsigned int offset = 0; - unsigned int next_boundary = pool->boundary; + unsigned int i; + + for (i = 0; i < pool->blks_per_alloc; i++) { + *(pool_idx_t *)(vaddr + offset) = (pool_idx_t) i + 1; - do { - unsigned int next = offset + pool->size; - if (unlikely((next + pool->size) > next_boundary)) { - next = next_boundary; - next_boundary += pool->boundary; + offset += pool->size; + if (unlikely((offset + pool->size) > next_boundary)) { + offset = next_boundary; + next_boundary += 1U << pool->boundary_shift; } - *(int *)(vaddr + offset) = next; - offset = next; - } while (offset < pool->allocation); + } } static struct page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags) @@ -248,7 +335,7 @@ static struct page *pool_alloc_page(stru page = virt_to_page(vaddr); page->dma = dma; - page->dma_free_o = 0; + page->dma_free_idx = 0; page->dma_in_use = 0; return page; @@ -272,8 +359,8 @@ static void pool_free_page(struct dma_po page->dma_list.next = NULL; page->dma_list.prev = NULL; page->dma = 0; - page->dma_free_o = 0; - page_mapcount_reset(page); /* clear dma_in_use */ + page->dma_free_idx = 0; + page->dma_in_use = 0; if (busy) { 
dev_err(pool->dev, @@ -342,9 +429,10 @@ EXPORT_SYMBOL(dma_pool_destroy); void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags, dma_addr_t *handle) { + unsigned int blk_idx; + unsigned int offset; unsigned long flags; struct page *page; - size_t offset; void *retval; void *vaddr; @@ -370,9 +458,10 @@ void *dma_pool_alloc(struct dma_pool *po ready: vaddr = page_to_virt(page); page->dma_in_use++; - offset = page->dma_free_o; - page->dma_free_o = *(int *)(vaddr + offset); - if (page->dma_free_o >= pool->allocation) { + blk_idx = page->dma_free_idx; + offset = pool_blk_idx_to_offset(pool, blk_idx); + page->dma_free_idx = *(pool_idx_t *)(vaddr + offset); + if (page->dma_free_idx >= pool->blks_per_alloc) { /* Move page from the "available" list to the "full" list. */ list_del(&page->dma_list); list_add(&page->dma_list, &pool->page_list[POOL_FULL_IDX]); @@ -383,8 +472,8 @@ void *dma_pool_alloc(struct dma_pool *po { int i; u8 *data = retval; - /* page->dma_free_o is stored in first 4 bytes */ - for (i = sizeof(page->dma_free_o); i < pool->size; i++) { + /* a pool_idx_t is stored at the beginning of the block */ + for (i = sizeof(pool_idx_t); i < pool->size; i++) { if (data[i] == POOL_POISON_FREED) continue; dev_err(pool->dev, @@ -426,6 +515,7 @@ void dma_pool_free(struct dma_pool *pool struct page *page; unsigned long flags; unsigned int offset; + unsigned int blk_idx; if (unlikely(!virt_addr_valid(vaddr))) { dev_err(pool->dev, @@ -438,21 +528,28 @@ void dma_pool_free(struct dma_pool *pool offset = offset_in_page(vaddr); if (unlikely((dma - page->dma) != offset)) { + bad_vaddr: dev_err(pool->dev, "dma_pool_free %s, %p (bad vaddr)/%pad (or bad dma)\n", pool->name, vaddr, &dma); return; } + blk_idx = pool_offset_to_blk_idx(pool, offset); + if (unlikely(blk_idx >= pool->blks_per_alloc)) + goto bad_vaddr; + spin_lock_irqsave(&pool->lock, flags); #ifdef DMAPOOL_DEBUG { void *page_vaddr = vaddr - offset; - unsigned int chain = page->dma_free_o; - size_t total_free = 0; + unsigned int chain_idx = page->dma_free_idx; + unsigned int n_free = 0; + + while (chain_idx < pool->blks_per_alloc) { + unsigned int chain_offset; - while (chain < pool->allocation) { - if (unlikely(chain == offset)) { + if (unlikely(chain_idx == blk_idx)) { spin_unlock_irqrestore(&pool->lock, flags); dev_err(pool->dev, "dma_pool_free %s, dma %pad already free\n", @@ -461,15 +558,15 @@ void dma_pool_free(struct dma_pool *pool } /* - * The calculation of the number of blocks per - * allocation is actually more complicated than this - * because of the boundary value. But this comparison - * does not need to be exact; it just needs to prevent - * an endless loop in case a buggy driver causes a - * circular loop in the freelist. + * A buggy driver could corrupt the freelist by + * use-after-free, buffer overflow, etc. Besides + * checking for corruption, this also prevents an + * endless loop in case corruption causes a circular + * loop in the freelist. 
 			 */
-			total_free += pool->size;
-			if (unlikely(total_free >= pool->allocation)) {
+			if (unlikely(++n_free + page->dma_in_use >
+				     pool->blks_per_alloc)) {
+ freelist_corrupt:
 				spin_unlock_irqrestore(&pool->lock, flags);
 				dev_err(pool->dev,
 					"dma_pool_free %s, freelist corrupted\n",
@@ -477,20 +574,24 @@ void dma_pool_free(struct dma_pool *pool
 					pool->name);
 				return;
 			}
-			chain = *(int *)(page_vaddr + chain);
+			chain_offset = pool_blk_idx_to_offset(pool, chain_idx);
+			chain_idx =
+				*(pool_idx_t *) (page_vaddr + chain_offset);
 		}
+		if (n_free + page->dma_in_use != pool->blks_per_alloc)
+			goto freelist_corrupt;
 	}
 	memset(vaddr, POOL_POISON_FREED, pool->size);
 #endif
 	page->dma_in_use--;
-	if (page->dma_free_o >= pool->allocation) {
+	if (page->dma_free_idx >= pool->blks_per_alloc) {
 		/* Move page from the "full" list to the "available" list. */
 		list_del(&page->dma_list);
 		list_add(&page->dma_list, &pool->page_list[POOL_AVAIL_IDX]);
 	}
-	*(int *)vaddr = page->dma_free_o;
-	page->dma_free_o = offset;
+	*(pool_idx_t *)vaddr = page->dma_free_idx;
+	page->dma_free_idx = blk_idx;
 	/*
 	 * Resist a temptation to do
 	 *	if (!is_page_busy(page)) pool_free_page(pool, page);

From patchwork Thu Aug 2 20:01:47 2018
From: Tony Battersby
Subject: [PATCH v2 9/9] [SCSI] mpt3sas: replace chain_dma_pool
To: Matthew Wilcox, Christoph Hellwig, Marek Szyprowski, Sathya Prakash,
 Chaitra P B, Suganath Prabu Subramani, iommu@lists.linux-foundation.org,
 linux-mm@kvack.org, linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Date: Thu, 2 Aug 2018 16:01:47 -0400

Replace chain_dma_pool with direct calls to dma_alloc_coherent() and
dma_free_coherent().  Since the chain lookup can involve hundreds of
thousands of allocations, it is worthwhile to avoid the overhead of the
dma_pool API.

Signed-off-by: Tony Battersby
---
No changes since v1.

The original code called _base_release_memory_pools() before "goto out"
if dma_pool_alloc() failed, but this was unnecessary because
mpt3sas_base_attach() will call _base_release_memory_pools() after
"goto out_free_resources".  It may have been that way because the
out-of-tree vendor driver (from
https://www.broadcom.com/support/download-search) has a
slightly-more-complicated error handler there that adjusts
max_request_credit, calls _base_release_memory_pools() and then does
"goto retry_allocation" under some circumstances, but that is missing
from the in-tree driver.

diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
index 569392d..2cb567a 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -4224,6 +4224,134 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc,
 }
 
 /**
+ * _base_release_chain_lookup - release chain_lookup memory pools
+ * @ioc: per adapter object
+ *
+ * Free memory allocated from _base_allocate_chain_lookup.
+ */ +static void +_base_release_chain_lookup(struct MPT3SAS_ADAPTER *ioc) +{ + unsigned int chains_avail = 0; + struct chain_tracker *ct; + int i, j; + + if (!ioc->chain_lookup) + return; + + /* + * NOTE + * + * To make this code easier to understand and maintain, the for loops + * and the management of the chains_avail value are designed to be + * similar to the _base_allocate_chain_lookup() function. That way, + * the code for freeing the memory is similar to the code for + * allocating the memory. + */ + for (i = 0; i < ioc->scsiio_depth; i++) { + if (!ioc->chain_lookup[i].chains_per_smid) + break; + + for (j = ioc->chains_per_prp_buffer; + j < ioc->chains_needed_per_io; j++) { + /* + * If chains_avail is 0, then the chain represents a + * real allocation, so free it. + * + * If chains_avail is nonzero, then the chain was + * initialized at an offset from a previous allocation, + * so don't free it. + */ + if (chains_avail == 0) { + ct = &ioc->chain_lookup[i].chains_per_smid[j]; + if (ct->chain_buffer) + dma_free_coherent( + &ioc->pdev->dev, + ioc->chain_allocation_sz, + ct->chain_buffer, + ct->chain_buffer_dma); + chains_avail = ioc->chains_per_allocation; + } + chains_avail--; + } + kfree(ioc->chain_lookup[i].chains_per_smid); + } + + kfree(ioc->chain_lookup); + ioc->chain_lookup = NULL; +} + +/** + * _base_allocate_chain_lookup - allocate chain_lookup memory pools + * @ioc: per adapter object + * @total_sz: external value that tracks total amount of memory allocated + * + * Return: 0 success, anything else error + */ +static int +_base_allocate_chain_lookup(struct MPT3SAS_ADAPTER *ioc, u32 *total_sz) +{ + unsigned int aligned_chain_segment_sz; + const unsigned int align = 16; + unsigned int chains_avail = 0; + struct chain_tracker *ct; + dma_addr_t dma_addr = 0; + void *vaddr = NULL; + int i, j; + + /* Round up the allocation size for alignment. */ + aligned_chain_segment_sz = ioc->chain_segment_sz; + if (aligned_chain_segment_sz % align != 0) + aligned_chain_segment_sz = + ALIGN(aligned_chain_segment_sz, align); + + /* Allocate a page of chain buffers at a time. */ + ioc->chain_allocation_sz = + max_t(unsigned int, aligned_chain_segment_sz, PAGE_SIZE); + + /* Calculate how many chain buffers we can get from one allocation. */ + ioc->chains_per_allocation = + ioc->chain_allocation_sz / aligned_chain_segment_sz; + + for (i = 0; i < ioc->scsiio_depth; i++) { + for (j = ioc->chains_per_prp_buffer; + j < ioc->chains_needed_per_io; j++) { + /* + * Check if there are any chain buffers left in the + * previously-allocated block. + */ + if (chains_avail == 0) { + /* Allocate a new block of chain buffers. */ + vaddr = dma_alloc_coherent( + &ioc->pdev->dev, + ioc->chain_allocation_sz, + &dma_addr, + GFP_KERNEL); + if (!vaddr) { + pr_err(MPT3SAS_FMT + "chain_lookup: dma_alloc_coherent failed\n", + ioc->name); + return -1; + } + chains_avail = ioc->chains_per_allocation; + } + + ct = &ioc->chain_lookup[i].chains_per_smid[j]; + ct->chain_buffer = vaddr; + ct->chain_buffer_dma = dma_addr; + + /* Go to the next chain buffer in the block. 
*/ + vaddr += aligned_chain_segment_sz; + dma_addr += aligned_chain_segment_sz; + *total_sz += ioc->chain_segment_sz; + chains_avail--; + } + } + + return 0; +} + +/** * _base_release_memory_pools - release memory * @ioc: per adapter object * @@ -4235,8 +4363,6 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc, _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc) { int i = 0; - int j = 0; - struct chain_tracker *ct; struct reply_post_struct *rps; dexitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name, @@ -4326,22 +4452,7 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc, kfree(ioc->hpr_lookup); kfree(ioc->internal_lookup); - if (ioc->chain_lookup) { - for (i = 0; i < ioc->scsiio_depth; i++) { - for (j = ioc->chains_per_prp_buffer; - j < ioc->chains_needed_per_io; j++) { - ct = &ioc->chain_lookup[i].chains_per_smid[j]; - if (ct && ct->chain_buffer) - dma_pool_free(ioc->chain_dma_pool, - ct->chain_buffer, - ct->chain_buffer_dma); - } - kfree(ioc->chain_lookup[i].chains_per_smid); - } - dma_pool_destroy(ioc->chain_dma_pool); - kfree(ioc->chain_lookup); - ioc->chain_lookup = NULL; - } + _base_release_chain_lookup(ioc); } /** @@ -4784,29 +4895,8 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc, total_sz += sz * ioc->scsiio_depth; } - ioc->chain_dma_pool = dma_pool_create("chain pool", &ioc->pdev->dev, - ioc->chain_segment_sz, 16, 0); - if (!ioc->chain_dma_pool) { - pr_err(MPT3SAS_FMT "chain_dma_pool: dma_pool_create failed\n", - ioc->name); + if (_base_allocate_chain_lookup(ioc, &total_sz)) goto out; - } - for (i = 0; i < ioc->scsiio_depth; i++) { - for (j = ioc->chains_per_prp_buffer; - j < ioc->chains_needed_per_io; j++) { - ct = &ioc->chain_lookup[i].chains_per_smid[j]; - ct->chain_buffer = dma_pool_alloc( - ioc->chain_dma_pool, GFP_KERNEL, - &ct->chain_buffer_dma); - if (!ct->chain_buffer) { - pr_err(MPT3SAS_FMT "chain_lookup: " - " pci_pool_alloc failed\n", ioc->name); - _base_release_memory_pools(ioc); - goto out; - } - } - total_sz += ioc->chain_segment_sz; - } dinitprintk(ioc, pr_info(MPT3SAS_FMT "chain pool depth(%d), frame_size(%d), pool_size(%d kB)\n", diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h index f02974c..7ee81d5 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_base.h +++ b/drivers/scsi/mpt3sas/mpt3sas_base.h @@ -1298,7 +1298,6 @@ struct MPT3SAS_ADAPTER { /* chain */ struct chain_lookup *chain_lookup; struct list_head free_chain_list; - struct dma_pool *chain_dma_pool; ulong chain_pages; u16 max_sges_in_main_message; u16 max_sges_in_chain_message; @@ -1306,6 +1305,8 @@ struct MPT3SAS_ADAPTER { u32 chain_depth; u16 chain_segment_sz; u16 chains_per_prp_buffer; + u32 chain_allocation_sz; + u32 chains_per_allocation; /* hi-priority queue */ u16 hi_priority_smid;
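The bookkeeping that replaces the dma_pool is small: carve
chains_per_allocation sub-buffers out of each large block, hand out the next
sub-buffer until chains_avail reaches zero, then allocate the next block; the
release path walks the same pattern and only frees real allocations. A
stand-alone sketch of that pattern, with malloc()/free() standing in for
dma_alloc_coherent()/dma_free_coherent(), made-up sizes (CHAIN_SEGMENT_SZ,
ALLOCATION_SZ, NUM_CHAINS), and the driver's alignment rounding skipped, is:

#include <stdio.h>
#include <stdlib.h>

#define CHAIN_SEGMENT_SZ 128u    /* size each request needs per chain      */
#define ALLOCATION_SZ    4096u   /* one dma_alloc_coherent()-sized block   */
#define NUM_CHAINS       100u    /* chains_needed_per_io * scsiio_depth    */
#define CHAINS_PER_ALLOCATION (ALLOCATION_SZ / CHAIN_SEGMENT_SZ)

struct chain_tracker {
        void *chain_buffer;            /* sub-buffer handed to one request */
};

static struct chain_tracker chains[NUM_CHAINS];
static void *blocks[(NUM_CHAINS + CHAINS_PER_ALLOCATION - 1) /
                    CHAINS_PER_ALLOCATION];
static unsigned int nr_blocks;

static int allocate_chain_lookup(void)
{
        unsigned int chains_avail = 0;
        unsigned char *vaddr = NULL;
        unsigned int i;

        for (i = 0; i < NUM_CHAINS; i++) {
                if (chains_avail == 0) {
                        /* The driver calls dma_alloc_coherent() here. */
                        vaddr = malloc(ALLOCATION_SZ);
                        if (!vaddr)
                                return -1;
                        blocks[nr_blocks++] = vaddr;
                        chains_avail = CHAINS_PER_ALLOCATION;
                }
                chains[i].chain_buffer = vaddr;
                vaddr += CHAIN_SEGMENT_SZ;  /* next sub-buffer in the block */
                chains_avail--;
        }
        return 0;
}

static void release_chain_lookup(void)
{
        unsigned int i;

        /* Only whole blocks are freed; sub-buffers were never allocations. */
        for (i = 0; i < nr_blocks; i++)
                free(blocks[i]);
        nr_blocks = 0;
}

int main(void)
{
        if (allocate_chain_lookup())
                return 1;
        printf("%u chains carved out of %u block(s)\n", NUM_CHAINS, nr_blocks);
        release_chain_lookup();
        return 0;
}

The simplification here is that the sketch remembers each large block in a
separate array, whereas the driver recomputes block boundaries from
chains_avail while releasing, so its free path mirrors its allocation loop.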