From patchwork Wed Jan 17 20:22:00 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10172571
From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Date: Wed, 17 Jan 2018 12:22:00 -0800
Message-Id: <20180117202203.19756-97-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>
Cc: linux-s390@vger.kernel.org, David Howells, linux-nilfs@vger.kernel.org,
 Matthew Wilcox, linux-sh@vger.kernel.org, intel-gfx@lists.freedesktop.org,
 linux-usb@vger.kernel.org, linux-remoteproc@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-mm@kvack.org, iommu@lists.linux-foundation.org, Stefano Stabellini,
 linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org, Bjorn Andersson,
 linux-btrfs@vger.kernel.org
Subject: [Intel-gfx] [PATCH v6 96/99] dma-debug: Convert to XArray

From: Matthew Wilcox

This is an unusual way to use the xarray tags.  If any other users come up,
we can add an xas_get_tags() / xas_set_tags() API, but until then I don't
want to encourage this kind of abuse.

Signed-off-by: Matthew Wilcox
---
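Illustration (below the fold, not part of the patch): the "abuse" referred
to above is that the per-cacheline overlap count lives entirely in the
entry's tag bits, one bit per tag, rather than in the entry itself.  A
minimal sketch of the idea, reusing the xas_get_tag()/xas_set_tag()/
xas_clear_tag() and XA_MAX_TAGS names from this series; the helper names
below are hypothetical, and the xa_state is assumed to point at an existing
entry with the xa_lock held, as in the patch:

#include <linux/xarray.h>

/* Read a small counter packed into the tag bits of the entry at xas->xa_index. */
static unsigned int tag_counter_read(struct xa_state *xas)
{
	unsigned int count = 0;
	xa_tag_t tag;

	/* Each tag contributes one bit of the counter. */
	for (tag = 0; tag < XA_MAX_TAGS; tag++)
		if (xas_get_tag(xas, tag))
			count |= 1U << tag;
	return count;
}

/* Write the counter back; it can represent at most (1 << XA_MAX_TAGS) - 1. */
static void tag_counter_write(struct xa_state *xas, unsigned int count)
{
	xa_tag_t tag;

	for (tag = 0; tag < XA_MAX_TAGS; tag++) {
		if (count & (1U << tag))
			xas_set_tag(xas, tag);
		else
			xas_clear_tag(xas, tag);
	}
}

dma-debug gets a per-index counter this way without enlarging the entry; a
generic xas_get_tags()/xas_set_tags() pair would presumably collapse each
of these loops into a single call.
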
 lib/dma-debug.c | 105 +++++++++++++++++++++++++-------------------------------
 1 file changed, 46 insertions(+), 59 deletions(-)

diff --git a/lib/dma-debug.c b/lib/dma-debug.c
index fb4af570ce04..965b3837d060 100644
--- a/lib/dma-debug.c
+++ b/lib/dma-debug.c
@@ -22,7 +22,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -30,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -465,9 +465,8 @@ EXPORT_SYMBOL(debug_dma_dump_mappings);
  * At any time debug_dma_assert_idle() can be called to trigger a
  * warning if any cachelines in the given page are in the active set.
  */
-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT);
-static DEFINE_SPINLOCK(radix_lock);
-#define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+static DEFINE_XARRAY_FLAGS(dma_active_cacheline, XA_FLAGS_LOCK_IRQ);
+#define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << XA_MAX_TAGS) - 1)
 #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
 #define CACHELINES_PER_PAGE (1 << CACHELINE_PER_PAGE_SHIFT)
 
@@ -477,37 +476,40 @@ static phys_addr_t to_cacheline_number(struct dma_debug_entry *entry)
 		(entry->offset >> L1_CACHE_SHIFT);
 }
 
-static int active_cacheline_read_overlap(phys_addr_t cln)
+static unsigned int active_cacheline_read_overlap(struct xa_state *xas)
 {
-	int overlap = 0, i;
+	unsigned int tags = 0;
+	xa_tag_t tag;
 
-	for (i = RADIX_TREE_MAX_TAGS - 1; i >= 0; i--)
-		if (radix_tree_tag_get(&dma_active_cacheline, cln, i))
-			overlap |= 1 << i;
-	return overlap;
+	for (tag = 0; tag < XA_MAX_TAGS; tag++)
+		if (xas_get_tag(xas, tag))
+			tags |= 1U << tag;
+
+	return tags;
 }
 
-static int active_cacheline_set_overlap(phys_addr_t cln, int overlap)
+static int active_cacheline_set_overlap(struct xa_state *xas, int overlap)
 {
-	int i;
+	xa_tag_t tag;
 
 	if (overlap > ACTIVE_CACHELINE_MAX_OVERLAP || overlap < 0)
 		return overlap;
 
-	for (i = RADIX_TREE_MAX_TAGS - 1; i >= 0; i--)
-		if (overlap & 1 << i)
-			radix_tree_tag_set(&dma_active_cacheline, cln, i);
+	for (tag = 0; tag < XA_MAX_TAGS; tag++) {
+		if (overlap & (1U << tag))
+			xas_set_tag(xas, tag);
 		else
-			radix_tree_tag_clear(&dma_active_cacheline, cln, i);
+			xas_clear_tag(xas, tag);
+	}
 
 	return overlap;
 }
 
-static void active_cacheline_inc_overlap(phys_addr_t cln)
+static void active_cacheline_inc_overlap(struct xa_state *xas)
 {
-	int overlap = active_cacheline_read_overlap(cln);
+	int overlap = active_cacheline_read_overlap(xas);
 
-	overlap = active_cacheline_set_overlap(cln, ++overlap);
+	overlap = active_cacheline_set_overlap(xas, ++overlap);
 
 	/* If we overflowed the overlap counter then we're potentially
 	 * leaking dma-mappings.  Otherwise, if maps and unmaps are
@@ -517,21 +519,22 @@ static void active_cacheline_inc_overlap(phys_addr_t cln)
 	 */
 	WARN_ONCE(overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
 		  "DMA-API: exceeded %d overlapping mappings of cacheline %pa\n",
-		  ACTIVE_CACHELINE_MAX_OVERLAP, &cln);
+		  ACTIVE_CACHELINE_MAX_OVERLAP, &xas->xa_index);
 }
 
-static int active_cacheline_dec_overlap(phys_addr_t cln)
+static int active_cacheline_dec_overlap(struct xa_state *xas)
 {
-	int overlap = active_cacheline_read_overlap(cln);
+	int overlap = active_cacheline_read_overlap(xas);
 
-	return active_cacheline_set_overlap(cln, --overlap);
+	return active_cacheline_set_overlap(xas, --overlap);
 }
 
 static int active_cacheline_insert(struct dma_debug_entry *entry)
 {
 	phys_addr_t cln = to_cacheline_number(entry);
+	XA_STATE(xas, &dma_active_cacheline, cln);
 	unsigned long flags;
-	int rc;
+	struct dma_debug_entry *exists;
 
 	/* If the device is not writing memory then we don't have any
 	 * concerns about the cpu consuming stale data.  This mitigates
@@ -540,32 +543,32 @@ static int active_cacheline_insert(struct dma_debug_entry *entry)
 	if (entry->direction == DMA_TO_DEVICE)
 		return 0;
 
-	spin_lock_irqsave(&radix_lock, flags);
-	rc = radix_tree_insert(&dma_active_cacheline, cln, entry);
-	if (rc == -EEXIST)
-		active_cacheline_inc_overlap(cln);
-	spin_unlock_irqrestore(&radix_lock, flags);
+	xas_lock_irqsave(&xas, flags);
+	exists = xas_create(&xas);
+	if (exists)
+		active_cacheline_inc_overlap(&xas);
+	else
+		xas_store(&xas, entry);
+	xas_unlock_irqrestore(&xas, flags);
 
-	return rc;
+	return xas_error(&xas);
 }
 
 static void active_cacheline_remove(struct dma_debug_entry *entry)
 {
 	phys_addr_t cln = to_cacheline_number(entry);
+	XA_STATE(xas, &dma_active_cacheline, cln);
 	unsigned long flags;
 
 	/* ...mirror the insert case */
 	if (entry->direction == DMA_TO_DEVICE)
 		return;
 
-	spin_lock_irqsave(&radix_lock, flags);
-	/* since we are counting overlaps the final put of the
-	 * cacheline will occur when the overlap count is 0.
-	 * active_cacheline_dec_overlap() returns -1 in that case
-	 */
-	if (active_cacheline_dec_overlap(cln) < 0)
-		radix_tree_delete(&dma_active_cacheline, cln);
-	spin_unlock_irqrestore(&radix_lock, flags);
+	xas_lock_irqsave(&xas, flags);
+	xas_load(&xas);
+	if (active_cacheline_dec_overlap(&xas) < 0)
+		xas_store(&xas, NULL);
+	xas_unlock_irqrestore(&xas, flags);
 }
 
 /**
@@ -578,12 +581,8 @@ static void active_cacheline_remove(struct dma_debug_entry *entry)
  */
 void debug_dma_assert_idle(struct page *page)
 {
-	static struct dma_debug_entry *ents[CACHELINES_PER_PAGE];
-	struct dma_debug_entry *entry = NULL;
-	void **results = (void **) &ents;
-	unsigned int nents, i;
-	unsigned long flags;
-	phys_addr_t cln;
+	struct dma_debug_entry *entry;
+	unsigned long cln;
 
 	if (dma_debug_disabled())
 		return;
@@ -591,21 +590,9 @@ void debug_dma_assert_idle(struct page *page)
 	if (!page)
 		return;
 
-	cln = (phys_addr_t) page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
-	spin_lock_irqsave(&radix_lock, flags);
-	nents = radix_tree_gang_lookup(&dma_active_cacheline, results, cln,
-				       CACHELINES_PER_PAGE);
-	for (i = 0; i < nents; i++) {
-		phys_addr_t ent_cln = to_cacheline_number(ents[i]);
-
-		if (ent_cln == cln) {
-			entry = ents[i];
-			break;
-		} else if (ent_cln >= cln + CACHELINES_PER_PAGE)
-			break;
-	}
-	spin_unlock_irqrestore(&radix_lock, flags);
-
+	cln = page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
+	entry = xa_find(&dma_active_cacheline, &cln,
+			cln + CACHELINES_PER_PAGE - 1, XA_PRESENT);
 	if (!entry)
 		return;