From patchwork Wed Dec 6 00:41:54 2017
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10094301
From: Matthew Wilcox
To: unlisted-recipients:; (no To-header on input)
Cc: Matthew Wilcox, Ross Zwisler, Jens Axboe, Rehas Sachdeva,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
    linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 68/73] brd: Convert to XArray
Date: Tue, 5 Dec 2017 16:41:54 -0800
Message-Id: <20171206004159.3755-69-willy@infradead.org>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20171206004159.3755-1-willy@infradead.org>
References: <20171206004159.3755-1-willy@infradead.org>

From: Matthew Wilcox

Convert brd_pages from a radix tree to an XArray.  Simpler and smaller
code; in particular, another user of radix_tree_preload is eliminated.
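
The core of the conversion is the insert path: radix_tree_preload() plus
spin_lock() plus radix_tree_insert() collapses into a single xa_cmpxchg(),
since the XArray allocates its own nodes under the gfp mask it is handed and
takes the xa_lock internally.  A minimal sketch of the pattern (illustrative
only, not lifted from the patch below; the helper name and the explicit
xa_is_err() check are assumptions layered on top of the XArray API):

/* Sketch: insert a freshly allocated page unless someone beat us to it. */
#include <linux/xarray.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *example_insert_page(struct xarray *xa, unsigned long idx)
{
	struct page *page, *curr;

	page = alloc_page(GFP_NOIO | __GFP_ZERO);
	if (!page)
		return NULL;
	page->index = idx;

	/* No preload, no external lock: xa_cmpxchg() handles both internally. */
	curr = xa_cmpxchg(xa, idx, NULL, page, GFP_NOIO);
	if (xa_is_err(curr)) {		/* XArray node allocation failed */
		__free_page(page);
		return NULL;
	}
	if (curr) {			/* lost the race; reuse the existing page */
		__free_page(page);
		return curr;
	}
	return page;
}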
Signed-off-by: Matthew Wilcox
---
 drivers/block/brd.c | 87 ++++++++++++++---------------------------------------
 1 file changed, 23 insertions(+), 64 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 8028a3a7e7fd..4d8ae1b399e6 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -17,7 +17,7 @@
 #include
 #include
 #include
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
 #include
 #include
 #include
@@ -29,9 +29,9 @@
 #define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
 
 /*
- * Each block ramdisk device has a radix_tree brd_pages of pages that stores
- * the pages containing the block device's contents. A brd page's ->index is
- * its offset in PAGE_SIZE units. This is similar to, but in no way connected
+ * Each block ramdisk device has an xarray brd_pages that stores the pages
+ * containing the block device's contents. A brd page's ->index is its
+ * offset in PAGE_SIZE units. This is similar to, but in no way connected
  * with, the kernel's pagecache or buffer cache (which sit above our block
  * device).
  */
@@ -41,13 +41,7 @@ struct brd_device {
 	struct request_queue	*brd_queue;
 	struct gendisk		*brd_disk;
 	struct list_head	brd_list;
-
-	/*
-	 * Backing store of pages and lock to protect it. This is the contents
-	 * of the block device.
-	 */
-	spinlock_t		brd_lock;
-	struct radix_tree_root	brd_pages;
+	struct xarray		brd_pages;
 };
 
 /*
@@ -62,17 +56,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 	 * The page lifetime is protected by the fact that we have opened the
 	 * device node -- brd pages will never be deleted under us, so we
 	 * don't need any further locking or refcounting.
-	 *
-	 * This is strictly true for the radix-tree nodes as well (ie. we
-	 * don't actually need the rcu_read_lock()), however that is not a
-	 * documented feature of the radix-tree API so it is better to be
-	 * safe here (we don't have total exclusion from radix tree updates
-	 * here, only deletes).
 	 */
-	rcu_read_lock();
 	idx = sector >> PAGE_SECTORS_SHIFT; /* sector to page index */
-	page = radix_tree_lookup(&brd->brd_pages, idx);
-	rcu_read_unlock();
+	page = xa_load(&brd->brd_pages, idx);
 
 	BUG_ON(page && page->index != idx);
 
@@ -87,7 +73,7 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 {
 	pgoff_t idx;
-	struct page *page;
+	struct page *curr, *page;
 	gfp_t gfp_flags;
 
 	page = brd_lookup_page(brd, sector);
@@ -108,62 +94,36 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 	if (!page)
 		return NULL;
 
-	if (radix_tree_preload(GFP_NOIO)) {
-		__free_page(page);
-		return NULL;
-	}
-
-	spin_lock(&brd->brd_lock);
 	idx = sector >> PAGE_SECTORS_SHIFT;
 	page->index = idx;
-	if (radix_tree_insert(&brd->brd_pages, idx, page)) {
+	curr = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO);
+	if (curr) {
 		__free_page(page);
-		page = radix_tree_lookup(&brd->brd_pages, idx);
+		page = curr;
 		BUG_ON(!page);
 		BUG_ON(page->index != idx);
 	}
-	spin_unlock(&brd->brd_lock);
-
-	radix_tree_preload_end();
 
 	return page;
 }
 
 /*
- * Free all backing store pages and radix tree. This must only be called when
+ * Free all backing store pages and xarray. This must only be called when
  * there are no other users of the device.
  */
-#define FREE_BATCH 16
 static void brd_free_pages(struct brd_device *brd)
 {
-	unsigned long pos = 0;
-	struct page *pages[FREE_BATCH];
-	int nr_pages;
-
-	do {
-		int i;
-
-		nr_pages = radix_tree_gang_lookup(&brd->brd_pages,
-				(void **)pages, pos, FREE_BATCH);
-
-		for (i = 0; i < nr_pages; i++) {
-			void *ret;
-
-			BUG_ON(pages[i]->index < pos);
-			pos = pages[i]->index;
-			ret = radix_tree_delete(&brd->brd_pages, pos);
-			BUG_ON(!ret || ret != pages[i]);
-			__free_page(pages[i]);
-		}
-
-		pos++;
-
-		/*
-		 * This assumes radix_tree_gang_lookup always returns as
-		 * many pages as possible. If the radix-tree code changes,
-		 * so will this have to.
-		 */
-	} while (nr_pages == FREE_BATCH);
+	XA_STATE(xas, &brd->brd_pages, 0);
+	struct page *page;
+
+	/* lockdep can't know there are no other users */
+	xas_lock(&xas);
+	xas_for_each(&xas, page, ULONG_MAX) {
+		BUG_ON(page->index != xas.xa_index);
+		__free_page(page);
+		xas_store(&xas, NULL);
+	}
+	xas_unlock(&xas);
 }
 
 /*
@@ -373,8 +333,7 @@ static struct brd_device *brd_alloc(int i)
 	if (!brd)
 		goto out;
 	brd->brd_number = i;
-	spin_lock_init(&brd->brd_lock);
-	INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
+	xa_init(&brd->brd_pages);
 	brd->brd_queue = blk_alloc_queue(GFP_KERNEL);
 	if (!brd->brd_queue)
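
For comparison, the same bulk teardown could also be written with the
higher-level helpers; the xas_* form used above is the cheaper choice because
it keeps the xa_lock held and does not re-walk the tree for every erase.  A
minimal sketch (illustrative only; it assumes the xa_for_each()/xa_erase()
helpers as they exist in the mainline XArray API, whose exact signatures
shifted across revisions of this series):

/* Sketch: purge every page from an XArray once no other users remain. */
#include <linux/xarray.h>
#include <linux/mm.h>

static void example_free_pages(struct xarray *xa)
{
	struct page *page;
	unsigned long idx;

	xa_for_each(xa, idx, page) {
		__free_page(page);
		xa_erase(xa, idx);	/* takes and drops the xa_lock per entry */
	}
}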