From patchwork Mon May 20 05:40:11 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10949923
From: "Tobin C. Harding"
To: Andrew Morton, Matthew Wilcox
Cc: "Tobin C. Harding", Roman Gushchin, Alexander Viro, Christoph Hellwig,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Christopher Lameter,
    Miklos Szeredi, Andreas Dilger, Waiman Long, Tycho Andersen,
    Theodore Ts'o, Andi Kleen, David Chinner, Nick Piggin, Rik van Riel,
    Hugh Dickins, Jonathan Corbet, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v5 10/16] xarray: Implement migration function for xa_node objects
Date: Mon, 20 May 2019 15:40:11 +1000
Message-Id: <20190520054017.32299-11-tobin@kernel.org>
In-Reply-To: <20190520054017.32299-1-tobin@kernel.org>
References: <20190520054017.32299-1-tobin@kernel.org>

Recently, Slab Movable Objects (SMO) support was implemented for the SLUB
allocator.  The XArray can take advantage of this by making its xa_node
slab cache objects movable.

Implement a migration function for xa_node objects and activate SMO when
the XArray slab cache is initialised.

This is based on initial code by Matthew Wilcox, modified to work with
slab object migration.

Cc: Matthew Wilcox
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 lib/xarray.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/lib/xarray.c b/lib/xarray.c
index a528a5277c9d..c6b077f59e88 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1993,12 +1993,73 @@ static void xa_node_ctor(void *arg)
 	INIT_LIST_HEAD(&node->private_list);
 }
 
+static void xa_object_migrate(struct xa_node *node, int numa_node)
+{
+	struct xarray *xa = READ_ONCE(node->array);
+	void __rcu **slot;
+	struct xa_node *new_node;
+	int i;
+
+	/* Freed or not yet in tree then skip */
+	if (!xa || xa == XA_RCU_FREE)
+		return;
+
+	new_node = kmem_cache_alloc_node(xa_node_cachep, GFP_KERNEL, numa_node);
+	if (!new_node) {
+		pr_err("%s: slab cache allocation failed\n", __func__);
+		return;
+	}
+
+	xa_lock_irq(xa);
+
+	/* Check again..... */
+	if (xa != node->array) {
+		node = new_node;
+		goto unlock;
+	}
+
+	memcpy(new_node, node, sizeof(struct xa_node));
+
+	if (list_empty(&node->private_list))
+		INIT_LIST_HEAD(&new_node->private_list);
+	else
+		list_replace(&node->private_list, &new_node->private_list);
+
+	for (i = 0; i < XA_CHUNK_SIZE; i++) {
+		void *x = xa_entry_locked(xa, new_node, i);
+
+		if (xa_is_node(x))
+			rcu_assign_pointer(xa_to_node(x)->parent, new_node);
+	}
+	if (!new_node->parent)
+		slot = &xa->xa_head;
+	else
+		slot = &xa_parent_locked(xa, new_node)->slots[new_node->offset];
+	rcu_assign_pointer(*slot, xa_mk_node(new_node));
+
+unlock:
+	xa_unlock_irq(xa);
+	xa_node_free(node);
+	rcu_barrier();
+}
+
+static void xa_migrate(struct kmem_cache *s, void **objects, int nr,
+		       int node, void *_unused)
+{
+	int i;
+
+	for (i = 0; i < nr; i++)
+		xa_object_migrate(objects[i], node);
+}
+
+
 void __init xarray_slabcache_init(void)
 {
 	xa_node_cachep = kmem_cache_create("xarray_node",
 					   sizeof(struct xa_node), 0,
 					   SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
 					   xa_node_ctor);
+	kmem_cache_setup_mobility(xa_node_cachep, NULL, xa_migrate);
 }
 
 #ifdef XA_DEBUG
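
For context, here is a minimal sketch (not part of this patch; the xarray
instance and function names below are made up for illustration) of an
ordinary XArray caller.  Such callers should need no changes: the xa_node
objects backing the tree come from xa_node_cachep, and migration happens
under xa_lock with the replacement node published via rcu_assign_pointer(),
so only the existing public XArray API is used here.

	#include <linux/xarray.h>

	/* Hypothetical example instance for illustration only. */
	static DEFINE_XARRAY(sketch_xa);

	static int sketch_store_and_load(void)
	{
		void *entry;
		int err;

		/* Growing the tree allocates xa_node objects internally. */
		err = xa_err(xa_store(&sketch_xa, 1000, xa_mk_value(42),
				      GFP_KERNEL));
		if (err)
			return err;

		/*
		 * Lookups remain valid across node migration: the new node
		 * is published with rcu_assign_pointer() and the old node
		 * is freed through the usual RCU-deferred path.
		 */
		entry = xa_load(&sketch_xa, 1000);

		return xa_is_value(entry) ? 0 : -ENOENT;
	}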