From patchwork Fri Mar 8 04:14:12 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844145
From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton
Cc: Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 01/15] slub: Create sysfs field /sys/slab//ops
Date: Fri, 8 Mar 2019 15:14:12 +1100
Message-Id: <20190308041426.16654-2-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

Create an ops field in /sys/slab/*/ops to contain all the callback
operations defined for a slab cache.  This will be used to display the
additional callbacks that will be defined soon to enable movable
objects.

Display the existing ctor callback in the ops field's contents.
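The ops file is built by appending one line per defined callback into the sysfs buffer and returning the accumulated length. A minimal userspace sketch of that incremental-sprintf idiom follows; `struct toy_cache`, `dummy_ctor` and the `ctor_name` string are stand-ins invented for illustration (the kernel code works on `struct kmem_cache` and prints the symbol with `%pS`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for struct kmem_cache: only the fields this sketch needs. */
struct toy_cache {
	void (*ctor)(void *);
	const char *ctor_name;	/* userspace stand-in for the kernel's %pS */
};

static void dummy_ctor(void *p) { (void)p; }

/*
 * Mirrors the ops_show() idiom: each defined callback appends a
 * "name : symbol" line at buf + x; the total length is returned.
 */
static int ops_show_sketch(const struct toy_cache *s, char *buf)
{
	int x = 0;

	if (!s->ctor)
		return 0;

	x += sprintf(buf + x, "ctor : %s\n", s->ctor_name);
	return x;
}
```

Later patches extend the same function with further `if (s->...)` blocks, which is why the length is accumulated in `x` rather than returned from a single sprintf.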
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 mm/slub.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index dc777761b6b7..69164aa7cbbf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5009,13 +5009,18 @@ static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 }
 SLAB_ATTR(cpu_partial);
 
-static ssize_t ctor_show(struct kmem_cache *s, char *buf)
+static ssize_t ops_show(struct kmem_cache *s, char *buf)
 {
+	int x = 0;
+
 	if (!s->ctor)
 		return 0;
-	return sprintf(buf, "%pS\n", s->ctor);
+
+	if (s->ctor)
+		x += sprintf(buf + x, "ctor : %pS\n", s->ctor);
+	return x;
 }
-SLAB_ATTR_RO(ctor);
+SLAB_ATTR_RO(ops);
 
 static ssize_t aliases_show(struct kmem_cache *s, char *buf)
 {
@@ -5428,7 +5433,7 @@ static struct attribute *slab_attrs[] = {
 	&objects_partial_attr.attr,
 	&partial_attr.attr,
 	&cpu_slabs_attr.attr,
-	&ctor_attr.attr,
+	&ops_attr.attr,
 	&aliases_attr.attr,
 	&align_attr.attr,
 	&hwcache_align_attr.attr,
From patchwork Fri Mar 8 04:14:13 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844147
From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton
Cc: Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 02/15] slub: Add isolate() and migrate() methods
Date: Fri, 8 Mar 2019 15:14:13 +1100
Message-Id: <20190308041426.16654-3-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

Add the two methods needed for moving objects and enable the display of
the callbacks via the /sys/kernel/slab interface.

Add documentation explaining the use of these methods and the prototypes
for slab.h.  Add functions to set up the callbacks for a slab cache.

Add empty functions for SLAB/SLOB.  The API is generic so it could
theoretically be implemented for these allocators as well.
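To make the two-phase contract concrete, a hypothetical subsystem's callback pair might look like this userspace sketch: isolate() pins every object in the array so nothing is freed underneath the migration, and migrate() does the allocate-copy-repoint-free dance. The kernel signatures also carry the `struct kmem_cache` and a NUMA node argument, dropped here for brevity; everything named `toy_*` is invented for illustration.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct toy_obj {
	int pinned;		/* stands in for a refcount taken by isolate */
	char payload[32];
};

/*
 * Isolate phase: pin each object so it cannot be freed before migrate
 * runs.  The return value is private data handed to migrate (unused).
 */
static void *toy_isolate(void **ptr, int nr)
{
	for (int i = 0; i < nr; i++)
		((struct toy_obj *)ptr[i])->pinned = 1;
	return NULL;
}

/*
 * Migrate phase: allocate a replacement for each object, copy the
 * contents, repoint the array entry, and free the old object (the pin
 * goes away with it).
 */
static void toy_migrate(void **ptr, int nr, void *private)
{
	(void)private;
	for (int i = 0; i < nr; i++) {
		struct toy_obj *old = ptr[i];
		struct toy_obj *new = malloc(sizeof(*new));

		memcpy(new->payload, old->payload, sizeof(new->payload));
		new->pinned = 0;
		ptr[i] = new;	/* callers must now see the new object */
		free(old);
	}
}
```

Note the division of context: isolate runs under slab locks and may only pin, while migrate runs with no locks held and may allocate and sleep, which is exactly the split the kerneldoc in this patch specifies.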
Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 include/linux/slab.h     | 69 ++++++++++++++++++++++++++++++++++++++++
 include/linux/slub_def.h |  3 ++
 mm/slab_common.c         |  4 +++
 mm/slub.c                | 42 ++++++++++++++++++++++++
 4 files changed, 118 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae405..22e87c41b8a4 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -152,6 +152,75 @@ void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *);
 void memcg_deactivate_kmem_caches(struct mem_cgroup *);
 void memcg_destroy_kmem_caches(struct mem_cgroup *);
 
+/*
+ * Function prototypes passed to kmem_cache_setup_mobility() to enable
+ * mobile objects and targeted reclaim in slab caches.
+ */
+
+/**
+ * typedef kmem_cache_isolate_func - Object migration callback function.
+ * @s: The cache we are working on.
+ * @ptr: Pointer to an array of pointers to the objects to migrate.
+ * @nr: Number of objects in array.
+ *
+ * The purpose of kmem_cache_isolate_func() is to pin each object so that
+ * they cannot be freed until kmem_cache_migrate_func() has processed
+ * them. This may be accomplished by increasing the refcount or setting
+ * a flag.
+ *
+ * The object pointer array passed is also passed to
+ * kmem_cache_migrate_func(). The function may remove objects from the
+ * array by setting pointers to NULL. This is useful if we can determine
+ * that an object is being freed because kmem_cache_isolate_func() was
+ * called when the subsystem was calling kmem_cache_free(). In that
+ * case it is not necessary to increase the refcount or specially mark
+ * the object because the release of the slab lock will lead to the
+ * immediate freeing of the object.
+ *
+ * Context: Called with locks held so that the slab objects cannot be
+ *          freed. We are in an atomic context and no slab operations
+ *          may be performed.
+ * Return: A pointer that is passed to the migrate function. If any
+ *         objects cannot be touched at this point then the pointer may
+ *         indicate a failure and then the migration function can simply
+ *         remove the references that were already obtained. The private
+ *         data could be used to track the objects that were already pinned.
+ */
+typedef void *kmem_cache_isolate_func(struct kmem_cache *s, void **ptr, int nr);
+
+/**
+ * typedef kmem_cache_migrate_func - Object migration callback function.
+ * @s: The cache we are working on.
+ * @ptr: Pointer to an array of pointers to the objects to migrate.
+ * @nr: Number of objects in array.
+ * @node: The NUMA node where the object should be allocated.
+ * @private: The pointer returned by kmem_cache_isolate_func().
+ *
+ * This function is responsible for migrating objects. Typically, for
+ * each object in the input array you will want to allocate a new
+ * object, copy the original object, update any pointers, and free the
+ * old object.
+ *
+ * After this function returns all pointers to the old object should now
+ * point to the new object.
+ *
+ * Context: Called with no locks held and interrupts enabled. Sleeping
+ *          is possible. Any operation may be performed.
+ */
+typedef void kmem_cache_migrate_func(struct kmem_cache *s, void **ptr,
+				     int nr, int node, void *private);
+
+/*
+ * kmem_cache_setup_mobility() is used to setup callbacks for a slab cache.
+ */
+#ifdef CONFIG_SLUB
+void kmem_cache_setup_mobility(struct kmem_cache *, kmem_cache_isolate_func,
+			       kmem_cache_migrate_func);
+#else
+static inline void kmem_cache_setup_mobility(struct kmem_cache *s,
+	kmem_cache_isolate_func isolate, kmem_cache_migrate_func migrate) {}
+#endif
+
 /*
  * Please use this macro to create slab caches. Simply specify the
  * name of the structure and maybe some flags that are listed above.
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3a1a1dbc6f49..a7340a1ed5dc 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,6 +99,9 @@ struct kmem_cache {
 	gfp_t allocflags;		/* gfp flags to use on each alloc */
 	int refcount;			/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
+	kmem_cache_isolate_func *isolate;
+	kmem_cache_migrate_func *migrate;
+
 	unsigned int inuse;		/* Offset to metadata */
 	unsigned int align;		/* Alignment */
 	unsigned int red_left_pad;	/* Left redzone padding size */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9d89c1b5977..754acdb292e4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -298,6 +298,10 @@ int slab_unmergeable(struct kmem_cache *s)
 	if (!is_root_cache(s))
 		return 1;
 
+	/*
+	 * s->isolate and s->migrate imply s->ctor so no need to
+	 * check them explicitly.
+	 */
 	if (s->ctor)
 		return 1;
 
diff --git a/mm/slub.c b/mm/slub.c
index 69164aa7cbbf..0133168d1089 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4325,6 +4325,34 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return err;
 }
 
+void kmem_cache_setup_mobility(struct kmem_cache *s,
+			       kmem_cache_isolate_func isolate,
+			       kmem_cache_migrate_func migrate)
+{
+	/*
+	 * Mobile objects must have a ctor otherwise the object may be
+	 * in an undefined state on allocation. Since the object may
+	 * need to be inspected by the migration function at any time
+	 * after allocation we must ensure that the object always has a
+	 * defined state.
+	 */
+	if (!s->ctor) {
+		pr_err("%s: cannot setup mobility without a constructor\n",
+		       s->name);
+		return;
+	}
+
+	s->isolate = isolate;
+	s->migrate = migrate;
+
+	/*
+	 * Sadly serialization requirements currently mean that we have
+	 * to disable fast cmpxchg based processing.
+	 */
+	s->flags &= ~__CMPXCHG_DOUBLE;
+}
+EXPORT_SYMBOL(kmem_cache_setup_mobility);
+
 void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 {
 	struct kmem_cache *s;
@@ -5018,6 +5046,20 @@ static ssize_t ops_show(struct kmem_cache *s, char *buf)
 
 	if (s->ctor)
 		x += sprintf(buf + x, "ctor : %pS\n", s->ctor);
+
+	if (s->isolate) {
+		x += sprintf(buf + x, "isolate : ");
+		x += sprint_symbol(buf + x,
+				   (unsigned long)s->isolate);
+		x += sprintf(buf + x, "\n");
+	}
+
+	if (s->migrate) {
+		x += sprintf(buf + x, "migrate : ");
+		x += sprint_symbol(buf + x,
+				   (unsigned long)s->migrate);
+		x += sprintf(buf + x, "\n");
+	}
 	return x;
 }
 SLAB_ATTR_RO(ops);

From patchwork Fri Mar 8 04:14:14 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844149
From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton
Cc: Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 03/15] tools/vm/slabinfo: Add support for -C and -F options
Date: Fri, 8 Mar 2019 15:14:14 +1100
Message-Id: <20190308041426.16654-4-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

-F lists caches that support object migration.

-C lists caches that use a ctor.

Add command line options to show caches with a constructor and caches
that are migratable (i.e. have isolate and migrate functions).

Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 tools/vm/slabinfo.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index 73818f1b2ef8..6ba8ffb4ea50 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm/slabinfo.c
@@ -33,6 +33,7 @@ struct slabinfo {
 	unsigned int hwcache_align, object_size, objs_per_slab;
 	unsigned int sanity_checks, slab_size, store_user, trace;
 	int order, poison, reclaim_account, red_zone;
+	int movable, ctor;
 	unsigned long partial, objects, slabs, objects_partial, objects_total;
 	unsigned long alloc_fastpath, alloc_slowpath;
 	unsigned long free_fastpath, free_slowpath;
@@ -67,6 +68,8 @@ int show_report;
 int show_alias;
 int show_slab;
 int skip_zero = 1;
+int show_movable;
+int show_ctor;
 int show_numa;
 int show_track;
 int show_first_alias;
@@ -109,14 +112,17 @@ static void fatal(const char *x, ...)
 
 static void usage(void)
 {
-	printf("slabinfo 4/15/2011. (c) 2007 sgi/(c) 2011 Linux Foundation.\n\n"
-	       "slabinfo [-aADefhilnosrStTvz1LXBU] [N=K] [-dafzput] [slab-regexp]\n"
+	printf("slabinfo 4/15/2017. (c) 2007 sgi/(c) 2011 Linux Foundation/(c) 2017 Jump Trading LLC.\n\n"
+	       "slabinfo [-aACDefFhilnosrStTvz1LXBU] [N=K] [-dafzput] [slab-regexp]\n"
+	       "-a|--aliases           Show aliases\n"
 	       "-A|--activity          Most active slabs first\n"
 	       "-B|--Bytes             Show size in bytes\n"
+	       "-C|--ctor              Show slabs with ctors\n"
 	       "-D|--display-active    Switch line format to activity\n"
 	       "-e|--empty             Show empty slabs\n"
 	       "-f|--first-alias       Show first alias\n"
+	       "-F|--movable           Show caches that support movable objects\n"
 	       "-h|--help              Show usage information\n"
 	       "-i|--inverted          Inverted list\n"
 	       "-l|--slabs             Show slabs\n"
@@ -588,6 +594,12 @@ static void slabcache(struct slabinfo *s)
 	if (show_empty && s->slabs)
 		return;
 
+	if (show_movable && !s->movable)
+		return;
+
+	if (show_ctor && !s->ctor)
+		return;
+
 	if (sort_loss == 0)
 		store_size(size_str, slab_size(s));
 	else
@@ -602,6 +614,10 @@ static void slabcache(struct slabinfo *s)
 		*p++ = '*';
 	if (s->cache_dma)
 		*p++ = 'd';
+	if (s->movable)
+		*p++ = 'F';
+	if (s->ctor)
+		*p++ = 'C';
 	if (s->hwcache_align)
 		*p++ = 'A';
 	if (s->poison)
@@ -636,7 +652,8 @@ static void slabcache(struct slabinfo *s)
 	printf("%-21s %8ld %7d %15s %14s %4d %1d %3ld %3ld %s\n",
 	       s->name, s->objects, s->object_size, size_str, dist_str,
 	       s->objs_per_slab, s->order,
-	       s->slabs ? (s->partial * 100) / s->slabs : 100,
+	       s->slabs ? (s->partial * 100) /
+			(s->slabs * s->objs_per_slab) : 100,
 	       s->slabs ? (s->objects * s->object_size * 100) /
 			(s->slabs * (page_size << s->order)) : 100,
 	       flags);
@@ -1256,6 +1273,13 @@ static void read_slab_dir(void)
 		slab->alloc_node_mismatch = get_obj("alloc_node_mismatch");
 		slab->deactivate_bypass = get_obj("deactivate_bypass");
 		chdir("..");
+		if (read_slab_obj(slab, "ops")) {
+			if (strstr(buffer, "ctor :"))
+				slab->ctor = 1;
+			if (strstr(buffer, "migrate :"))
+				slab->movable = 1;
+		}
+
 		if (slab->name[0] == ':')
 			alias_targets++;
 		slab++;
@@ -1332,6 +1356,8 @@ static void xtotals(void)
 }
 
 struct option opts[] = {
+	{ "ctor", no_argument, NULL, 'C' },
+	{ "movable", no_argument, NULL, 'F' },
	{ "aliases", no_argument, NULL, 'a' },
	{ "activity", no_argument, NULL, 'A' },
	{ "debug", optional_argument, NULL, 'd' },
@@ -1367,7 +1393,7 @@ int main(int argc, char *argv[])
 
 	page_size = getpagesize();
 
-	while ((c = getopt_long(argc, argv, "aAd::Defhil1noprstvzTSN:LXBU",
+	while ((c = getopt_long(argc, argv, "aACd::DefFhil1noprstvzTSN:LXBU",
				opts, NULL)) != -1)
		switch (c) {
		case '1':
@@ -1423,6 +1449,12 @@ int main(int argc, char *argv[])
		case 'z':
			skip_zero = 0;
			break;
+		case 'C':
+			show_ctor = 1;
+			break;
+		case 'F':
+			show_movable = 1;
+			break;
		case 'T':
			show_totals = 1;
			break;
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 04/15] slub: Enable Slab Movable Objects (SMO)
Date: Fri, 8 Mar 2019 15:14:15 +1100
Message-Id: <20190308041426.16654-5-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

We now have in place a mechanism for adding callbacks to a cache in
order to be able to implement object migration.

Add a function __move() that implements SMO by moving all objects in a
slab page using the isolate/migrate callback methods.

Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 mm/slub.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 0133168d1089..6ce866b420f1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4325,6 +4325,91 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return err;
 }
 
+/*
+ * Allocate a slab scratch space that is sufficient to keep pointers to
+ * individual objects for all objects in cache and also a bitmap for the
+ * objects (used to mark which objects are active).
+ */
+static inline void *alloc_scratch(struct kmem_cache *s)
+{
+	unsigned int size = oo_objects(s->max);
+
+	return kmalloc(size * sizeof(void *) +
+		       BITS_TO_LONGS(size) * sizeof(unsigned long),
+		       GFP_KERNEL);
+}
+
+/*
+ * __move() - Move all objects in the given slab.
+ * @page: The slab we are working on.
+ * @scratch: Pointer to scratch space.
+ * @node: The target node to move objects to.
+ *
+ * If the target node is not the current node then the object is moved
+ * to the target node. If the target node is the current node then this
+ * is an effective way of defragmentation since the current slab page
+ * with its objects is exempt from allocation.
+ */
+static void __move(struct page *page, void *scratch, int node)
+{
+	unsigned long objects;
+	struct kmem_cache *s;
+	unsigned long flags;
+	unsigned long *map;
+	void *private;
+	int count;
+	void *p;
+	void **vector = scratch;
+	void *addr = page_address(page);
+
+	local_irq_save(flags);
+	slab_lock(page);
+
+	BUG_ON(!PageSlab(page));	/* Must be a slab page */
+	BUG_ON(!page->frozen);		/* Slab must have been frozen earlier */
+
+	s = page->slab_cache;
+	objects = page->objects;
+	map = scratch + objects * sizeof(void **);
+
+	/* Determine used objects */
+	bitmap_fill(map, objects);
+	for (p = page->freelist; p; p = get_freepointer(s, p))
+		__clear_bit(slab_index(p, s, addr), map);
+
+	/* Build vector of pointers to objects */
+	count = 0;
+	memset(vector, 0, objects * sizeof(void **));
+	for_each_object(p, s, addr, objects)
+		if (test_bit(slab_index(p, s, addr), map))
+			vector[count++] = p;
+
+	if (s->isolate)
+		private = s->isolate(s, vector, count);
+	else
+		/* Objects do not need to be isolated */
+		private = NULL;
+
+	/*
+	 * Pinned the objects. Now we can drop the slab lock. The slab
+	 * is frozen so it cannot vanish from under us nor will
+	 * allocations be performed on the slab. However, unlocking the
+	 * slab will allow concurrent slab_frees to proceed. So the
+	 * subsystem must have a way to tell from the content of the
+	 * object that it was freed.
+	 *
+	 * If neither RCU nor ctor is being used then the object may be
+	 * modified by the allocator after being freed which may disrupt
+	 * the ability of the migrate function to tell if the object is
+	 * free or not.
+	 */
+	slab_unlock(page);
+	local_irq_restore(flags);
+
+	/* Perform callback to move the objects */
+	s->migrate(s, vector, count, node, private);
+}
+
 void kmem_cache_setup_mobility(struct kmem_cache *s,
 			       kmem_cache_isolate_func isolate,
 			       kmem_cache_migrate_func migrate)
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 05/15] slub: Sort slab cache list
Date: Fri, 8 Mar 2019 15:14:16 +1100
Message-Id: <20190308041426.16654-6-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

It is advantageous to have all defragmentable slabs together at the
beginning of the list of slabs so that there is no need to scan the
complete list. Put defragmentable caches first when adding a slab
cache and others last.

Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 mm/slab_common.c | 2 +-
 mm/slub.c        | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 754acdb292e4..1d492b59eee1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -397,7 +397,7 @@ static struct kmem_cache *create_cache(const char *name,
 		goto out_free_cache;
 
 	s->refcount = 1;
-	list_add(&s->list, &slab_caches);
+	list_add_tail(&s->list, &slab_caches);
 	memcg_link_cache(s);
 out:
 	if (err)
diff --git a/mm/slub.c b/mm/slub.c
index 6ce866b420f1..f37103e22d3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4427,6 +4427,8 @@ void kmem_cache_setup_mobility(struct kmem_cache *s,
 		return;
 	}
 
+	mutex_lock(&slab_mutex);
+
 	s->isolate = isolate;
 	s->migrate = migrate;
 
@@ -4435,6 +4437,10 @@ void kmem_cache_setup_mobility(struct kmem_cache *s,
 	 * to disable fast cmpxchg based processing.
 	 */
 	s->flags &= ~__CMPXCHG_DOUBLE;
+
+	list_move(&s->list, &slab_caches);	/* Move to top */
+
+	mutex_unlock(&slab_mutex);
 }
 EXPORT_SYMBOL(kmem_cache_setup_mobility);
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 06/15] tools/vm/slabinfo: Add remote node defrag ratio output
Date: Fri, 8 Mar 2019 15:14:17 +1100
Message-Id: <20190308041426.16654-7-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

Add output line for NUMA remote node defrag ratio.

Signed-off-by: Tobin C. Harding
---
 tools/vm/slabinfo.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index 6ba8ffb4ea50..9cdccdaca349 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm/slabinfo.c
@@ -34,6 +34,7 @@ struct slabinfo {
 	unsigned int sanity_checks, slab_size, store_user, trace;
 	int order, poison, reclaim_account, red_zone;
 	int movable, ctor;
+	int remote_node_defrag_ratio;
 	unsigned long partial, objects, slabs, objects_partial, objects_total;
 	unsigned long alloc_fastpath, alloc_slowpath;
 	unsigned long free_fastpath, free_slowpath;
@@ -377,6 +378,10 @@ static void slab_numa(struct slabinfo *s, int mode)
 	if (skip_zero && !s->slabs)
 		return;
 
+	if (mode) {
+		printf("\nNUMA remote node defrag ratio: %3d\n",
+		       s->remote_node_defrag_ratio);
+	}
 	if (!line) {
 		printf("\n%-21s:", mode ? "NUMA nodes" : "Slab");
 		for(node = 0; node <= highest_node; node++)
@@ -1272,6 +1277,8 @@ static void read_slab_dir(void)
 			slab->cpu_partial_free = get_obj("cpu_partial_free");
 			slab->alloc_node_mismatch = get_obj("alloc_node_mismatch");
 			slab->deactivate_bypass = get_obj("deactivate_bypass");
+			slab->remote_node_defrag_ratio =
+				get_obj("remote_node_defrag_ratio");
 			chdir("..");
 			if (read_slab_obj(slab, "ops")) {
 				if (strstr(buffer, "ctor :"))
iXfHRe9JIhvEHQRKO+Y3lTl2x10ZdvGwZFADEjgOD3pi/ecBq9+woSLAdanhdpEd K0FUH/nZILktpcgtrxj2OiB6OVsIND0pzQULCFezSZvHEXBoG2EFNxOreyZM/YtW 6pjuI2Sr61CC2w== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedutddrfeelgdeifecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepfdfvohgsihhn ucevrdcujfgrrhguihhnghdfuceothhosghinheskhgvrhhnvghlrdhorhhgqeenucfkph epuddvgedrudeiledrhedrudehkeenucfrrghrrghmpehmrghilhhfrhhomhepthhosghi nheskhgvrhhnvghlrdhorhhgnecuvehluhhsthgvrhfuihiivgepie X-ME-Proxy: Received: from eros.localdomain (124-169-5-158.dyn.iinet.net.au [124.169.5.158]) by mail.messagingengine.com (Postfix) with ESMTPA id 736AAE4548; Thu, 7 Mar 2019 23:15:20 -0500 (EST) From: "Tobin C. Harding" To: Andrew Morton Cc: "Tobin C. Harding" , Christopher Lameter , Pekka Enberg , Matthew Wilcox , Tycho Andersen , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [RFC 07/15] slub: Add defrag_used_ratio field and sysfs support Date: Fri, 8 Mar 2019 15:14:18 +1100 Message-Id: <20190308041426.16654-8-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190308041426.16654-1-tobin@kernel.org> References: <20190308041426.16654-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP In preparation for enabling defragmentation of slab pages. "defrag_used_ratio" is used to set the threshold at which defragmentation should be attempted on a slab page. "defrag_used_ratio" is a percentage in the range of 0 - 100 (inclusive). If less than that percentage of slots in a slab page are in use then the slab page will become subject to defragmentation. Add a defrag ratio field and set it to 30% by default. 
A limit of 30% specifies that more than 3 out of 10 available slots for
objects need to be in use, otherwise slab defragmentation will be
attempted on the remaining objects.

Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 Documentation/ABI/testing/sysfs-kernel-slab | 14 ++++++++++++++
 include/linux/slub_def.h                    |  7 +++++++
 mm/slub.c                                   | 24 +++++++++++++++++++++
 3 files changed, 45 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-kernel-slab b/Documentation/ABI/testing/sysfs-kernel-slab
index 29601d93a1c2..7770c03be6b4 100644
--- a/Documentation/ABI/testing/sysfs-kernel-slab
+++ b/Documentation/ABI/testing/sysfs-kernel-slab
@@ -180,6 +180,20 @@ Description:
 		list.  It can be written to clear the current count.
 		Available when CONFIG_SLUB_STATS is enabled.
 
+What:		/sys/kernel/slab/cache/defrag_used_ratio
+Date:		February 2019
+KernelVersion:	5.0
+Contact:	Christoph Lameter
+		Pekka Enberg
+Description:
+		The defrag_used_ratio file allows the control of how aggressive
+		slab fragmentation reduction works at reclaiming objects from
+		sparsely populated slabs. This is a percentage. If a slab has
+		less than this percentage of objects allocated then reclaim will
+		attempt to reclaim objects so that the whole slab page can be
+		freed. 0% specifies no reclaim attempt (defrag disabled), 100%
+		specifies attempt to reclaim all pages. The default is 30%.
+
 What:		/sys/kernel/slab/cache/deactivate_to_tail
 Date:		February 2008
 KernelVersion:	2.6.25
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a7340a1ed5dc..6da6197ca973 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -107,6 +107,13 @@ struct kmem_cache {
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
+	int defrag_used_ratio;		/*
+					 * Ratio used to check against the
+					 * percentage of objects allocated in a
+					 * slab page. If less than this ratio
+					 * is allocated then reclaim attempts
+					 * are made.
+					 */
 #ifdef CONFIG_SYSFS
 	struct kobject kobj;	/* For sysfs */
 	struct work_struct kobj_remove_work;
diff --git a/mm/slub.c b/mm/slub.c
index f37103e22d3f..515db0f36c55 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3642,6 +3642,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 
 	set_cpu_partial(s);
 
+	s->defrag_used_ratio = 30;
 #ifdef CONFIG_NUMA
 	s->remote_node_defrag_ratio = 1000;
 #endif
@@ -5261,6 +5262,28 @@ static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
 }
 SLAB_ATTR_RO(destroy_by_rcu);
 
+static ssize_t defrag_used_ratio_show(struct kmem_cache *s, char *buf)
+{
+	return sprintf(buf, "%d\n", s->defrag_used_ratio);
+}
+
+static ssize_t defrag_used_ratio_store(struct kmem_cache *s,
+				       const char *buf, size_t length)
+{
+	unsigned long ratio;
+	int err;
+
+	err = kstrtoul(buf, 10, &ratio);
+	if (err)
+		return err;
+
+	if (ratio <= 100)
+		s->defrag_used_ratio = ratio;
+
+	return length;
+}
+SLAB_ATTR(defrag_used_ratio);
+
 #ifdef CONFIG_SLUB_DEBUG
 static ssize_t slabs_show(struct kmem_cache *s, char *buf)
 {
@@ -5585,6 +5608,7 @@ static struct attribute *slab_attrs[] = {
 	&validate_attr.attr,
 	&alloc_calls_attr.attr,
 	&free_calls_attr.attr,
+	&defrag_used_ratio_attr.attr,
 #endif
 #ifdef CONFIG_ZONE_DMA
 	&cache_dma_attr.attr,

From patchwork Fri Mar 8 04:14:19 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844159
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg,
    Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [RFC 08/15] tools/vm/slabinfo: Add defrag_used_ratio output
Date: Fri, 8 Mar 2019 15:14:19 +1100
Message-Id: <20190308041426.16654-9-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

Add output for the newly added defrag_used_ratio sysfs knob.

Signed-off-by: Tobin C.
Harding
---
 tools/vm/slabinfo.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index 9cdccdaca349..8cf3bbd061e2 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm/slabinfo.c
@@ -34,6 +34,7 @@ struct slabinfo {
 	unsigned int sanity_checks, slab_size, store_user, trace;
 	int order, poison, reclaim_account, red_zone;
 	int movable, ctor;
+	int defrag_used_ratio;
 	int remote_node_defrag_ratio;
 	unsigned long partial, objects, slabs, objects_partial, objects_total;
 	unsigned long alloc_fastpath, alloc_slowpath;
@@ -549,6 +550,8 @@ static void report(struct slabinfo *s)
 		printf("** Slabs are destroyed via RCU\n");
 	if (s->reclaim_account)
 		printf("** Reclaim accounting active\n");
+	if (s->movable)
+		printf("** Defragmentation at %d%%\n", s->defrag_used_ratio);
 
 	printf("\nSizes (bytes)     Slabs              Debug                Memory\n");
 	printf("------------------------------------------------------------------------\n");
@@ -1279,6 +1282,7 @@ static void read_slab_dir(void)
 			slab->deactivate_bypass = get_obj("deactivate_bypass");
 			slab->remote_node_defrag_ratio =
 						get_obj("remote_node_defrag_ratio");
+			slab->defrag_used_ratio = get_obj("defrag_used_ratio");
 			chdir("..");
 			if (read_slab_obj(slab, "ops")) {
 				if (strstr(buffer, "ctor :"))

From patchwork Fri Mar 8 04:14:20 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844161
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg,
    Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [RFC 09/15] slub: Enable slab defragmentation using SMO
Date: Fri, 8 Mar 2019 15:14:20 +1100
Message-Id: <20190308041426.16654-10-tobin@kernel.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

If many objects are allocated with the slab allocator and freed in an
arbitrary order then the slab caches can become internally fragmented.
Now that the slab allocator supports movable objects we can defragment
any cache that has this feature enabled.

Slab defragmentation may occur:

 1. Unconditionally when __kmem_cache_shrink() is called on a slab cache
    by the kernel calling kmem_cache_shrink().

 2. Unconditionally through the use of the slabinfo command.

	slabinfo <cache> -s

 3. Conditionally via the use of kmem_cache_defrag().

Use SMO when shrinking a cache. Currently when the kernel calls
kmem_cache_shrink() we curate the partial slabs list. If object
migration is not enabled for the cache we still do this; if, however,
SMO is enabled, we attempt to move objects in partially full slabs in
order to defragment the cache. Shrink attempts to move all objects in
order to reduce the cache to a single partial slab for each node.

kmem_cache_defrag() differs from shrink in that it operates dependent on
the defrag_used_ratio and only attempts to move objects if the number of
partial slabs exceeds MAX_PARTIAL (for each node).

Add function kmem_cache_defrag(int node). kmem_cache_defrag() only
performs defragmentation if the usage ratio of the slab is lower than
the configured percentage (sysfs file added in the previous patch).
Fragmentation ratios are measured by calculating the percentage of
objects in use compared to the total number of objects that the slab
page can accommodate.

The scanning of slab caches is optimized because the defragmentable
slabs come first on the list. Thus we can terminate scans on the first
slab encountered that does not support defragmentation.

kmem_cache_defrag() takes a node parameter. This can either be -1 if
defragmentation should be performed on all nodes, or a node number.

Defragmentation may be disabled by setting the defrag ratio to 0:

	echo 0 > /sys/kernel/slab/<cache>/defrag_used_ratio

In order for a cache to be defragmentable the cache must support object
migration (SMO). Enabling SMO for a cache is done via a call to the
recently added function:

	void kmem_cache_setup_mobility(struct kmem_cache *,
				       kmem_cache_isolate_func,
				       kmem_cache_migrate_func);

Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C.
Harding --- include/linux/slab.h | 1 + mm/slub.c | 266 +++++++++++++++++++++++++++++++------------ 2 files changed, 194 insertions(+), 73 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 22e87c41b8a4..b9b46bc9937e 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -147,6 +147,7 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name, void (*ctor)(void *)); void kmem_cache_destroy(struct kmem_cache *); int kmem_cache_shrink(struct kmem_cache *); +int kmem_cache_defrag(int node); void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *); void memcg_deactivate_kmem_caches(struct mem_cgroup *); diff --git a/mm/slub.c b/mm/slub.c index 515db0f36c55..53dd4cb5b5a4 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -354,6 +354,12 @@ static __always_inline void slab_lock(struct page *page) bit_spin_lock(PG_locked, &page->flags); } +static __always_inline int slab_trylock(struct page *page) +{ + VM_BUG_ON_PAGE(PageTail(page), page); + return bit_spin_trylock(PG_locked, &page->flags); +} + static __always_inline void slab_unlock(struct page *page) { VM_BUG_ON_PAGE(PageTail(page), page); @@ -3959,79 +3965,6 @@ void kfree(const void *x) } EXPORT_SYMBOL(kfree); -#define SHRINK_PROMOTE_MAX 32 - -/* - * kmem_cache_shrink discards empty slabs and promotes the slabs filled - * up most to the head of the partial lists. New allocations will then - * fill those up and thus they can be removed from the partial lists. - * - * The slabs with the least items are placed last. This results in them - * being allocated from last increasing the chance that the last objects - * are freed in them. 
- */ -int __kmem_cache_shrink(struct kmem_cache *s) -{ - int node; - int i; - struct kmem_cache_node *n; - struct page *page; - struct page *t; - struct list_head discard; - struct list_head promote[SHRINK_PROMOTE_MAX]; - unsigned long flags; - int ret = 0; - - flush_all(s); - for_each_kmem_cache_node(s, node, n) { - INIT_LIST_HEAD(&discard); - for (i = 0; i < SHRINK_PROMOTE_MAX; i++) - INIT_LIST_HEAD(promote + i); - - spin_lock_irqsave(&n->list_lock, flags); - - /* - * Build lists of slabs to discard or promote. - * - * Note that concurrent frees may occur while we hold the - * list_lock. page->inuse here is the upper limit. - */ - list_for_each_entry_safe(page, t, &n->partial, lru) { - int free = page->objects - page->inuse; - - /* Do not reread page->inuse */ - barrier(); - - /* We do not keep full slabs on the list */ - BUG_ON(free <= 0); - - if (free == page->objects) { - list_move(&page->lru, &discard); - n->nr_partial--; - } else if (free <= SHRINK_PROMOTE_MAX) - list_move(&page->lru, promote + free - 1); - } - - /* - * Promote the slabs filled up most to the head of the - * partial list. - */ - for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--) - list_splice(promote + i, &n->partial); - - spin_unlock_irqrestore(&n->list_lock, flags); - - /* Release empty slabs */ - list_for_each_entry_safe(page, t, &discard, lru) - discard_slab(s, page); - - if (slabs_node(s, node)) - ret = 1; - } - - return ret; -} - #ifdef CONFIG_MEMCG static void kmemcg_cache_deact_after_rcu(struct kmem_cache *s) { @@ -4411,6 +4344,193 @@ static void __move(struct page *page, void *scratch, int node) s->migrate(s, vector, count, node, private); } +/* + * __defrag() - Defragment node. + * @s: cache we are working on. + * @node: The node to move objects from. + * @target_node: The node to move objects to. + * @ratio: The defrag ratio (percentage, between 0 and 100). 
+ * + * Release slabs with zero objects and try to call the migration function + * for slabs with less than the 'ratio' percentage of objects allocated. + * + * Moved objects are allocated on @target_node. + * + * Return: The number of partial slabs left on the node after the operation. + */ +static unsigned long __defrag(struct kmem_cache *s, int node, int target_node, + int ratio) +{ + struct kmem_cache_node *n = get_node(s, node); + struct page *page, *page2; + LIST_HEAD(move_list); + unsigned long flags; + + if (node == target_node && n->nr_partial <= 1) { + /* + * Trying to reduce fragmentation on a node but there is + * only a single or no partial slab page. This is already + * the optimal object density that we can reach. + */ + return n->nr_partial; + } + + spin_lock_irqsave(&n->list_lock, flags); + list_for_each_entry_safe(page, page2, &n->partial, lru) { + if (!slab_trylock(page)) + /* Busy slab. Get out of the way */ + continue; + + if (page->inuse) { + if (page->inuse > ratio * page->objects / 100) { + slab_unlock(page); + /* + * Skip slab because the object density + * in the slab page is high enough. + */ + continue; + } + + list_move(&page->lru, &move_list); + if (s->migrate) { + /* Stop page being considered for allocations */ + n->nr_partial--; + page->frozen = 1; + } + slab_unlock(page); + } else { /* Empty slab page */ + list_del(&page->lru); + n->nr_partial--; + slab_unlock(page); + discard_slab(s, page); + } + } + + if (!s->migrate) { + /* + * No defrag method. By simply putting the zaplist at the + * end of the partial list we can let them simmer longer + * and thus increase the chance of all objects being + * reclaimed. 
+		 */
+		list_splice(&move_list, n->partial.prev);
+	}
+
+	spin_unlock_irqrestore(&n->list_lock, flags);
+
+	if (s->migrate && !list_empty(&move_list)) {
+		void **scratch = alloc_scratch(s);
+		struct page *page, *page2;
+
+		if (scratch) {
+			/* Try to remove / move the objects left */
+			list_for_each_entry(page, &move_list, lru) {
+				if (page->inuse)
+					__move(page, scratch, target_node);
+			}
+			kfree(scratch);
+		}
+
+		/* Inspect results and dispose of pages */
+		spin_lock_irqsave(&n->list_lock, flags);
+		list_for_each_entry_safe(page, page2, &move_list, lru) {
+			list_del(&page->lru);
+			slab_lock(page);
+			page->frozen = 0;
+
+			if (page->inuse) {
+				/*
+				 * Objects left in slab page, move it to the
+				 * tail of the partial list to increase the
+				 * chance that the freeing of the remaining
+				 * objects will free the slab page.
+				 */
+				n->nr_partial++;
+				list_add_tail(&page->lru, &n->partial);
+				slab_unlock(page);
+			} else {
+				slab_unlock(page);
+				discard_slab(s, page);
+			}
+		}
+		spin_unlock_irqrestore(&n->list_lock, flags);
+	}
+
+	return n->nr_partial;
+}
+
+/**
+ * kmem_cache_defrag() - Defrag slab caches.
+ * @node: The node to defrag or -1 for all nodes.
+ *
+ * Defrag slabs conditional on the amount of fragmentation in a page.
+ */
+int kmem_cache_defrag(int node)
+{
+	struct kmem_cache *s;
+	unsigned long left = 0;
+
+	/*
+	 * kmem_cache_defrag may be called from the reclaim path which may be
+	 * called for any page allocator alloc. So there is the danger that we
+	 * get called in a situation where slub already acquired the slub_lock
+	 * for other purposes.
+	 */
+	if (!mutex_trylock(&slab_mutex))
+		return 0;
+
+	list_for_each_entry(s, &slab_caches, list) {
+		/*
+		 * Defragmentable caches come first. If the slab cache is not
+		 * defragmentable then we can stop traversing the list.
+		 */
+		if (!s->migrate)
+			break;
+
+		if (node == -1) {
+			int nid;
+
+			for_each_node_state(nid, N_NORMAL_MEMORY)
+				if (s->node[nid]->nr_partial > MAX_PARTIAL)
+					left += __defrag(s, nid, nid, s->defrag_used_ratio);
+		} else {
+			if (s->node[node]->nr_partial > MAX_PARTIAL)
+				left += __defrag(s, node, node, s->defrag_used_ratio);
+		}
+	}
+	mutex_unlock(&slab_mutex);
+	return left;
+}
+EXPORT_SYMBOL(kmem_cache_defrag);
+
+/**
+ * __kmem_cache_shrink() - Shrink a cache.
+ * @s: The cache to shrink.
+ *
+ * Reduces the memory footprint of a slab cache by as much as possible.
+ *
+ * This works by:
+ *  1. Removing empty slabs from the partial list.
+ *  2. Migrating slab objects to denser slab pages if the slab cache
+ *     supports migration.  If not, reorganizing the partial list so that
+ *     more densely allocated slab pages come first.
+ *
+ * Not called directly, called by kmem_cache_shrink().
+ */
+int __kmem_cache_shrink(struct kmem_cache *s)
+{
+	int node;
+	int left = 0;
+
+	flush_all(s);
+	for_each_node_state(node, N_NORMAL_MEMORY)
+		left += __defrag(s, node, node, 100);
+
+	return left;
+}
+EXPORT_SYMBOL(__kmem_cache_shrink);
+
 void kmem_cache_setup_mobility(struct kmem_cache *s,
			       kmem_cache_isolate_func isolate,
			       kmem_cache_migrate_func migrate)

From patchwork Fri Mar 8 04:14:21 2019
X-Patchwork-Submitter: "Tobin C.
Harding" X-Patchwork-Id: 10844163
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg,
    Matthew Wilcox, Tycho Andersen,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 10/15] tools/testing/slab: Add object migration test module
Date: Fri, 8 Mar 2019 15:14:21 +1100
Message-Id: <20190308041426.16654-11-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

We just implemented slab movable objects for the SLUB allocator.  We
should test that code.  In order to do so we need to be able to do a
number of things

 - Create a cache
 - Allocate objects to the cache
 - Free objects from within specific slabs of the cache
 - Enable Slab Movable Objects for the cache

We can do all this via a loadable module.
Add a module that defines functions that can be triggered from
userspace via a debugfs entry.  From the source:

/*
 * SLUB defragmentation a.k.a. Slab Movable Objects (SMO).
 *
 * This module is used for testing the SLUB allocator.  Enables
 * userspace to run kernel functions via a debugfs file.
 *
 * debugfs: /sys/kernel/debug/smo/callfn (write only)
 *
 * String written to `callfn` is parsed by the module and the associated
 * function is called.  See fn_tab for mapping of strings to functions.
 */

References to allocated objects are kept by the module in a linked list
so that userspace can control which object to free.

We introduce the following four functions via the function table

 "alloc X":    Allocates X objects
 "free X [Y]": Frees X objects starting at list position Y (default Y==0)
 "enable":     Enables object migration for the test cache.
 "test":       Runs [stress] tests from within the module (see below).

	{"alloc", smo_alloc_objects},
	{"free", smo_free_object},
	{"enable", smo_enable_cache_mobility},
	{"test", smo_run_module_tests},

Freeing from the start of the list creates a hole in the slab being
freed from (i.e. creates a partial slab).

The results of running these commands can be seen using `slabinfo`
(available in tools/vm/):

	gcc -o slabinfo tools/vm/slabinfo.c

Stress tests can be run from within the module.  These tests are
internal to the module because we verify that object references are
still good after object migration.  These are called 'stress' tests
because it is intended that they create/free a lot of objects.
Userspace can control the number of objects to create, default is 1000.

Example test session
--------------------

Relevant /proc/slabinfo column headers:

  name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>

 # mount -t debugfs none /sys/kernel/debug/
 $ cd path/to/linux/tools/testing/slab; make
 ...
 # insmod slub_defrag.ko
 # cat /proc/slabinfo | grep smo_test | sed 's/:.*//'
 smo_test               0      0    392   20    2

From this we can see that the module created cache 'smo_test' with 20
objects per slab and 2 pages per slab (and cache is currently empty).

We can run the stress tests (with the default number of objects):

 # cd /sys/kernel/debug/smo
 # echo 'test' > callfn
 [    3.576617] smo: test using nr_objs: 1000 keep: 10
 [    3.580169] smo: Module tests completed successfully

And we can play with the slab allocator manually:

 # insmod slub_defrag.ko
 # echo 'alloc 21' > callfn
 # cat /proc/slabinfo | grep smo_test | sed 's/:.*//'
 smo_test              21     40    392   20    2

We see here that 21 active objects have been allocated creating 2
slabs (40 total objects).

 # slabinfo smo_test --report

 Slabcache: smo_test     Aliases:  0 Order :  1 Objects: 21

 Sizes (bytes)     Slabs              Debug                Memory
 ------------------------------------------------------------------------
 Object :      56  Total  :       2   Sanity Checks : On   Total: 16384
 SlabObj:     392  Full   :       1   Redzoning     : On   Used :  1176
 SlabSiz:    8192  Partial:       1   Poisoning     : On   Loss : 15208
 Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig:  7056
 Align  :       8  Objects:      20   Tracing       : Off  Lpadd:   704

Now free an object from the first slot of the first slab:

 # echo 'free 1' > callfn
 # cat /proc/slabinfo | grep smo_test | sed 's/:.*//'
 smo_test              20     40    392   20    2

 # slabinfo smo_test --report

 Slabcache: smo_test     Aliases:  0 Order :  1 Objects: 20

 Sizes (bytes)     Slabs              Debug                Memory
 ------------------------------------------------------------------------
 Object :      56  Total  :       2   Sanity Checks : On   Total: 16384
 SlabObj:     392  Full   :       0   Redzoning     : On   Used :  1120
 SlabSiz:    8192  Partial:       2   Poisoning     : On   Loss : 15264
 Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig:  6720
 Align  :       8  Objects:      20   Tracing       : Off  Lpadd:   704

Calling shrink now on the cache does nothing because object migration
is not enabled (output omitted).
If we enable object migration and then shrink the cache, we expect the
object from the second slab to be moved to the first slot in the first
slab and the second slab to be removed from the partial list.

 # echo 'enable' > callfn
 # slabinfo smo_test --shrink
 # slabinfo smo_test --report

 Slabcache: smo_test     Aliases:  0 Order :  1 Objects: 20
 ** Defragmentation at 30%

 Sizes (bytes)     Slabs              Debug                Memory
 ------------------------------------------------------------------------
 Object :      56  Total  :       1   Sanity Checks : On   Total:  8192
 SlabObj:     392  Full   :       1   Redzoning     : On   Used :  1120
 SlabSiz:    8192  Partial:       0   Poisoning     : On   Loss :  7072
 Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig:  6720
 Align  :       8  Objects:      20   Tracing       : Off  Lpadd:   352

Signed-off-by: Tobin C. Harding
---
 tools/testing/slab/Makefile      |  10 +
 tools/testing/slab/slub_defrag.c | 566 +++++++++++++++++++++++++++++++
 2 files changed, 576 insertions(+)
 create mode 100644 tools/testing/slab/Makefile
 create mode 100644 tools/testing/slab/slub_defrag.c

diff --git a/tools/testing/slab/Makefile b/tools/testing/slab/Makefile
new file mode 100644
index 000000000000..440c2e3e356f
--- /dev/null
+++ b/tools/testing/slab/Makefile
@@ -0,0 +1,10 @@
+obj-m += slub_defrag.o
+
+KTREE=../../..
+
+all:
+	make -C ${KTREE} M=$(PWD) modules
+
+clean:
+	make -C ${KTREE} M=$(PWD) clean
+
diff --git a/tools/testing/slab/slub_defrag.c b/tools/testing/slab/slub_defrag.c
new file mode 100644
index 000000000000..502ddd8a67e8
--- /dev/null
+++ b/tools/testing/slab/slub_defrag.c
@@ -0,0 +1,566 @@
+// SPDX-License-Identifier: GPL-2.0+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * SLUB defragmentation a.k.a. Slab Movable Objects (SMO).
+ *
+ * This module is used for testing the SLUB allocator.  Enables
+ * userspace to run kernel functions via a debugfs file.
+ *
+ * debugfs: /sys/kernel/debug/smo/callfn (write only)
+ *
+ * String written to `callfn` is parsed by the module and the associated
+ * function is called.  See fn_tab for mapping of strings to functions.
+ */
+
+/* debugfs commands accept two optional arguments */
+#define SMO_CMD_DEFAUT_ARG -1
+
+#define SMO_DEBUGFS_DIR "smo"
+struct dentry *smo_debugfs_root;
+
+#define SMO_CACHE_NAME "smo_test"
+static struct kmem_cache *cachep;
+
+struct smo_slub_object {
+	struct list_head list;
+	char buf[32];	/* Unused except to control size of object */
+	long id;
+};
+
+/* Our list of allocated objects */
+LIST_HEAD(objects);
+
+static void list_add_to_objects(struct smo_slub_object *so)
+{
+	/*
+	 * We free from the front of the list so store at the
+	 * tail in order to put holes in the cache when we free.
+	 */
+	list_add_tail(&so->list, &objects);
+}
+
+/**
+ * smo_object_ctor() - SMO object constructor function.
+ * @ptr: Pointer to memory where the object should be constructed.
+ */
+void smo_object_ctor(void *ptr)
+{
+	struct smo_slub_object *so = ptr;
+
+	INIT_LIST_HEAD(&so->list);
+	memset(so->buf, 0, sizeof(so->buf));
+	so->id = -1;
+}
+
+/**
+ * smo_cache_migrate() - kmem_cache migrate function.
+ * @cp: kmem_cache pointer.
+ * @objs: Array of pointers to objects to migrate.
+ * @size: Number of objects in @objs.
+ * @node: NUMA node where the object should be allocated.
+ * @private: Pointer returned by kmem_cache_isolate_func().
+ */
+void smo_cache_migrate(struct kmem_cache *cp, void **objs, int size,
+		       int node, void *private)
+{
+	struct smo_slub_object **so_objs = (struct smo_slub_object **)objs;
+	struct smo_slub_object *so_old, *so_new;
+	int i;
+
+	for (i = 0; i < size; i++) {
+		so_old = so_objs[i];
+
+		so_new = kmem_cache_alloc_node(cachep, GFP_KERNEL, node);
+		if (!so_new) {
+			pr_debug("kmem_cache_alloc failed\n");
+			return;
+		}
+
+		/* Copy object */
+		so_new->id = so_old->id;
+
+		/* Update references to old object */
+		list_del(&so_old->list);
+		list_add_to_objects(so_new);
+
+		kmem_cache_free(cachep, so_old);
+	}
+}
+
+/*
+ * smo_alloc_objects() - Allocate objects and store reference.
+ * @nr_objs: Number of objects to allocate.
+ * @node: NUMA node to allocate objects on.
+ *
+ * Allocates @nr_objs smo_slub_objects.  Stores a reference to them in
+ * the global list of objects (at the tail of the list).
+ *
+ * Return: The number of objects allocated.
+ */
+static int smo_alloc_objects(int nr_objs, int node)
+{
+	struct smo_slub_object *so;
+	int i;
+
+	/* Set sane parameters if no args passed in */
+	if (nr_objs == SMO_CMD_DEFAUT_ARG)
+		nr_objs = 1;
+	if (node == SMO_CMD_DEFAUT_ARG)
+		node = NUMA_NO_NODE;
+
+	for (i = 0; i < nr_objs; i++) {
+		if (node == NUMA_NO_NODE)
+			so = kmem_cache_alloc(cachep, GFP_KERNEL);
+		else
+			so = kmem_cache_alloc_node(cachep, GFP_KERNEL, node);
+		if (!so) {
+			pr_err("smo: Failed to alloc object %d of %d\n", i, nr_objs);
+			return i;
+		}
+		list_add_to_objects(so);
+	}
+	return nr_objs;
+}
+
+/*
+ * smo_free_object() - Frees @nr_objs objects from position.
+ * @nr_objs: Number of objects to free.
+ * @pos: Position in global list to start freeing.
+ *
+ * Iterates over the global list of objects to position @pos then frees
+ * up to @nr_objs objects from there, stopping at the end of the list.
+ *
+ * Calling with @nr_objs==0 frees all objects starting at @pos.
+ *
+ * Return: Number of objects freed.
+ */
+static int smo_free_object(int nr_objs, int pos)
+{
+	struct smo_slub_object *cur, *tmp;
+	int deleted = 0;
+	int i = 0;
+
+	/* Set sane parameters if no args passed in */
+	if (nr_objs == SMO_CMD_DEFAUT_ARG)
+		nr_objs = 1;
+	if (pos == SMO_CMD_DEFAUT_ARG)
+		pos = 0;
+
+	list_for_each_entry_safe(cur, tmp, &objects, list) {
+		if (i < pos) {
+			i++;
+			continue;
+		}
+
+		list_del(&cur->list);
+		kmem_cache_free(cachep, cur);
+		deleted++;
+		if (deleted == nr_objs)
+			break;
+	}
+	return deleted;
+}
+
+static int smo_enable_cache_mobility(int _unused, int __unused)
+{
+	/* Enable movable objects: BOOM! */
+	kmem_cache_setup_mobility(cachep, NULL, smo_cache_migrate);
+	pr_info("smo: kmem_cache %s defrag enabled\n", SMO_CACHE_NAME);
+	return 0;
+}
+
+static int index_for_expected_id(long *expected, int size, long id)
+{
+	int i;
+
+	/* Array is unsorted, just iterate the whole thing */
+	for (i = 0; i < size; i++) {
+		if (expected[i] == id)
+			return i;
+	}
+	return -1; /* Not found */
+}
+
+static int assert_have_objects(int nr_objs, int keep)
+{
+	struct smo_slub_object *cur;
+	long *expected;	/* Array of expected IDs */
+	int nr_ids;	/* Length of array */
+	long id;
+	int index, i;
+
+	nr_ids = nr_objs / keep + 1;
+
+	expected = kmalloc_array(nr_ids, sizeof(long), GFP_KERNEL);
+	if (!expected)
+		return -ENOMEM;
+
+	id = 0;
+	for (i = 0; i < nr_ids; i++) {
+		expected[i] = id;
+		id += keep;
+	}
+
+	list_for_each_entry(cur, &objects, list) {
+		index = index_for_expected_id(expected, nr_ids, cur->id);
+		if (index < 0) {
+			pr_err("smo: ID not found: %ld\n", cur->id);
+			kfree(expected);
+			return -1;
+		}
+
+		if (expected[index] == -1) {
+			pr_err("smo: ID already encountered: %ld\n", cur->id);
+			kfree(expected);
+			return -1;
+		}
+		expected[index] = -1;
+	}
+	kfree(expected);
+	return 0;
+}
+
+/*
+ * smo_run_module_tests() - Runs unit tests from within the module
+ * @nr_objs: Number of objects to allocate.
+ * @keep: Free all but 1 in @keep objects.
+ *
+ * Allocates @nr_objs then iterates over the allocated objects
+ * freeing all but 1 out of every @keep objects i.e. for @keep==10
+ * keeps the first object then frees the next 9.
+ *
+ * Caller is responsible for ensuring that the cache has at most a
+ * single slab on the partial list without any objects in it.  This is
+ * easy enough to ensure, just call this when the module is freshly
+ * loaded.
+ */
+static int smo_run_module_tests(int nr_objs, int keep)
+{
+	struct smo_slub_object *so;
+	struct smo_slub_object *cur, *tmp;
+	long i;
+
+	if (!list_empty(&objects)) {
+		pr_err("smo: test requires clean module state\n");
+		return -1;
+	}
+
+	/* Set sane parameters if no args passed in */
+	if (nr_objs == SMO_CMD_DEFAUT_ARG)
+		nr_objs = 1000;
+	if (keep == SMO_CMD_DEFAUT_ARG)
+		keep = 10;
+
+	pr_info("smo: test using nr_objs: %d keep: %d\n", nr_objs, keep);
+
+	/* Perhaps we got called like this 'test 1000' */
+	if (keep == 0) {
+		pr_err("Usage: test <nr_objs> <keep>\n");
+		return -1;
+	}
+
+	/* Test constructor */
+	so = kmem_cache_alloc(cachep, GFP_KERNEL);
+	if (!so) {
+		pr_err("smo: Failed to alloc object\n");
+		return -1;
+	}
+	if (so->id != -1) {
+		pr_err("smo: Initial state incorrect\n");
+		return -1;
+	}
+	kmem_cache_free(cachep, so);
+
+	/*
+	 * Test that object migration is correctly implemented by module
+	 *
+	 * This gives us confidence that if new code correctly enables
+	 * object migration (via correct implementation of migrate and
+	 * isolate functions) then the slub allocator code that does
+	 * object migration is correct.
+	 */
+
+	for (i = 0; i < nr_objs; i++) {
+		so = kmem_cache_alloc(cachep, GFP_KERNEL);
+		if (!so) {
+			pr_err("smo: Failed to alloc object %ld of %d\n",
+			       i, nr_objs);
+			return -1;
+		}
+		so->id = (long)i;
+		list_add_to_objects(so);
+	}
+
+	assert_have_objects(nr_objs, 1);
+
+	i = 0;
+	list_for_each_entry_safe(cur, tmp, &objects, list) {
+		if (i++ % keep == 0)
+			continue;
+
+		list_del(&cur->list);
+		kmem_cache_free(cachep, cur);
+	}
+
+	/* Verify shrink does nothing when migration is not enabled */
+	kmem_cache_shrink(cachep);
+	assert_have_objects(nr_objs, 1);
+
+	/* Now test shrink */
+	kmem_cache_setup_mobility(cachep, NULL, smo_cache_migrate);
+	kmem_cache_shrink(cachep);
+	/*
+	 * Because of how the migrate function deletes and adds objects to
+	 * the objects list we have no way of knowing the order.  We
+	 * want to confirm that we have all the objects after shrink
+	 * that we had before we did the shrink.
+	 */
+	assert_have_objects(nr_objs, keep);
+
+	/* cleanup */
+	list_for_each_entry_safe(cur, tmp, &objects, list) {
+		list_del(&cur->list);
+		kmem_cache_free(cachep, cur);
+	}
+	kmem_cache_shrink(cachep);	/* Remove empty slabs from partial list */
+
+	pr_info("smo: Module tests completed successfully\n");
+	return 0;
+}
+
+/*
+ * struct functions - Map a command to a function pointer.
+ */
+struct functions {
+	char *fn_name;
+	int (*fn_ptr)(int arg0, int arg1);
+} fn_tab[] = {
+	/*
+	 * Because of the way we parse the function table no command
+	 * may have another command as its prefix.
+	 * i.e. this will break: 'foo' and 'foobar'
+	 */
+	{"alloc", smo_alloc_objects},
+	{"free", smo_free_object},
+	{"enable", smo_enable_cache_mobility},
+	{"test", smo_run_module_tests},
+};
+
+#define FN_TAB_SIZE (sizeof(fn_tab) / sizeof(struct functions))
+
+/*
+ * parse_cmd_buf() - Gets command and arguments from the command string.
+ * @buf: Buffer containing the command string.
+ * @cmd: Out parameter, pointer to the command.
+ * @arg1: Out parameter, stores the first argument.
+ * @arg2: Out parameter, stores the second argument.
+ *
+ * Parses and tokenizes the input command buffer.  Stores a pointer to the
+ * command (start of @buf) in @cmd.  Stores the converted long values for
+ * argument 1 and 2 in the respective out parameters @arg1 and @arg2.
+ *
+ * Since arguments are optional, if they are not found the default values are
+ * returned.  In order for the caller to differentiate defaults from arguments
+ * of the same value the number of arguments parsed is returned.
+ *
+ * Return: Number of arguments found.
+ */
+static int parse_cmd_buf(char *buf, char **cmd, long *arg1, long *arg2)
+{
+	int found;
+	char *ptr;
+	int ret;
+
+	*cmd = buf;
+	*arg1 = SMO_CMD_DEFAUT_ARG;
+	*arg2 = SMO_CMD_DEFAUT_ARG;
+	found = 0;
+
+	/* Jump over the command, check if there are any args */
+	ptr = strsep(&buf, " ");
+	if (!ptr || !buf)
+		return found;
+
+	ptr = strsep(&buf, " ");
+	ret = kstrtol(ptr, 10, arg1);
+	if (ret < 0) {
+		pr_err("failed to convert arg, defaulting to %d. (%s)\n",
+		       SMO_CMD_DEFAUT_ARG, ptr);
+		return found;
+	}
+	found++;
+	if (!buf) /* No second arg */
+		return found;
+
+	ptr = strsep(&buf, " ");
+	ret = kstrtol(ptr, 10, arg2);
+	if (ret < 0) {
+		pr_err("failed to convert arg, defaulting to %d. (%s)\n",
+		       SMO_CMD_DEFAUT_ARG, ptr);
+		return found;
+	}
+	found++;
+
+	return found;
+}
+
+/*
+ * call_function() - Calls the function described by @str.
+ * @str: '<cmd> <arg1> [<arg2>]'
+ *
+ * Does table lookup on <cmd>, calls the appropriate function passing
+ * the parsed arguments.  Missing optional args get the default value.
+ */
+static void call_function(char *str)
+{
+	char *cmd;
+	long arg1 = 0;
+	long arg2 = 0;
+	int i;
+
+	if (!str)
+		return;
+
+	(void)parse_cmd_buf(str, &cmd, &arg1, &arg2);
+
+	for (i = 0; i < FN_TAB_SIZE; i++) {
+		char *fn_name = fn_tab[i].fn_name;
+
+		/* parse_cmd_buf() terminated str at the first space */
+		if (strcmp(fn_name, str) == 0) {
+			fn_tab[i].fn_ptr(arg1, arg2);
+			return;	/* All done */
+		}
+	}
+
+	pr_err("failed to call function for cmd: %s\n", str);
+}
+
+/*
+ * smo_callfn_debugfs_write() - debugfs write function.
+ * @file: User file
+ * @ubuf: Userspace buffer
+ * @len: Length of the user space buffer
+ * @off: Offset within the file
+ *
+ * Used for triggering functions by writing command to debugfs file.
+ *
+ *	echo '<cmd> <args>' > /sys/kernel/debug/smo/callfn
+ *
+ * Return: Number of bytes copied if request succeeds,
+ *	   the corresponding error code otherwise.
+ */
+static ssize_t smo_callfn_debugfs_write(struct file *file,
+					const char __user *ubuf,
+					size_t len,
+					loff_t *off)
+{
+	char *kbuf;
+	int nbytes = 0;
+
+	if (*off != 0 || len == 0)
+		return -EINVAL;
+
+	/* One extra byte guarantees NUL termination */
+	kbuf = kzalloc(len + 1, GFP_KERNEL);
+	if (!kbuf)
+		return -ENOMEM;
+
+	nbytes = strncpy_from_user(kbuf, ubuf, len);
+	if (nbytes < 0)
+		goto out;
+
+	if (nbytes > 0 && kbuf[nbytes - 1] == '\n')
+		kbuf[nbytes - 1] = '\0';
+
+	call_function(kbuf); /* Tokenizes kbuf */
+out:
+	kfree(kbuf);
+	return nbytes;
+}
+
+const struct file_operations fops_callfn_debugfs = {
+	.owner = THIS_MODULE,
+	.write = smo_callfn_debugfs_write,
+};
+
+static int __init smo_debugfs_init(void)
+{
+	struct dentry *d;
+
+	smo_debugfs_root = debugfs_create_dir(SMO_DEBUGFS_DIR, NULL);
+	d = debugfs_create_file("callfn", 0200, smo_debugfs_root, NULL,
+				&fops_callfn_debugfs);
+	if (IS_ERR(d))
+		return PTR_ERR(d);
+
+	return 0;
+}
+
+static void __exit smo_debugfs_cleanup(void)
+{
+	debugfs_remove_recursive(smo_debugfs_root);
+}
+
+static int __init smo_cache_init(void)
+{
+	cachep = kmem_cache_create(SMO_CACHE_NAME,
+				   sizeof(struct smo_slub_object),
+				   0, 0, smo_object_ctor);
+	if (!cachep)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void __exit smo_cache_cleanup(void)
+{
+	struct smo_slub_object *cur, *tmp;
+
+	list_for_each_entry_safe(cur, tmp, &objects, list) {
+		list_del(&cur->list);
+		kmem_cache_free(cachep, cur);
+	}
+	kmem_cache_destroy(cachep);
+}
+
+static int __init smo_init(void)
+{
+	int ret;
+
+	ret = smo_cache_init();
+	if (ret) {
+		pr_err("smo: Failed to create cache\n");
+		return ret;
+	}
+	pr_info("smo: Created kmem_cache: %s\n", SMO_CACHE_NAME);
+
+	ret = smo_debugfs_init();
+	if (ret) {
+		pr_err("smo: Failed to init debugfs\n");
+		kmem_cache_destroy(cachep);
+		return ret;
+	}
+	pr_info("smo: Created debugfs directory: /sys/kernel/debug/%s\n",
+		SMO_DEBUGFS_DIR);
+
+	pr_info("smo: Test module loaded\n");
+	return 0;
+}
+module_init(smo_init);
+
+static void __exit smo_exit(void)
+{
+	smo_debugfs_cleanup();
+	smo_cache_cleanup();
+
+	pr_info("smo: Test module removed\n");
+}
+module_exit(smo_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Tobin C. Harding");
+MODULE_DESCRIPTION("SLUB Movable Objects test module.");

From patchwork Fri Mar 8 04:14:22 2019
X-Patchwork-Submitter: "Tobin C.
Harding" X-Patchwork-Id: 10844165
LcfM/zG8UCQB4BDnnxjcwh2CrF/gznx5+eSLVI/LhOKN5XdHMPZgbql7h3PRB16i OgY+AlXXW3gYws6+qBS5uisMVK1t1qs10AT9M5yoKexITPQKh7vgUrPjOksH0MUL Kz1Lsvwv9RUDUA== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedutddrfeelgdeifecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepfdfvohgsihhn ucevrdcujfgrrhguihhnghdfuceothhosghinheskhgvrhhnvghlrdhorhhgqeenucfkph epuddvgedrudeiledrhedrudehkeenucfrrghrrghmpehmrghilhhfrhhomhepthhosghi nheskhgvrhhnvghlrdhorhhgnecuvehluhhsthgvrhfuihiivgeple X-ME-Proxy: Received: from eros.localdomain (124-169-5-158.dyn.iinet.net.au [124.169.5.158]) by mail.messagingengine.com (Postfix) with ESMTPA id 82352E4548; Thu, 7 Mar 2019 23:15:35 -0500 (EST) From: "Tobin C. Harding" To: Andrew Morton Cc: "Tobin C. Harding" , Christopher Lameter , Pekka Enberg , Matthew Wilcox , Tycho Andersen , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [RFC 11/15] tools/testing/slab: Add object migration test suite Date: Fri, 8 Mar 2019 15:14:22 +1100 Message-Id: <20190308041426.16654-12-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190308041426.16654-1-tobin@kernel.org> References: <20190308041426.16654-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP We just added a module that enables testing the SLUB allocators ability to defrag/shrink caches via movable objects. Tests are better when they are automated. Add automated testing via a python script for SLUB movable objects. Example output: $ cd path/to/linux/tools/testing/slab $ /slub_defrag.py Please run script as root $ sudo ./slub_defrag.py $ sudo ./slub_defrag.py --debug Loading module ... 
  Slab cache smo_test created
  Objects per slab: 20
  Running sanity checks ...

  Running module stress test (see dmesg for additional test output) ...
  Removing module slub_defrag ...

  Loading module ...
  Slab cache smo_test created

  Running test non-movable ...
  testing slab 'smo_test' prior to enabling movable objects ...
  verified non-movable slabs are NOT shrinkable

  Running test movable ...
  testing slab 'smo_test' after enabling movable objects ...
  verified movable slabs are shrinkable

  Removing module slub_defrag ...

Signed-off-by: Tobin C. Harding
---
 tools/testing/slab/slub_defrag.c  |   1 +
 tools/testing/slab/slub_defrag.py | 451 ++++++++++++++++++++++++++++++
 2 files changed, 452 insertions(+)
 create mode 100755 tools/testing/slab/slub_defrag.py

diff --git a/tools/testing/slab/slub_defrag.c b/tools/testing/slab/slub_defrag.c
index 502ddd8a67e8..206545d62021 100644
--- a/tools/testing/slab/slub_defrag.c
+++ b/tools/testing/slab/slub_defrag.c
@@ -337,6 +337,7 @@ static int smo_run_module_tests(int nr_objs, int keep)
 /*
  * struct functions() - Map command to a function pointer.
+ * If you update this please update the documentation in slub_defrag.py
  */
 struct functions {
 	char *fn_name;

diff --git a/tools/testing/slab/slub_defrag.py b/tools/testing/slab/slub_defrag.py
new file mode 100755
index 000000000000..41747c0db39b
--- /dev/null
+++ b/tools/testing/slab/slub_defrag.py
@@ -0,0 +1,451 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+import subprocess
+import sys
+from os import path
+
+# SLUB Movable Objects test suite.
+#
+# Requirements:
+#  - CONFIG_SLUB=y
+#  - CONFIG_SLUB_DEBUG=y
+#  - The slub_defrag module in this directory.
+
+# Test SMO using a kernel module that enables triggering arbitrary
+# kernel code from userspace via a debugfs file.
+#
+# Module code is in ./slub_defrag.c; the functionality is as follows:
+#
+#  - Creates debugfs file /sys/kernel/debugfs/smo/callfn
+#  - Writes to 'callfn' are parsed as a command string and the function
+#    associated with the command is called.
+#  - Defines 4 commands (all commands operate on the smo_test cache):
+#     - 'test': Runs module stress tests.
+#     - 'alloc N': Allocates N slub objects
+#     - 'free N POS': Frees N objects starting at POS (see below)
+#     - 'enable': Enables SLUB Movable Objects
+#
+# The module maintains a list of allocated objects.  Allocation adds
+# objects to the tail of the list.  Freeing frees from the head of the
+# list.  This has the effect of creating free slots in the slab.  For
+# finer grained control over where in the cache slots are freed, the
+# POS (position) argument may be used.

+# The main() function is reasonably readable; the test suite does the
+# following:
+#
+# 1. Runs the module stress tests.
+# 2. Tests the cache without movable objects enabled.
+#    - Creates multiple partial slabs as explained above.
+#    - Verifies that partial slabs are _not_ removed by shrink (see below).
+# 3. Tests the cache with movable objects enabled.
+#    - Creates multiple partial slabs as explained above.
+#    - Verifies that partial slabs _are_ removed by shrink (see below).

+# The sysfs file /sys/kernel/slab/<cache>/shrink enables calling the
+# function kmem_cache_shrink() (see mm/slab_common.c and mm/slub.c).
+# Shrinking a cache attempts to consolidate all partial slabs by moving
+# objects if object migration is enabled for the cache; otherwise
+# shrinking a cache simply re-orders the partial list so that the most
+# densely populated slabs are at the head of the list.

+# Enable/disable debugging output (also enabled via -d | --debug).
+debug = False
+
+# Used in debug messages and when running `insmod`.
+MODULE_NAME = "slub_defrag"
+
+# Slab cache created by the test module.
+CACHE_NAME = "smo_test"
+
+# Set by get_slab_config()
+objects_per_slab = 0
+pages_per_slab = 0
+debugfs_mounted = False         # Set to true if we mount debugfs.
+
+
+def eprint(*args, **kwargs):
+    print(*args, file=sys.stderr, **kwargs)
+
+
+def dprint(*args, **kwargs):
+    if debug:
+        print(*args, file=sys.stderr, **kwargs)
+
+
+def run_shell(cmd):
+    return subprocess.call([cmd], shell=True)
+
+
+def run_shell_get_stdout(cmd):
+    return subprocess.check_output([cmd], shell=True)
+
+
+def assert_root():
+    user = run_shell_get_stdout('whoami')
+    if user != b'root\n':
+        eprint("Please run script as root")
+        sys.exit(1)
+
+
+def mount_debugfs():
+    mounted = False
+
+    # Check if debugfs is mounted at a known mount point.
+    ret = run_shell('mount -l | grep /sys/kernel/debug > /dev/null 2>&1')
+    if ret != 0:
+        run_shell('mount -t debugfs none /sys/kernel/debug/')
+        mounted = True
+        dprint("Mounted debugfs on /sys/kernel/debug")
+
+    return mounted
+
+
+def umount_debugfs():
+    dprint("Un-mounting debugfs")
+    run_shell('umount /sys/kernel/debug')
+
+
+def load_module():
+    """Loads the test module.
+
+    We need a clean slab state to start with so module must
+    be loaded by the test suite.
+    """
+    ret = run_shell('lsmod | grep %s > /dev/null' % MODULE_NAME)
+    if ret == 0:
+        eprint("Please unload slub_defrag module before running test suite")
+        return -1
+
+    dprint('Loading module ...')
+    ret = run_shell('insmod %s.ko' % MODULE_NAME)
+    if ret != 0:                # ret == 1 on error
+        return -1
+
+    dprint("Slab cache %s created" % CACHE_NAME)
+    return 0
+
+
+def unload_module():
+    ret = run_shell('lsmod | grep %s > /dev/null' % MODULE_NAME)
+    if ret == 0:
+        dprint('Removing module %s ...'
+               % MODULE_NAME)
+        run_shell('rmmod %s > /dev/null 2>&1' % MODULE_NAME)
+
+
+def get_sysfs_value(filename):
+    """
+    Parse slab sysfs files (single line: '20 N0=20')
+    """
+    path = '/sys/kernel/slab/smo_test/%s' % filename
+    f = open(path, "r")
+    s = f.readline()
+    tokens = s.split(" ")
+
+    return int(tokens[0])
+
+
+def get_nr_objects_active():
+    return get_sysfs_value('objects')
+
+
+def get_nr_objects_total():
+    return get_sysfs_value('total_objects')
+
+
+def get_nr_slabs_total():
+    return get_sysfs_value('slabs')
+
+
+def get_nr_slabs_partial():
+    return get_sysfs_value('partial')
+
+
+def get_nr_slabs_full():
+    return get_nr_slabs_total() - get_nr_slabs_partial()
+
+
+def get_slab_config():
+    """Get relevant information from sysfs."""
+    global objects_per_slab
+
+    objects_per_slab = get_sysfs_value('objs_per_slab')
+    if objects_per_slab < 0:
+        return -1
+
+    dprint("Objects per slab: %d" % objects_per_slab)
+    return 0
+
+
+def verify_state(nr_objects_active, nr_objects_total,
+                 nr_slabs_partial, nr_slabs_full, nr_slabs_total, msg=''):
+    err = 0
+    got_nr_objects_active = get_nr_objects_active()
+    got_nr_objects_total = get_nr_objects_total()
+    got_nr_slabs_partial = get_nr_slabs_partial()
+    got_nr_slabs_full = get_nr_slabs_full()
+    got_nr_slabs_total = get_nr_slabs_total()
+
+    if got_nr_objects_active != nr_objects_active:
+        err = -1
+
+    if got_nr_objects_total != nr_objects_total:
+        err = -2
+
+    if got_nr_slabs_partial != nr_slabs_partial:
+        err = -3
+
+    if got_nr_slabs_full != nr_slabs_full:
+        err = -4
+
+    if got_nr_slabs_total != nr_slabs_total:
+        err = -5
+
+    if err != 0:
+        dprint("Verify state: %s" % msg)
+        dprint(" what\t\t\twant\tgot")
+        dprint("-----------------------------------------")
+        dprint(" %s\t%d\t%d" % ('nr_objects_active', nr_objects_active, got_nr_objects_active))
+        dprint(" %s\t%d\t%d" % ('nr_objects_total', nr_objects_total, got_nr_objects_total))
+        dprint(" %s\t%d\t%d" % ('nr_slabs_partial', nr_slabs_partial, got_nr_slabs_partial))
+        dprint(" %s\t\t%d\t%d" % ('nr_slabs_full', nr_slabs_full, got_nr_slabs_full))
+        dprint(" %s\t%d\t%d\n" % ('nr_slabs_total', nr_slabs_total, got_nr_slabs_total))
+
+    return err
+
+
+def exec_via_sysfs(command):
+    ret = run_shell('echo %s > /sys/kernel/debug/smo/callfn' % command)
+    if ret != 0:
+        eprint("Failed to echo command to sysfs: %s" % command)
+
+    return ret
+
+
+def enable_movable_objects():
+    return exec_via_sysfs('enable')
+
+
+def alloc(n):
+    exec_via_sysfs("alloc %d" % n)
+
+
+def free(n, pos=0):
+    exec_via_sysfs('free %d %d' % (n, pos))
+
+
+def shrink():
+    ret = run_shell('slabinfo smo_test -s')
+    if ret != 0:
+        eprint("Failed to execute slabinfo -s")
+
+
+def sanity_checks():
+    # Verify everything is 0 to start with.
+    return verify_state(0, 0, 0, 0, 0, "sanity check")
+
+
+def test_non_movable():
+    one_over = objects_per_slab + 1
+
+    dprint("testing slab 'smo_test' prior to enabling movable objects ...")
+
+    alloc(one_over)
+
+    objects_active = one_over
+    objects_total = objects_per_slab * 2
+    slabs_partial = 1
+    slabs_full = 1
+    slabs_total = 2
+    ret = verify_state(objects_active, objects_total,
+                       slabs_partial, slabs_full, slabs_total,
+                       "non-movable: initial allocation")
+    if ret != 0:
+        eprint("test_non_movable: failed to verify initial state")
+        return -1
+
+    # Free object from first slot of first slab.
+    free(1)
+    objects_active = one_over - 1
+    objects_total = objects_per_slab * 2
+    slabs_partial = 2
+    slabs_full = 0
+    slabs_total = 2
+    ret = verify_state(objects_active, objects_total,
+                       slabs_partial, slabs_full, slabs_total,
+                       "non-movable: after free")
+    if ret != 0:
+        eprint("test_non_movable: failed to verify after free")
+        return -1
+
+    # Non-movable cache, shrink should have no effect.
+    shrink()
+    ret = verify_state(objects_active, objects_total,
+                       slabs_partial, slabs_full, slabs_total,
+                       "non-movable: after shrink")
+    if ret != 0:
+        eprint("test_non_movable: failed to verify after shrink")
+        return -1
+
+    # Cleanup
+    free(objects_per_slab)
+    shrink()
+
+    dprint("verified non-movable slabs are NOT shrinkable")
+    return 0
+
+
+def test_movable():
+    one_over = objects_per_slab + 1
+
+    dprint("testing slab 'smo_test' after enabling movable objects ...")
+
+    alloc(one_over)
+
+    objects_active = one_over
+    objects_total = objects_per_slab * 2
+    slabs_partial = 1
+    slabs_full = 1
+    slabs_total = 2
+    ret = verify_state(objects_active, objects_total,
+                       slabs_partial, slabs_full, slabs_total,
+                       "movable: initial allocation")
+    if ret != 0:
+        eprint("test_movable: failed to verify initial state")
+        return -1
+
+    # Free object from first slot of first slab.
+    free(1)
+    objects_active = one_over - 1
+    objects_total = objects_per_slab * 2
+    slabs_partial = 2
+    slabs_full = 0
+    slabs_total = 2
+    ret = verify_state(objects_active, objects_total,
+                       slabs_partial, slabs_full, slabs_total,
+                       "movable: after free")
+    if ret != 0:
+        eprint("test_movable: failed to verify after free")
+        return -1
+
+    # Movable cache, shrink should move objects and free slab.
+    shrink()
+    objects_active = one_over - 1
+    objects_total = objects_per_slab * 1
+    slabs_partial = 0
+    slabs_full = 1
+    slabs_total = 1
+    ret = verify_state(objects_active, objects_total,
+                       slabs_partial, slabs_full, slabs_total,
+                       "movable: after shrink")
+    if ret != 0:
+        eprint("test_movable: failed to verify after shrink")
+        return -1
+
+    # Cleanup
+    free(objects_per_slab)
+    shrink()
+
+    dprint("verified movable slabs are shrinkable")
+    return 0
+
+
+def dprint_start_test(test):
+    dprint("Running %s ..."
+           % test)
+
+
+def dprint_done():
+    dprint("")
+
+
+def run_test(fn, desc):
+    dprint_start_test(desc)
+    ret = fn()
+    if ret < 0:
+        fail_test(desc)
+    dprint_done()
+
+
+# Load and unload the module for this test to ensure clean state.
+def run_module_stress_test():
+    dprint("Running module stress test (see dmesg for additional test output) ...")
+
+    unload_module()
+    ret = load_module()
+    if ret < 0:
+        cleanup_and_exit(ret)
+
+    exec_via_sysfs("test")
+
+    unload_module()
+
+    dprint()
+
+
+def fail_test(msg):
+    eprint("\nFAIL: test failed: '%s' ... aborting\n" % msg)
+    cleanup_and_exit(1)
+
+
+def display_help():
+    print("Usage: %s [OPTIONS]\n" % path.basename(sys.argv[0]))
+    print("\tRuns defrag test suite (a.k.a. SLUB Movable Objects)\n")
+    print("OPTIONS:")
+    print("\t-d | --debug  Enable verbose debug output")
+    print("\t-h | --help   Print this help and exit")
+
+
+def cleanup_and_exit(return_code):
+    global debugfs_mounted
+
+    if debugfs_mounted:
+        umount_debugfs()
+
+    unload_module()
+
+    sys.exit(return_code)
+
+
+def main():
+    global debug, debugfs_mounted
+
+    if len(sys.argv) > 1:
+        if sys.argv[1] == '-h' or sys.argv[1] == '--help':
+            display_help()
+            sys.exit(0)
+
+        if sys.argv[1] == '-d' or sys.argv[1] == '--debug':
+            debug = True
+
+    assert_root()
+
+    # Use cleanup_and_exit() instead of sys.exit() after mounting debugfs.
+    debugfs_mounted = mount_debugfs()
+
+    # Loads and unloads the module.
+    run_module_stress_test()
+
+    ret = load_module()
+    if ret < 0:
+        cleanup_and_exit(ret)
+
+    ret = get_slab_config()
+    if ret != 0:
+        fail_test("get slab config details")
+
+    run_test(sanity_checks, "sanity checks")
+
+    run_test(test_non_movable, "test non-movable")
+
+    ret = enable_movable_objects()
+    if ret != 0:
+        fail_test("enable movable objects")
+
+    run_test(test_movable, "test movable")
+
+    cleanup_and_exit(0)
+
+
+if __name__ == "__main__":
+    main()

From patchwork Fri Mar 8 04:14:23 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844167
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg, Matthew Wilcox,
    Tycho Andersen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 12/15] xarray: Implement migration function for objects
Date: Fri, 8 Mar 2019 15:14:23 +1100
Message-Id: <20190308041426.16654-13-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

Implement functions to migrate objects.  This is based on initial code
by Matthew Wilcox and was modified to work with slab object migration.

Co-developed-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
---
 lib/radix-tree.c | 13 +++++++++++++
 lib/xarray.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 14d51548bea6..9412c2853726 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -1613,6 +1613,17 @@ static int radix_tree_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
+extern void xa_object_migrate(void *tree_node, int numa_node);
+
+static void radix_tree_migrate(struct kmem_cache *s, void **objects, int nr,
+			       int node, void *private)
+{
+	int i;
+
+	for (i = 0; i < nr; i++)
+		xa_object_migrate(objects[i], node);
+}
+
 void __init radix_tree_init(void)
 {
 	int ret;
@@ -1627,4 +1638,6 @@ void __init radix_tree_init(void)
 	ret = cpuhp_setup_state_nocalls(CPUHP_RADIX_DEAD, "lib/radix:dead",
 					NULL, radix_tree_cpu_dead);
 	WARN_ON(ret < 0);
+	kmem_cache_setup_mobility(radix_tree_node_cachep, NULL,
+				  radix_tree_migrate);
 }

diff --git a/lib/xarray.c b/lib/xarray.c
index 81c3171ddde9..4f6f17c87769 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1950,6 +1950,50 @@ void xa_destroy(struct xarray *xa)
 }
 EXPORT_SYMBOL(xa_destroy);
 
+void xa_object_migrate(struct xa_node *node, int numa_node)
+{
+	struct xarray *xa = READ_ONCE(node->array);
+	void __rcu **slot;
+	struct xa_node *new_node;
+	int i;
+
+	/* Freed or not yet in tree then skip */
+	if (!xa || xa == XA_FREE_MARK)
+		return;
+
+	new_node = kmem_cache_alloc_node(radix_tree_node_cachep,
+					 GFP_KERNEL, numa_node);
+
+	xa_lock_irq(xa);
+
+	/* Check again..... */
+	if (xa != node->array || !list_empty(&node->private_list)) {
+		node = new_node;
+		goto unlock;
+	}
+
+	memcpy(new_node, node, sizeof(struct xa_node));
+
+	/* Move pointers to new node */
+	INIT_LIST_HEAD(&new_node->private_list);
+	for (i = 0; i < XA_CHUNK_SIZE; i++) {
+		void *x = xa_entry_locked(xa, new_node, i);
+
+		if (xa_is_node(x))
+			rcu_assign_pointer(xa_to_node(x)->parent, new_node);
+	}
+	if (!new_node->parent)
+		slot = &xa->xa_head;
+	else
+		slot = &xa_parent_locked(xa, new_node)->slots[new_node->offset];
+	rcu_assign_pointer(*slot, xa_mk_node(new_node));
+
+unlock:
+	xa_unlock_irq(xa);
+	xa_node_free(node);
+	rcu_barrier();
+}
+
 #ifdef XA_DEBUG
 void xa_dump_node(const struct xa_node *node)
 {

From patchwork Fri Mar 8 04:14:24 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844169
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 13/15] tools/testing/slab: Add XArray movable objects tests
Date: Fri, 8 Mar 2019 15:14:24 +1100
Message-Id: <20190308041426.16654-14-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

We just implemented movable objects for the XArray.  Let's test it
in-tree.

Add a test module for the XArray's movable objects implementation.

The functionality of the XArray Slab Movable Objects implementation can
usually be seen simply by running `slabinfo` on a live machine, since
the radix tree is typically in use on a running machine and will have
partial slabs.  For repeated testing we can use the test module to
simulate a workload on the XArray, then use `slabinfo` to verify that
object migration is functioning.  If testing on a freshly spun-up VM
(low radix tree workload) it may be necessary to load/unload the module
a number of times to create partial slabs.
Example test session
--------------------

Relevant /proc/slabinfo column headers:

  name

Prior to testing, slabinfo report for radix_tree_node:

 # slabinfo radix_tree_node --report

 Slabcache: radix_tree_node   Aliases:  0 Order :  2 Objects: 8352
 ** Reclaim accounting active
 ** Defragmentation at 30%

 Sizes (bytes)     Slabs              Debug                Memory
 ------------------------------------------------------------------------
 Object :     576  Total  :     497   Sanity Checks : On   Total: 8142848
 SlabObj:     912  Full   :     473   Redzoning     : On   Used : 4810752
 SlabSiz:   16384  Partial:      24   Poisoning     : On   Loss : 3332096
 Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig: 2806272
 Align  :       8  Objects:      17   Tracing       : Off  Lpadd:  437360

Here you can see the kernel was built with Slab Movable Objects enabled
for the XArray (the XArray uses the radix tree below the surface).

After inserting the test module (note we have triggered allocation of a
number of radix tree nodes, increasing the object count but decreasing
the number of partial slabs):

 # slabinfo radix_tree_node --report

 Slabcache: radix_tree_node   Aliases:  0 Order :  2 Objects: 8442
 ** Reclaim accounting active
 ** Defragmentation at 30%

 Sizes (bytes)     Slabs              Debug                Memory
 ------------------------------------------------------------------------
 Object :     576  Total  :     499   Sanity Checks : On   Total: 8175616
 SlabObj:     912  Full   :     484   Redzoning     : On   Used : 4862592
 SlabSiz:   16384  Partial:      15   Poisoning     : On   Loss : 3313024
 Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig: 2836512
 Align  :       8  Objects:      17   Tracing       : Off  Lpadd:  439120

Now we can shrink the radix_tree_node cache:

 # slabinfo radix_tree_node --shrink
 # slabinfo radix_tree_node --report

 Slabcache: radix_tree_node   Aliases:  0 Order :  2 Objects: 8515
 ** Reclaim accounting active
 ** Defragmentation at 30%

 Sizes (bytes)     Slabs              Debug                Memory
 ------------------------------------------------------------------------
 Object :     576  Total  :     501   Sanity Checks : On   Total: 8208384
 SlabObj:     912  Full   :     500   Redzoning     : On   Used : 4904640
 SlabSiz:   16384  Partial:       1   Poisoning     : On   Loss : 3303744
 Loss   :     336  CpuSlab:       0   Tracking      : On   Lalig: 2861040
 Align  :       8  Objects:      17   Tracing       : Off  Lpadd:  440880

Note the single remaining partial slab.

Signed-off-by: Tobin C. Harding
---
 tools/testing/slab/Makefile             |   2 +-
 tools/testing/slab/slub_defrag_xarray.c | 211 ++++++++++++++++++++++++
 2 files changed, 212 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/slab/slub_defrag_xarray.c

diff --git a/tools/testing/slab/Makefile b/tools/testing/slab/Makefile
index 440c2e3e356f..44c18d9a4d52 100644
--- a/tools/testing/slab/Makefile
+++ b/tools/testing/slab/Makefile
@@ -1,4 +1,4 @@
-obj-m += slub_defrag.o
+obj-m += slub_defrag.o slub_defrag_xarray.o
 
 KTREE=../../..
diff --git a/tools/testing/slab/slub_defrag_xarray.c b/tools/testing/slab/slub_defrag_xarray.c
new file mode 100644
index 000000000000..06444a280820
--- /dev/null
+++ b/tools/testing/slab/slub_defrag_xarray.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define SMOX_CACHE_NAME "smox_test"
+static struct kmem_cache *cachep;
+
+/*
+ * Declare XArrays globally so we can clean them up on module unload.
+ */
+
+/* Used by test_smo_xarray()*/
+DEFINE_XARRAY(things);
+
+/* Thing to store pointers to in the XArray */
+struct smox_thing {
+	long id;
+};
+
+/* It's up to the caller to ensure id is unique */
+static struct smox_thing *alloc_thing(int id)
+{
+	struct smox_thing *thing;
+
+	thing = kmem_cache_alloc(cachep, GFP_KERNEL);
+	if (!thing)
+		return ERR_PTR(-ENOMEM);
+
+	thing->id = id;
+	return thing;
+}
+
+/**
+ * smox_object_ctor() - SMO object constructor function.
+ * @ptr: Pointer to memory where the object should be constructed.
+ */
+void smox_object_ctor(void *ptr)
+{
+	struct smox_thing *thing = ptr;
+
+	thing->id = -1;
+}
+
+/**
+ * smox_cache_migrate() - kmem_cache migrate function.
+ * @cp: kmem_cache pointer.
+ * @objs: Array of pointers to objects to migrate.
+ * @size: Number of objects in @objs.
+ * @node: NUMA node where the object should be allocated.
+ * @private: Pointer returned by kmem_cache_isolate_func().
+ */
+void smox_cache_migrate(struct kmem_cache *cp, void **objs, int size,
+			int node, void *private)
+{
+	struct smox_thing **ptrs = (struct smox_thing **)objs;
+	struct smox_thing *old, *new;
+	struct smox_thing *thing;
+	unsigned long index;
+	void *entry;
+	int i;
+
+	for (i = 0; i < size; i++) {
+		old = ptrs[i];
+
+		new = kmem_cache_alloc(cachep, GFP_KERNEL);
+		if (!new) {
+			pr_debug("kmem_cache_alloc failed\n");
+			return;
+		}
+
+		new->id = old->id;
+
+		/* Update reference the brain dead way */
+		xa_for_each(&things, index, thing) {
+			if (thing == old) {
+				entry = xa_store(&things, index, new, GFP_KERNEL);
+				if (entry != old) {
+					pr_err("failed to exchange new/old\n");
+					return;
+				}
+			}
+		}
+		kmem_cache_free(cachep, old);
+	}
+}
+
+/*
+ * test_smo_xarray() - Run some tests using an XArray.
+ */
+static int test_smo_xarray(void)
+{
+	const int keep = 3;	/* Free 2 out of every 3 items */
+	const int nr_items = 10000;
+	struct smox_thing *thing;
+	unsigned long index;
+	void *entry;
+	int expected;
+	int i;
+
+	/*
+	 * Populate XArray, this adds to the radix_tree_node cache as
+	 * well as the smox_test cache.
+	 */
+	for (i = 0; i < nr_items; i++) {
+		thing = alloc_thing(i);
+		entry = xa_store(&things, i, thing, GFP_KERNEL);
+		if (xa_is_err(entry)) {
+			pr_err("smox: failed to allocate entry: %d\n", i);
+			return -ENOMEM;
+		}
+	}
+
+	/* Now free items, putting holes in both caches */
+	for (i = 0; i < nr_items; i++) {
+		if (i % keep == 0)
+			continue;
+
+		thing = xa_erase(&things, i);
+		if (xa_is_err(thing))
+			pr_err("smox: error erasing entry: %d\n", i);
+		kmem_cache_free(cachep, thing);
+	}
+
+	expected = 0;
+	xa_for_each(&things, index, thing) {
+		if (thing->id != expected || index != expected) {
+			pr_err("smox: error; got %ld want %d at %ld\n",
+			       thing->id, expected, index);
+			return -1;
+		}
+		expected += keep;
+	}
+
+	/*
+	 * Leave caches sparsely allocated.  Shrink caches manually with:
+	 *
+	 *   slabinfo radix_tree_node -s
+	 *   slabinfo smox_test -s
+	 */
+
+	return 0;
+}
+
+static int __init smox_cache_init(void)
+{
+	cachep = kmem_cache_create(SMOX_CACHE_NAME,
+				   sizeof(struct smox_thing),
+				   0, 0, smox_object_ctor);
+	if (!cachep)
+		return -1;
+
+	return 0;
+}
+
+static void __exit smox_cache_cleanup(void)
+{
+	struct smox_thing *thing;
+	unsigned long i;
+
+	xa_for_each(&things, i, thing) {
+		kmem_cache_free(cachep, thing);
+	}
+	xa_destroy(&things);
+	kmem_cache_destroy(cachep);
+}
+
+static int __init smox_init(void)
+{
+	int ret;
+
+	ret = smox_cache_init();
+	if (ret) {
+		pr_err("smo_xarray: failed to create cache\n");
+		return ret;
+	}
+	pr_info("smo_xarray: created kmem_cache: %s\n", SMOX_CACHE_NAME);
+
+	kmem_cache_setup_mobility(cachep, NULL, smox_cache_migrate);
+	pr_info("smo_xarray: kmem_cache %s defrag enabled\n", SMOX_CACHE_NAME);
+
+	/*
+	 * Running this test consumes memory unless you shrink the
+	 * radix_tree_node cache manually with `slabinfo`.
+	 */
+	ret = test_smo_xarray();
+	if (ret)
+		pr_warn("test_smo_xarray failed: %d\n", ret);
+
+	pr_info("smo_xarray: module loaded successfully\n");
+	return 0;
+}
+module_init(smox_init);
+
+static void __exit smox_exit(void)
+{
+	smox_cache_cleanup();
+
+	pr_info("smo_xarray: module removed\n");
+}
+module_exit(smox_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Tobin C. Harding");
+MODULE_DESCRIPTION("SMO XArray test module.");

From patchwork Fri Mar 8 04:14:25 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844171
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg, Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 14/15] slub: Enable move _all_ objects to node
Date: Fri, 8 Mar 2019 15:14:25 +1100
Message-Id: <20190308041426.16654-15-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

We have just implemented Slab Movable Objects (object migration).
Currently object migration is used to defragment a cache.  On NUMA
systems it would be nice to be able to move objects and control the
source and destination nodes.

Add CONFIG_SMO_NODE to guard this feature.  CONFIG_SMO_NODE depends on
CONFIG_SLUB_DEBUG because we use the full list.  Leave it like this for
the RFC so the patch is less cluttered to review; separate the full
list out of CONFIG_SLUB_DEBUG before doing a PATCH version.

Implement moving all objects (including those in full slabs) to a
specific node.  Expose this functionality to userspace via a sysfs
entry.

Add sysfs entry:

   /sys/kernel/slab/<cache>/move

With this users get access to the following functionality:

 - Move all objects to specified node.
     echo "N1" > move

 - Move all objects from specified node to other specified node
   (from N1 -> to N2):

     echo "N1 N2" > move

This also enables shrinking slabs on a specific node:

   echo "N1 N1" > move

Signed-off-by: Tobin C. Harding
---
 mm/Kconfig |   7 ++
 mm/slub.c  | 249 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 256 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 25c71eb8a7db..47040d939f3b 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -258,6 +258,13 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION
 config ARCH_ENABLE_THP_MIGRATION
 	bool
 
+config SMO_NODE
+       bool "Enable per node control of Slab Movable Objects"
+       depends on SLUB && SYSFS
+       select SLUB_DEBUG
+       help
+         On NUMA systems enable moving objects to and from a specified node.
+
 config PHYS_ADDR_T_64BIT
 	def_bool 64BIT
 
diff --git a/mm/slub.c b/mm/slub.c
index 53dd4cb5b5a4..ac9b8f592e10 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4344,6 +4344,106 @@ static void __move(struct page *page, void *scratch, int node)
 	s->migrate(s, vector, count, node, private);
 }
 
+#ifdef CONFIG_SMO_NODE
+/*
+ * kmem_cache_move() - Move _all_ slabs from node to target node.
+ * @s: The cache we are working on.
+ * @node: The node to move objects away from.
+ * @target_node: The node to move objects on to.
+ *
+ * Attempts to move all objects (partial slabs and full slabs) to target
+ * node.
+ *
+ * Context: Takes the list_lock.
+ * Return: The number of slabs remaining on node.
+ */
+static unsigned long kmem_cache_move(struct kmem_cache *s,
+				     int node, int target_node)
+{
+	struct kmem_cache_node *n = get_node(s, node);
+	LIST_HEAD(move_list);
+	struct page *page, *page2;
+	unsigned long flags;
+	void **scratch;
+
+	if (!s->migrate) {
+		pr_warn("%s SMO not enabled, cannot move objects\n", s->name);
+		goto out;
+	}
+
+	scratch = alloc_scratch(s);
+	if (!scratch)
+		goto out;
+
+	spin_lock_irqsave(&n->list_lock, flags);
+
+	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+		if (!slab_trylock(page))
+			/* Busy slab. Get out of the way */
+			continue;
+
+		if (page->inuse) {
+			list_move(&page->lru, &move_list);
+			/* Stop page being considered for allocations */
+			n->nr_partial--;
+			page->frozen = 1;
+
+			slab_unlock(page);
+		} else {	/* Empty slab page */
+			list_del(&page->lru);
+			n->nr_partial--;
+			slab_unlock(page);
+			discard_slab(s, page);
+		}
+	}
+	list_for_each_entry_safe(page, page2, &n->full, lru) {
+		if (!slab_trylock(page))
+			continue;
+
+		list_move(&page->lru, &move_list);
+		page->frozen = 1;
+		slab_unlock(page);
+	}
+
+	spin_unlock_irqrestore(&n->list_lock, flags);
+
+	list_for_each_entry(page, &move_list, lru) {
+		if (page->inuse)
+			__move(page, scratch, target_node);
+	}
+	kfree(scratch);
+
+	/* Bail here to save taking the list_lock */
+	if (list_empty(&move_list))
+		goto out;
+
+	/* Inspect results and dispose of pages */
+	spin_lock_irqsave(&n->list_lock, flags);
+	list_for_each_entry_safe(page, page2, &move_list, lru) {
+		list_del(&page->lru);
+		slab_lock(page);
+		page->frozen = 0;
+
+		if (page->inuse) {
+			if (page->inuse == page->objects) {
+				list_add(&page->lru, &n->full);
+				slab_unlock(page);
+			} else {
+				n->nr_partial++;
+				list_add_tail(&page->lru, &n->partial);
+				slab_unlock(page);
+			}
+		} else {
+			slab_unlock(page);
+			discard_slab(s, page);
+		}
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+out:
+	return atomic_long_read(&n->nr_slabs);
+}
+#endif	/* CONFIG_SMO_NODE */
+
 /*
  * __defrag() - Defragment node.
 * @s: cache we are working on.
@@ -4460,6 +4560,32 @@ static unsigned long __defrag(struct kmem_cache *s, int node, int target_node,
 	return n->nr_partial;
 }
 
+#ifdef CONFIG_SMO_NODE
+/*
+ * __move_all_objects_to() - Move all slab objects to node.
+ * @s: The cache we are working on.
+ * @node: The target node to move objects to.
+ *
+ * Attempt to move all slab objects from all nodes to @node.
+ *
+ * Return: The total number of slabs left on emptied nodes.
+ */
+static unsigned long __move_all_objects_to(struct kmem_cache *s, int node)
+{
+	unsigned long left = 0;
+	int nid;
+
+	for_each_node_state(nid, N_NORMAL_MEMORY) {
+		if (nid == node)
+			continue;
+
+		left += kmem_cache_move(s, nid, node);
+	}
+
+	return left;
+}
+#endif
+
 /**
  * kmem_cache_defrag() - Defrag slab caches.
  * @node: The node to defrag or -1 for all nodes.
@@ -5592,6 +5718,126 @@ static ssize_t shrink_store(struct kmem_cache *s,
 }
 SLAB_ATTR(shrink);
 
+#ifdef CONFIG_SMO_NODE
+static ssize_t move_show(struct kmem_cache *s, char *buf)
+{
+	return 0;
+}
+
+/*
+ * parse_move_store_input() - Parse buf getting integer arguments.
+ * @buf: Buffer to parse.
+ * @length: Length of @buf.
+ * @arg0: Return parameter, first argument.
+ * @arg1: Return parameter, second argument.
+ *
+ * Parses the input from user write to sysfs file 'move'.  Input string
+ * should contain either one or two node specifiers of form Nx where x
+ * is an integer specifying the NUMA node ID.  'N' or 'n' may be used.
+ * n/N may be omitted.
+ *
+ * e.g.
+ *	echo 'N1' > /sys/kernel/slab/cache/move
+ * or
+ *	echo 'N0 N2' > /sys/kernel/slab/cache/move
+ *
+ * Regex matching accepted forms: '[nN]?[0-9]( [nN]?[0-9])?'
+ *
+ * FIXME: This is really fragile.  Input must be exactly correct,
+ * spurious whitespace causes parse errors.
+ *
+ * Return: 0 if an argument was successfully converted, or an error code.
+ */
+static ssize_t parse_move_store_input(const char *buf, size_t length,
+				      long *arg0, long *arg1)
+{
+	char *s, *save, *ptr;
+	int ret = 0;
+
+	if (!buf)
+		return -EINVAL;
+
+	s = kstrdup(buf, GFP_KERNEL);
+	if (!s)
+		return -ENOMEM;
+	save = s;
+
+	if (s[length - 1] == '\n') {
+		s[length - 1] = '\0';
+		length--;
+	}
+
+	ptr = strsep(&s, " ");
+	if (!ptr || strcmp(ptr, "") == 0) {
+		ret = 0;
+		goto out;
+	}
+
+	if (*ptr == 'N' || *ptr == 'n')
+		ptr++;
+	ret = kstrtol(ptr, 10, arg0);
+	if (ret < 0)
+		goto out;
+
+	if (s) {
+		if (*s == 'N' || *s == 'n')
+			s++;
+		ret = kstrtol(s, 10, arg1);
+		if (ret < 0)
+			goto out;
+	}
+
+	ret = 0;
+out:
+	kfree(save);
+	return ret;
+}
+
+static bool is_valid_node(int node)
+{
+	int nid;
+
+	for_each_node_state(nid, N_NORMAL_MEMORY) {
+		if (nid == node)
+			return true;
+	}
+	return false;
+}
+
+/*
+ * move_store() - Move objects between nodes.
+ * @s: The cache we are working on.
+ * @buf: String received.
+ * @length: Length of @buf.
+ *
+ * Writes to /sys/kernel/slab/<cache>/move are interpreted as follows:
+ *
+ *  echo "N1" > move	: Move all objects (from all nodes) to node 1.
+ *  echo "N0 N1" > move	: Move all objects from node 0 to node 1.
+ *
+ * 'N' may be omitted.
+ */
+static ssize_t move_store(struct kmem_cache *s, const char *buf, size_t length)
+{
+	long arg0 = -1;
+	long arg1 = -1;
+	int ret;
+
+	ret = parse_move_store_input(buf, length, &arg0, &arg1);
+	if (ret < 0)
+		return -EINVAL;
+
+	if (is_valid_node(arg0) && is_valid_node(arg1))
+		(void)kmem_cache_move(s, arg0, arg1);
+	else if (is_valid_node(arg0))
+		(void)__move_all_objects_to(s, arg0);
+
+	/* FIXME: What should we be returning here?
+	 */
+	return length;
+}
+SLAB_ATTR(move);
+#endif	/* CONFIG_SMO_NODE */
+
 #ifdef CONFIG_NUMA
 static ssize_t remote_node_defrag_ratio_show(struct kmem_cache *s, char *buf)
 {
@@ -5716,6 +5962,9 @@ static struct attribute *slab_attrs[] = {
 	&reclaim_account_attr.attr,
 	&destroy_by_rcu_attr.attr,
 	&shrink_attr.attr,
+#ifdef CONFIG_SMO_NODE
+	&move_attr.attr,
+#endif
 	&slabs_cpu_partial_attr.attr,
 #ifdef CONFIG_SLUB_DEBUG
 	&total_objects_attr.attr,

From patchwork Fri Mar 8 04:14:26 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10844173
From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton
Cc: "Tobin C. Harding", Christopher Lameter, Pekka Enberg,
    Matthew Wilcox, Tycho Andersen, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [RFC 15/15] slub: Enable balancing slab objects across nodes
Date: Fri, 8 Mar 2019 15:14:26 +1100
Message-Id: <20190308041426.16654-16-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>
References: <20190308041426.16654-1-tobin@kernel.org>

We have just implemented Slab Movable Objects (SMO). On NUMA systems
slabs can become unbalanced, i.e. many objects on one node while other
nodes have few objects. Using SMO we can balance the objects across all
the nodes.

The algorithm used is as follows:

 1. Move all objects to node 0 (this has the effect of defragmenting
    the cache).
 2. Calculate the desired number of slabs for each node (this is done
    using the approximation nr_slabs / nr_nodes).
 3. Loop over the nodes, moving the desired number of slabs from node 0
    to each node.

The feature is conditionally built in with CONFIG_SMO_NODE because we
need the full list (we enable SLUB_DEBUG to get this). A future version
may separate the full list out of SLUB_DEBUG.

Expose this functionality to userspace via a sysfs entry:

 /sys/kernel/slab/<cache>/balance

A write of '1' to this file triggers a balance; no other value is
accepted.
This feature relies on SMO being enabled for the cache; this is done,
after the isolate/migrate functions have been defined, with a call to:

 kmem_cache_setup_mobility(s, isolate, migrate)

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 mm/slub.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index ac9b8f592e10..65cf305a70c3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4584,6 +4584,104 @@ static unsigned long __move_all_objects_to(struct kmem_cache *s, int node)
 	return left;
 }
 
+/*
+ * __move_n_slabs() - Attempt to move 'num' slabs to target_node,
+ * Return: The number of slabs moved or error code.
+ */
+static long __move_n_slabs(struct kmem_cache *s, int node, int target_node,
+			   long num)
+{
+	struct kmem_cache_node *n = get_node(s, node);
+	LIST_HEAD(move_list);
+	struct page *page, *page2;
+	unsigned long flags;
+	void **scratch;
+	long done = 0;
+
+	if (node == target_node)
+		return -EINVAL;
+
+	scratch = alloc_scratch(s);
+	if (!scratch)
+		return -ENOMEM;
+
+	spin_lock_irqsave(&n->list_lock, flags);
+	list_for_each_entry_safe(page, page2, &n->full, lru) {
+		if (!slab_trylock(page))
+			/* Busy slab. Get out of the way */
+			continue;
+
+		list_move(&page->lru, &move_list);
+		page->frozen = 1;
+		slab_unlock(page);
+
+		if (++done >= num)
+			break;
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+
+	list_for_each_entry(page, &move_list, lru) {
+		if (page->inuse)
+			__move(page, scratch, target_node);
+	}
+	kfree(scratch);
+
+	/* Inspect results and dispose of pages */
+	spin_lock_irqsave(&n->list_lock, flags);
+	list_for_each_entry_safe(page, page2, &move_list, lru) {
+		list_del(&page->lru);
+		slab_lock(page);
+		page->frozen = 0;
+
+		if (page->inuse) {
+			/*
+			 * This is best effort only, if slab still has
+			 * objects just put it back on the partial list.
+			 */
+			n->nr_partial++;
+			list_add_tail(&page->lru, &n->partial);
+			slab_unlock(page);
+		} else {
+			slab_unlock(page);
+			discard_slab(s, page);
+		}
+	}
+	spin_unlock_irqrestore(&n->list_lock, flags);
+
+	return done;
+}
+
+/*
+ * __balance_nodes_partial() - Balance partial objects.
+ * @s: The cache we are working on.
+ *
+ * Attempt to balance the objects that are in partial slabs evenly
+ * across all nodes.
+ */
+static void __balance_nodes_partial(struct kmem_cache *s)
+{
+	struct kmem_cache_node *n = get_node(s, 0);
+	unsigned long desired_nr_slabs_per_node;
+	unsigned long nr_slabs;
+	int nr_nodes = 0;
+	int nid;
+
+	(void)__move_all_objects_to(s, 0);
+
+	for_each_node_state(nid, N_NORMAL_MEMORY)
+		nr_nodes++;
+
+	nr_slabs = atomic_long_read(&n->nr_slabs);
+	desired_nr_slabs_per_node = nr_slabs / nr_nodes;
+
+	for_each_node_state(nid, N_NORMAL_MEMORY) {
+		if (nid == 0)
+			continue;
+
+		__move_n_slabs(s, 0, nid, desired_nr_slabs_per_node);
+	}
+}
 #endif
 
 /**
@@ -5836,6 +5934,22 @@ static ssize_t move_store(struct kmem_cache *s, const char *buf, size_t length)
 	return length;
 }
 SLAB_ATTR(move);
+
+static ssize_t balance_show(struct kmem_cache *s, char *buf)
+{
+	return 0;
+}
+
+static ssize_t balance_store(struct kmem_cache *s,
+			     const char *buf, size_t length)
+{
+	if (buf[0] == '1')
+		__balance_nodes_partial(s);
+	else
+		return -EINVAL;
+	return length;
+}
+SLAB_ATTR(balance);
 #endif	/* CONFIG_SMO_NODE */
 
 #ifdef CONFIG_NUMA
@@ -5964,6 +6078,7 @@ static struct attribute *slab_attrs[] = {
 	&shrink_attr.attr,
 #ifdef CONFIG_SMO_NODE
 	&move_attr.attr,
+	&balance_attr.attr,
 #endif
 	&slabs_cpu_partial_attr.attr,
 #ifdef CONFIG_SLUB_DEBUG