From patchwork Wed Aug 28 09:35:14 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13780969
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, ryan.roberts@arm.com,
    ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v5 1/3] mm: Define obj_cgroup_get() if CONFIG_MEMCG is not defined.
Date: Wed, 28 Aug 2024 02:35:14 -0700
Message-Id: <20240828093516.30228-2-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240828093516.30228-1-kanchana.p.sridhar@intel.com>
References: <20240828093516.30228-1-kanchana.p.sridhar@intel.com>

This resolves an issue with obj_cgroup_get() not being defined if
CONFIG_MEMCG is not defined. Before this patch, we would see build errors
if obj_cgroup_get() was called from code that is agnostic of CONFIG_MEMCG.
The zswap_store() changes for mTHP in subsequent commits will require the
use of obj_cgroup_get() in zswap code that falls into this category.

Signed-off-by: Kanchana P Sridhar
---
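For illustration only (not part of this patch): with the no-op stub in
place, CONFIG_MEMCG-agnostic code can take and drop an objcg reference
unconditionally. The helper below is a hypothetical sketch; only
obj_cgroup_get() and obj_cgroup_put() are real kernel APIs here.

/*
 * Illustrative sketch only, not part of this patch: a hypothetical
 * CONFIG_MEMCG-agnostic helper. With the new stub, the
 * obj_cgroup_get()/obj_cgroup_put() pair compiles and behaves correctly
 * whether or not CONFIG_MEMCG is enabled.
 */
#include <linux/memcontrol.h>

static void example_hold_objcg(struct obj_cgroup *objcg)
{
        if (objcg)
                obj_cgroup_get(objcg);  /* no-op when !CONFIG_MEMCG */

        /* ... use objcg (which may be NULL) ... */

        if (objcg)
                obj_cgroup_put(objcg);  /* already stubbed for !CONFIG_MEMCG */
}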
 include/linux/memcontrol.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index fe05fdb92779..f693d254ab2a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1281,6 +1281,10 @@ struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
         return NULL;
 }
 
+static inline void obj_cgroup_get(struct obj_cgroup *objcg)
+{
+}
+
 static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 {
 }


From patchwork Wed Aug 28 09:35:15 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13780970
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, ryan.roberts@arm.com,
    ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v5 2/3] mm: zswap: zswap_store() extended to handle mTHP folios.
Date: Wed, 28 Aug 2024 02:35:15 -0700
Message-Id: <20240828093516.30228-3-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240828093516.30228-1-kanchana.p.sridhar@intel.com>
References: <20240828093516.30228-1-kanchana.p.sridhar@intel.com>
zswap_store() will now process and store mTHP and PMD-size THP folios.

This change reuses and adapts the functionality in Ryan Roberts' RFC
patch [1]:

  "[RFC,v1] mm: zswap: Store large folios without splitting"

  [1] https://lore.kernel.org/linux-mm/20231019110543.3284654-1-ryan.roberts@arm.com/T/#u

This patch provides a sequential implementation of storing an mTHP in
zswap_store() by iterating through each page in the folio to compress
and store it in the zswap zpool. Towards this goal, zswap_compress() is
modified to take a page instead of a folio as input.

Each page's swap offset is stored as a separate zswap entry. If an error
is encountered while storing any page of the mTHP, all previously stored
pages/entries are invalidated. Thus, an mTHP is either entirely stored in
zswap, or entirely not stored in zswap.

This forms the basis for batching pages during the zswap store of large
folios: batches of up to, say, 8 pages of an mTHP can be compressed in
parallel in hardware, with the Intel In-Memory Analytics Accelerator
(Intel IAA).

This patch also addresses some of the RFC comments from the discussion in
[1], and makes a minor edit to the comments for "struct zswap_entry" to
delete the comment related to "value", since same-filled page handling
has been removed from zswap.

Co-developed-by: Ryan Roberts
Signed-off-by: Kanchana P Sridhar
---
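Before the diff, a simplified, self-contained user-space model of the
all-or-nothing policy described above (this is not the kernel code; the
8-page folio size, the boolean array standing in for the xarray, and all
names below are illustrative only): every subpage is stored at its own
swap offset, and a failure at any index unwinds every offset of the folio.

/*
 * User-space model, for illustration only. An 8-page folio and a boolean
 * array standing in for the zswap xarray are assumed; all names here are
 * made up and do not exist in the kernel.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 8                      /* e.g. a 32K mTHP of 4K pages */

static bool stored[NR_PAGES];           /* stands in for the xarray entries */

static bool store_page(long i, bool inject_failure)
{
        if (inject_failure && i == NR_PAGES / 2)  /* simulate a compress/alloc error */
                return false;
        stored[i] = true;
        return true;
}

/* Erase every offset belonging to the folio. */
static void delete_stored_offsets(long nr_pages)
{
        for (long i = 0; i < nr_pages; i++)
                stored[i] = false;
}

/* All pages of the folio are stored, or none at all. */
static bool store_folio(long nr_pages, bool inject_failure)
{
        for (long i = 0; i < nr_pages; i++) {
                if (!store_page(i, inject_failure)) {
                        delete_stored_offsets(nr_pages);
                        return false;   /* caller falls back to regular swapout */
                }
        }
        return true;
}

int main(void)
{
        printf("clean store:   %s\n", store_folio(NR_PAGES, false) ? "stored" : "rejected");
        printf("failing store: %s\n", store_folio(NR_PAGES, true) ? "stored" : "rejected");
        return 0;
}

In the actual patch, the per-page step corresponds to zswap_store_page()
and the unwind to zswap_delete_stored_offsets(), as shown in the diff
below.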
 mm/zswap.c | 231 +++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 170 insertions(+), 61 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 449914ea9919..d6f012ca67d8 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -190,7 +190,6 @@ static struct shrinker *zswap_shrinker;
  * section for context.
  * pool - the zswap_pool the entry's data is in
  * handle - zpool allocation handle that stores the compressed page data
- * value - value of the same-value filled pages which have same content
  * objcg - the obj_cgroup that the compressed memory is charged to
  * lru - handle to the pool's lru used to evict pages.
  */
@@ -876,7 +875,7 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
         return 0;
 }
 
-static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+static bool zswap_compress(struct page *page, struct zswap_entry *entry)
 {
         struct crypto_acomp_ctx *acomp_ctx;
         struct scatterlist input, output;
@@ -894,7 +893,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
 
         dst = acomp_ctx->buffer;
         sg_init_table(&input, 1);
-        sg_set_folio(&input, folio, PAGE_SIZE, 0);
+        sg_set_page(&input, page, PAGE_SIZE, 0);
 
         /*
          * We need PAGE_SIZE * 2 here since there maybe over-compression case,
@@ -1404,35 +1403,82 @@ static void shrink_worker(struct work_struct *w)
 /*********************************
 * main API
 **********************************/
-bool zswap_store(struct folio *folio)
+
+/*
+ * Returns true if the entry was successfully
+ * stored in the xarray, and false otherwise.
+ */
+static bool zswap_store_entry(struct xarray *tree,
+                              struct zswap_entry *entry)
 {
-        swp_entry_t swp = folio->swap;
-        pgoff_t offset = swp_offset(swp);
-        struct xarray *tree = swap_zswap_tree(swp);
-        struct zswap_entry *entry, *old;
-        struct obj_cgroup *objcg = NULL;
-        struct mem_cgroup *memcg = NULL;
+        struct zswap_entry *old;
+        pgoff_t offset = swp_offset(entry->swpentry);
 
-        VM_WARN_ON_ONCE(!folio_test_locked(folio));
-        VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
+        old = xa_store(tree, offset, entry, GFP_KERNEL);
 
-        /* Large folios aren't supported */
-        if (folio_test_large(folio))
+        if (xa_is_err(old)) {
+                int err = xa_err(old);
+
+                WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
+                zswap_reject_alloc_fail++;
                 return false;
+        }
 
-        if (!zswap_enabled)
-                goto check_old;
+        /*
+         * We may have had an existing entry that became stale when
+         * the folio was redirtied and now the new version is being
+         * swapped out. Get rid of the old.
+         */
+        if (old)
+                zswap_entry_free(old);
 
-        /* Check cgroup limits */
-        objcg = get_obj_cgroup_from_folio(folio);
-        if (objcg && !obj_cgroup_may_zswap(objcg)) {
-                memcg = get_mem_cgroup_from_objcg(objcg);
-                if (shrink_memcg(memcg)) {
-                        mem_cgroup_put(memcg);
-                        goto reject;
-                }
-                mem_cgroup_put(memcg);
+        return true;
+}
+
+/*
+ * If the zswap store fails or zswap is disabled, we must invalidate the
+ * possibly stale entries which were previously stored at the offsets
+ * corresponding to each page of the folio. Otherwise, writeback could
+ * overwrite the new data in the swapfile.
+ *
+ * This is called after the store of the i-th offset in a large folio has
+ * failed. All zswap entries in the folio must be deleted. This helps make
+ * sure that a swapped-out mTHP is either entirely stored in zswap, or
+ * entirely not stored in zswap.
+ *
+ * This is also called if zswap_store() is invoked, but zswap is not enabled.
+ * All offsets for the folio are deleted from zswap in this case.
+ */
+static void zswap_delete_stored_offsets(struct xarray *tree,
+                                        pgoff_t offset,
+                                        long nr_pages)
+{
+        struct zswap_entry *entry;
+        long i;
+
+        for (i = 0; i < nr_pages; ++i) {
+                entry = xa_erase(tree, offset + i);
+                if (entry)
+                        zswap_entry_free(entry);
         }
+}
+
+/*
+ * Stores the page at specified "index" in a folio.
+ */
+static bool zswap_store_page(struct folio *folio, long index,
+                             struct obj_cgroup *objcg,
+                             struct zswap_pool *pool)
+{
+        swp_entry_t swp = folio->swap;
+        int type = swp_type(swp);
+        pgoff_t offset = swp_offset(swp) + index;
+        struct page *page = folio_page(folio, index);
+        struct xarray *tree = swap_zswap_tree(swp);
+        struct zswap_entry *entry;
+
+        if (objcg)
+                obj_cgroup_get(objcg);
 
         if (zswap_check_limits())
                 goto reject;
@@ -1445,42 +1491,20 @@ bool zswap_store(struct folio *folio)
         }
 
         /* if entry is successfully added, it keeps the reference */
-        entry->pool = zswap_pool_current_get();
-        if (!entry->pool)
+        if (!zswap_pool_get(pool))
                 goto freepage;
 
-        if (objcg) {
-                memcg = get_mem_cgroup_from_objcg(objcg);
-                if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
-                        mem_cgroup_put(memcg);
-                        goto put_pool;
-                }
-                mem_cgroup_put(memcg);
-        }
+        entry->pool = pool;
 
-        if (!zswap_compress(folio, entry))
+        if (!zswap_compress(page, entry))
                 goto put_pool;
 
-        entry->swpentry = swp;
+        entry->swpentry = swp_entry(type, offset);
         entry->objcg = objcg;
         entry->referenced = true;
 
-        old = xa_store(tree, offset, entry, GFP_KERNEL);
-        if (xa_is_err(old)) {
-                int err = xa_err(old);
-
-                WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
-                zswap_reject_alloc_fail++;
+        if (!zswap_store_entry(tree, entry))
                 goto store_failed;
-        }
-
-        /*
-         * We may have had an existing entry that became stale when
-         * the folio was redirtied and now the new version is being
-         * swapped out. Get rid of the old.
-         */
-        if (old)
-                zswap_entry_free(old);
 
         if (objcg) {
                 obj_cgroup_charge_zswap(objcg, entry->length);
@@ -1511,23 +1535,108 @@ bool zswap_store(struct folio *folio)
 store_failed:
         zpool_free(entry->pool->zpool, entry->handle);
 put_pool:
-        zswap_pool_put(entry->pool);
+        zswap_pool_put(pool);
 freepage:
         zswap_entry_cache_free(entry);
 reject:
         obj_cgroup_put(objcg);
         if (zswap_pool_reached_full)
                 queue_work(shrink_wq, &zswap_shrink_work);
-check_old:
+
+        return false;
+}
+
+/*
+ * Modified to store mTHP folios. Each page in the mTHP will be compressed
+ * and stored sequentially.
+ */
+bool zswap_store(struct folio *folio)
+{
+        long nr_pages = folio_nr_pages(folio);
+        swp_entry_t swp = folio->swap;
+        pgoff_t offset = swp_offset(swp);
+        struct xarray *tree = swap_zswap_tree(swp);
+        struct obj_cgroup *objcg = NULL;
+        struct mem_cgroup *memcg = NULL;
+        struct zswap_pool *pool;
+        bool ret = false;
+        long index;
+
+        VM_WARN_ON_ONCE(!folio_test_locked(folio));
+        VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
+
+        if (!zswap_enabled)
+                goto reject;
+
         /*
-         * If the zswap store fails or zswap is disabled, we must invalidate the
-         * possibly stale entry which was previously stored at this offset.
-         * Otherwise, writeback could overwrite the new data in the swapfile.
+         * Check cgroup limits:
+         *
+         * The cgroup zswap limit check is done once at the beginning of an
+         * mTHP store, and not within zswap_store_page() for each page
+         * in the mTHP. We do however check the zswap pool limits at the
+         * start of zswap_store_page(). What this means is, the cgroup
+         * could go over the limits by at most (HPAGE_PMD_NR - 1) pages.
+         * However, the per-store-page zswap pool limits check should
+         * hopefully trigger the cgroup aware and zswap LRU aware global
+         * reclaim implemented in the shrinker. If this assumption holds,
+         * the cgroup exceeding the zswap limits could potentially be
+         * resolved before the next zswap_store, and if it is not, the next
+         * zswap_store would fail the cgroup zswap limit check at the start.
         */
-        entry = xa_erase(tree, offset);
-        if (entry)
-                zswap_entry_free(entry);
-        return false;
+        objcg = get_obj_cgroup_from_folio(folio);
+        if (objcg && !obj_cgroup_may_zswap(objcg)) {
+                memcg = get_mem_cgroup_from_objcg(objcg);
+                if (shrink_memcg(memcg)) {
+                        mem_cgroup_put(memcg);
+                        goto put_objcg;
+                }
+                mem_cgroup_put(memcg);
+        }
+
+        if (zswap_check_limits())
+                goto put_objcg;
+
+        pool = zswap_pool_current_get();
+        if (!pool)
+                goto put_objcg;
+
+        if (objcg) {
+                memcg = get_mem_cgroup_from_objcg(objcg);
+                if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
+                        mem_cgroup_put(memcg);
+                        goto put_pool;
+                }
+                mem_cgroup_put(memcg);
+        }
+
+        /*
+         * Store each page of the folio as a separate entry. If we fail to store
+         * a page, unwind by removing all the previous pages we stored.
+         */
+        for (index = 0; index < nr_pages; ++index) {
+                if (!zswap_store_page(folio, index, objcg, pool))
+                        goto put_pool;
+        }
+
+        ret = true;
+
+put_pool:
+        zswap_pool_put(pool);
+put_objcg:
+        obj_cgroup_put(objcg);
+        if (zswap_pool_reached_full)
+                queue_work(shrink_wq, &zswap_shrink_work);
+reject:
+        /*
+         * If the zswap store fails or zswap is disabled, we must invalidate
+         * the possibly stale entries which were previously stored at the
+         * offsets corresponding to each page of the folio. Otherwise,
+         * writeback could overwrite the new data in the swapfile.
+         */
+        if (!ret)
+                zswap_delete_stored_offsets(tree, offset, nr_pages);
+
+        return ret;
 }
 
 bool zswap_load(struct folio *folio)


From patchwork Wed Aug 28 09:35:16 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13780971
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, ryan.roberts@arm.com,
    ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v5 3/3] mm: swap: Count successful mTHP ZSWAP stores in sysfs mTHP zswpout stats.
Date: Wed, 28 Aug 2024 02:35:16 -0700
Message-Id: <20240828093516.30228-4-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240828093516.30228-1-kanchana.p.sridhar@intel.com>
References: <20240828093516.30228-1-kanchana.p.sridhar@intel.com>

Add a new MTHP_STAT_ZSWPOUT entry to the sysfs mTHP stats so that
per-order mTHP folio ZSWAP stores can be accounted.

If zswap_store() successfully swaps out an mTHP, it will be counted under
the per-order sysfs "zswpout" stats:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/zswpout

Other block dev/fs mTHP swap-out events will be counted under the existing
sysfs "swpout" stats:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/swpout

Based on changes made in commit 61e751c01466ffef5dc72cb64349454a691c6bfe
("mm: cleanup count_mthp_stat() definition"), this patch also moves the
call to count_mthp_stat() in count_swpout_vm_event() outside the
"#ifdef CONFIG_TRANSPARENT_HUGEPAGE" block.

Signed-off-by: Kanchana P Sridhar
---
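For illustration only (not part of this patch), a minimal user-space
reader for the new counter; the hugepages-64kB directory is just an
example and assumes 64K mTHP is enabled on the running kernel.

/*
 * Illustrative user-space reader for the new per-order "zswpout" counter.
 * The hugepages-64kB path is an example; substitute whichever mTHP sizes
 * are enabled on the system. Not part of this patch.
 */
#include <stdio.h>

int main(void)
{
        const char *path =
                "/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/zswpout";
        unsigned long long zswpout;
        FILE *f = fopen(path, "r");

        if (!f || fscanf(f, "%llu", &zswpout) != 1) {
                perror(path);
                return 1;
        }
        fclose(f);
        printf("64kB mTHP folios stored by zswap: %llu\n", zswpout);
        return 0;
}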
 include/linux/huge_mm.h | 1 +
 mm/huge_memory.c        | 3 +++
 mm/page_io.c            | 3 ++-
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4902e2f7e896..8b8045d4a351 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -118,6 +118,7 @@ enum mthp_stat_item {
         MTHP_STAT_ANON_FAULT_ALLOC,
         MTHP_STAT_ANON_FAULT_FALLBACK,
         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+        MTHP_STAT_ZSWPOUT,
         MTHP_STAT_SWPOUT,
         MTHP_STAT_SWPOUT_FALLBACK,
         MTHP_STAT_SHMEM_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a81eab98d6b8..45b26c8b3d8a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -587,6 +587,7 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
 #ifdef CONFIG_SHMEM
@@ -605,6 +606,7 @@ static struct attribute *anon_stats_attrs[] = {
         &anon_fault_fallback_attr.attr,
         &anon_fault_fallback_charge_attr.attr,
 #ifndef CONFIG_SHMEM
+        &zswpout_attr.attr,
         &swpout_attr.attr,
         &swpout_fallback_attr.attr,
 #endif
@@ -637,6 +639,7 @@ static struct attribute_group file_stats_attr_grp = {
 
 static struct attribute *any_stats_attrs[] = {
 #ifdef CONFIG_SHMEM
+        &zswpout_attr.attr,
         &swpout_attr.attr,
         &swpout_fallback_attr.attr,
 #endif
diff --git a/mm/page_io.c b/mm/page_io.c
index b6f1519d63b0..26106e745d73 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -289,6 +289,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
                 swap_zeromap_folio_clear(folio);
         }
         if (zswap_store(folio)) {
+                count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT);
                 folio_unlock(folio);
                 return 0;
         }
@@ -308,8 +309,8 @@ static inline void count_swpout_vm_event(struct folio *folio)
                 count_memcg_folio_events(folio, THP_SWPOUT, 1);
                 count_vm_event(THP_SWPOUT);
         }
-        count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT);
 #endif
+        count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT);
         count_vm_events(PSWPOUT, folio_nr_pages(folio));
 }